00:00:00.001 Started by upstream project "autotest-nightly-lts" build number 2443 00:00:00.001 originally caused by: 00:00:00.001 Started by upstream project "nightly-trigger" build number 3704 00:00:00.001 originally caused by: 00:00:00.001 Started by timer 00:00:00.011 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/ubuntu24-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy 00:00:00.012 The recommended git tool is: git 00:00:00.012 using credential 00000000-0000-0000-0000-000000000002 00:00:00.015 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/ubuntu24-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.027 Fetching changes from the remote Git repository 00:00:00.035 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.050 Using shallow fetch with depth 1 00:00:00.050 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.050 > git --version # timeout=10 00:00:00.063 > git --version # 'git version 2.39.2' 00:00:00.063 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.080 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.080 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:02.960 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:02.972 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:02.984 Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD) 00:00:02.984 > git config core.sparsecheckout # timeout=10 00:00:02.994 > git read-tree -mu HEAD # timeout=10 00:00:03.010 > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5 00:00:03.032 Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag" 00:00:03.032 > git rev-list --no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10 00:00:03.259 [Pipeline] Start of Pipeline 00:00:03.273 [Pipeline] library 00:00:03.275 Loading library shm_lib@master 00:00:03.275 Library shm_lib@master is cached. Copying from home. 00:00:03.291 [Pipeline] node 00:00:03.304 Running on VM-host-SM9 in /var/jenkins/workspace/ubuntu24-vg-autotest 00:00:03.305 [Pipeline] { 00:00:03.315 [Pipeline] catchError 00:00:03.316 [Pipeline] { 00:00:03.326 [Pipeline] wrap 00:00:03.334 [Pipeline] { 00:00:03.339 [Pipeline] stage 00:00:03.341 [Pipeline] { (Prologue) 00:00:03.358 [Pipeline] echo 00:00:03.359 Node: VM-host-SM9 00:00:03.365 [Pipeline] cleanWs 00:00:03.373 [WS-CLEANUP] Deleting project workspace... 00:00:03.373 [WS-CLEANUP] Deferred wipeout is used... 
00:00:03.378 [WS-CLEANUP] done 00:00:03.583 [Pipeline] setCustomBuildProperty 00:00:03.697 [Pipeline] httpRequest 00:00:04.115 [Pipeline] echo 00:00:04.116 Sorcerer 10.211.164.101 is alive 00:00:04.124 [Pipeline] retry 00:00:04.125 [Pipeline] { 00:00:04.135 [Pipeline] httpRequest 00:00:04.139 HttpMethod: GET 00:00:04.139 URL: http://10.211.164.101/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:04.140 Sending request to url: http://10.211.164.101/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:04.142 Response Code: HTTP/1.1 200 OK 00:00:04.142 Success: Status code 200 is in the accepted range: 200,404 00:00:04.143 Saving response body to /var/jenkins/workspace/ubuntu24-vg-autotest/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:04.287 [Pipeline] } 00:00:04.302 [Pipeline] // retry 00:00:04.308 [Pipeline] sh 00:00:04.591 + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:04.606 [Pipeline] httpRequest 00:00:05.206 [Pipeline] echo 00:00:05.207 Sorcerer 10.211.164.101 is alive 00:00:05.214 [Pipeline] retry 00:00:05.215 [Pipeline] { 00:00:05.227 [Pipeline] httpRequest 00:00:05.231 HttpMethod: GET 00:00:05.232 URL: http://10.211.164.101/packages/spdk_c13c99a5eba3bff912124706e0ae1d70defef44d.tar.gz 00:00:05.232 Sending request to url: http://10.211.164.101/packages/spdk_c13c99a5eba3bff912124706e0ae1d70defef44d.tar.gz 00:00:05.233 Response Code: HTTP/1.1 200 OK 00:00:05.234 Success: Status code 200 is in the accepted range: 200,404 00:00:05.234 Saving response body to /var/jenkins/workspace/ubuntu24-vg-autotest/spdk_c13c99a5eba3bff912124706e0ae1d70defef44d.tar.gz 00:00:20.571 [Pipeline] } 00:00:20.589 [Pipeline] // retry 00:00:20.596 [Pipeline] sh 00:00:20.875 + tar --no-same-owner -xf spdk_c13c99a5eba3bff912124706e0ae1d70defef44d.tar.gz 00:00:23.424 [Pipeline] sh 00:00:23.706 + git -C spdk log --oneline -n5 00:00:23.707 c13c99a5e test: Various fixes for Fedora40 00:00:23.707 726a04d70 test/nvmf: adjust timeout for bigger nvmes 00:00:23.707 61c96acfb dpdk: Point dpdk submodule at a latest fix from spdk-23.11 00:00:23.707 7db6dcdb8 nvme/fio_plugin: update the way ruhs descriptors are fetched 00:00:23.707 ff6f5c41e nvme/fio_plugin: trim add support for multiple ranges 00:00:23.727 [Pipeline] writeFile 00:00:23.743 [Pipeline] sh 00:00:24.026 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh 00:00:24.039 [Pipeline] sh 00:00:24.322 + cat autorun-spdk.conf 00:00:24.322 SPDK_TEST_UNITTEST=1 00:00:24.322 SPDK_RUN_FUNCTIONAL_TEST=1 00:00:24.322 SPDK_TEST_NVME=1 00:00:24.322 SPDK_TEST_BLOCKDEV=1 00:00:24.322 SPDK_RUN_ASAN=1 00:00:24.322 SPDK_RUN_UBSAN=1 00:00:24.322 SPDK_TEST_RAID5=1 00:00:24.322 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:00:24.332 RUN_NIGHTLY=1 00:00:24.334 [Pipeline] } 00:00:24.349 [Pipeline] // stage 00:00:24.365 [Pipeline] stage 00:00:24.367 [Pipeline] { (Run VM) 00:00:24.380 [Pipeline] sh 00:00:24.663 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh 00:00:24.663 + echo 'Start stage prepare_nvme.sh' 00:00:24.663 Start stage prepare_nvme.sh 00:00:24.663 + [[ -n 1 ]] 00:00:24.663 + disk_prefix=ex1 00:00:24.663 + [[ -n /var/jenkins/workspace/ubuntu24-vg-autotest ]] 00:00:24.663 + [[ -e /var/jenkins/workspace/ubuntu24-vg-autotest/autorun-spdk.conf ]] 00:00:24.663 + source /var/jenkins/workspace/ubuntu24-vg-autotest/autorun-spdk.conf 00:00:24.663 ++ SPDK_TEST_UNITTEST=1 00:00:24.663 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:00:24.663 ++ SPDK_TEST_NVME=1 00:00:24.663 ++ SPDK_TEST_BLOCKDEV=1 00:00:24.663 ++ 
SPDK_RUN_ASAN=1 00:00:24.663 ++ SPDK_RUN_UBSAN=1 00:00:24.663 ++ SPDK_TEST_RAID5=1 00:00:24.663 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:00:24.663 ++ RUN_NIGHTLY=1 00:00:24.663 + cd /var/jenkins/workspace/ubuntu24-vg-autotest 00:00:24.663 + nvme_files=() 00:00:24.663 + declare -A nvme_files 00:00:24.663 + backend_dir=/var/lib/libvirt/images/backends 00:00:24.663 + nvme_files['nvme.img']=5G 00:00:24.663 + nvme_files['nvme-cmb.img']=5G 00:00:24.663 + nvme_files['nvme-multi0.img']=4G 00:00:24.663 + nvme_files['nvme-multi1.img']=4G 00:00:24.663 + nvme_files['nvme-multi2.img']=4G 00:00:24.663 + nvme_files['nvme-openstack.img']=8G 00:00:24.663 + nvme_files['nvme-zns.img']=5G 00:00:24.663 + (( SPDK_TEST_NVME_PMR == 1 )) 00:00:24.663 + (( SPDK_TEST_FTL == 1 )) 00:00:24.663 + (( SPDK_TEST_NVME_FDP == 1 )) 00:00:24.663 + [[ ! -d /var/lib/libvirt/images/backends ]] 00:00:24.663 + for nvme in "${!nvme_files[@]}" 00:00:24.663 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-multi2.img -s 4G 00:00:24.663 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc 00:00:24.663 + for nvme in "${!nvme_files[@]}" 00:00:24.663 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-cmb.img -s 5G 00:00:24.663 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc 00:00:24.663 + for nvme in "${!nvme_files[@]}" 00:00:24.663 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-openstack.img -s 8G 00:00:24.663 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc 00:00:24.663 + for nvme in "${!nvme_files[@]}" 00:00:24.663 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-zns.img -s 5G 00:00:24.663 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc 00:00:24.663 + for nvme in "${!nvme_files[@]}" 00:00:24.663 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-multi1.img -s 4G 00:00:24.663 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc 00:00:24.663 + for nvme in "${!nvme_files[@]}" 00:00:24.663 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-multi0.img -s 4G 00:00:24.921 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc 00:00:24.921 + for nvme in "${!nvme_files[@]}" 00:00:24.921 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme.img -s 5G 00:00:24.921 Formatting '/var/lib/libvirt/images/backends/ex1-nvme.img', fmt=raw size=5368709120 preallocation=falloc 00:00:24.921 ++ sudo grep -rl ex1-nvme.img /etc/libvirt/qemu 00:00:24.921 + echo 'End stage prepare_nvme.sh' 00:00:24.921 End stage prepare_nvme.sh 00:00:24.931 [Pipeline] sh 00:00:25.209 + DISTRO=ubuntu2404 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh 00:00:25.209 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex1-nvme.img -H -a -v -f ubuntu2404 00:00:25.468 00:00:25.468 DIR=/var/jenkins/workspace/ubuntu24-vg-autotest/spdk/scripts/vagrant 
00:00:25.468 SPDK_DIR=/var/jenkins/workspace/ubuntu24-vg-autotest/spdk 00:00:25.468 VAGRANT_TARGET=/var/jenkins/workspace/ubuntu24-vg-autotest 00:00:25.468 HELP=0 00:00:25.468 DRY_RUN=0 00:00:25.468 NVME_FILE=/var/lib/libvirt/images/backends/ex1-nvme.img, 00:00:25.468 NVME_DISKS_TYPE=nvme, 00:00:25.468 NVME_AUTO_CREATE=0 00:00:25.468 NVME_DISKS_NAMESPACES=, 00:00:25.468 NVME_CMB=, 00:00:25.468 NVME_PMR=, 00:00:25.468 NVME_ZNS=, 00:00:25.468 NVME_MS=, 00:00:25.468 NVME_FDP=, 00:00:25.468 SPDK_VAGRANT_DISTRO=ubuntu2404 00:00:25.468 SPDK_VAGRANT_VMCPU=10 00:00:25.468 SPDK_VAGRANT_VMRAM=12288 00:00:25.468 SPDK_VAGRANT_PROVIDER=libvirt 00:00:25.468 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911 00:00:25.468 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 00:00:25.468 SPDK_OPENSTACK_NETWORK=0 00:00:25.468 VAGRANT_PACKAGE_BOX=0 00:00:25.468 VAGRANTFILE=/var/jenkins/workspace/ubuntu24-vg-autotest/spdk/scripts/vagrant/Vagrantfile 00:00:25.468 FORCE_DISTRO=true 00:00:25.468 VAGRANT_BOX_VERSION= 00:00:25.468 EXTRA_VAGRANTFILES= 00:00:25.468 NIC_MODEL=e1000 00:00:25.468 00:00:25.468 mkdir: created directory '/var/jenkins/workspace/ubuntu24-vg-autotest/ubuntu2404-libvirt' 00:00:25.468 /var/jenkins/workspace/ubuntu24-vg-autotest/ubuntu2404-libvirt /var/jenkins/workspace/ubuntu24-vg-autotest 00:00:28.754 Bringing machine 'default' up with 'libvirt' provider... 00:00:28.754 ==> default: Creating image (snapshot of base box volume). 00:00:29.015 ==> default: Creating domain with the following settings... 00:00:29.015 ==> default: -- Name: ubuntu2404-24.04-1720510786-2314_default_1733520169_ff6d853ccde366694ae9 00:00:29.015 ==> default: -- Domain type: kvm 00:00:29.015 ==> default: -- Cpus: 10 00:00:29.015 ==> default: -- Feature: acpi 00:00:29.015 ==> default: -- Feature: apic 00:00:29.015 ==> default: -- Feature: pae 00:00:29.015 ==> default: -- Memory: 12288M 00:00:29.015 ==> default: -- Memory Backing: hugepages: 00:00:29.015 ==> default: -- Management MAC: 00:00:29.015 ==> default: -- Loader: 00:00:29.015 ==> default: -- Nvram: 00:00:29.015 ==> default: -- Base box: spdk/ubuntu2404 00:00:29.015 ==> default: -- Storage pool: default 00:00:29.015 ==> default: -- Image: /var/lib/libvirt/images/ubuntu2404-24.04-1720510786-2314_default_1733520169_ff6d853ccde366694ae9.img (20G) 00:00:29.015 ==> default: -- Volume Cache: default 00:00:29.015 ==> default: -- Kernel: 00:00:29.015 ==> default: -- Initrd: 00:00:29.015 ==> default: -- Graphics Type: vnc 00:00:29.015 ==> default: -- Graphics Port: -1 00:00:29.015 ==> default: -- Graphics IP: 127.0.0.1 00:00:29.015 ==> default: -- Graphics Password: Not defined 00:00:29.015 ==> default: -- Video Type: cirrus 00:00:29.015 ==> default: -- Video VRAM: 9216 00:00:29.015 ==> default: -- Sound Type: 00:00:29.015 ==> default: -- Keymap: en-us 00:00:29.015 ==> default: -- TPM Path: 00:00:29.015 ==> default: -- INPUT: type=mouse, bus=ps2 00:00:29.015 ==> default: -- Command line args: 00:00:29.015 ==> default: -> value=-device, 00:00:29.015 ==> default: -> value=nvme,id=nvme-0,serial=12340, 00:00:29.015 ==> default: -> value=-drive, 00:00:29.015 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex1-nvme.img,if=none,id=nvme-0-drive0, 00:00:29.015 ==> default: -> value=-device, 00:00:29.015 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:00:29.015 ==> default: Creating shared folders metadata... 00:00:29.015 ==> default: Starting domain. 
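For reference, the backing-file formatting and NVMe wiring logged above reduce to a short standalone QEMU invocation. This is a minimal sketch, not the job's create_nvme_img.sh/Vagrantfile path: the /tmp image location, the 2048M of RAM, and the q35/KVM machine settings are illustrative assumptions, while the -drive/-device arguments mirror the ones in the logged command line.

    # Create a raw 5 GiB backing file; qemu-img prints the same "Formatting ..." line seen above.
    # (/tmp path is hypothetical; the job keeps these under /var/lib/libvirt/images/backends.)
    qemu-img create -f raw -o preallocation=falloc /tmp/ex-nvme.img 5G
    # Attach it as namespace 1 of an emulated NVMe controller, matching the logged args.
    # (-machine/-m values are assumptions; only the NVMe options below come from the log.)
    qemu-system-x86_64 -machine q35,accel=kvm -m 2048 -display none \
        -drive format=raw,file=/tmp/ex-nvme.img,if=none,id=nvme-0-drive0 \
        -device nvme,id=nvme-0,serial=12340 \
        -device nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,logical_block_size=4096,physical_block_size=4096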
00:00:30.397 ==> default: Waiting for domain to get an IP address... 00:00:40.373 ==> default: Waiting for SSH to become available... 00:00:41.309 ==> default: Configuring and enabling network interfaces... 00:00:46.583 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/ubuntu24-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk 00:00:51.855 ==> default: Mounting SSHFS shared folder... 00:00:52.424 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/ubuntu24-vg-autotest/ubuntu2404-libvirt/output => /home/vagrant/spdk_repo/output 00:00:52.424 ==> default: Checking Mount.. 00:00:53.362 ==> default: Folder Successfully Mounted! 00:00:53.362 ==> default: Running provisioner: file... 00:00:53.630 default: ~/.gitconfig => .gitconfig 00:00:53.889 00:00:53.889 SUCCESS! 00:00:53.889 00:00:53.889 cd to /var/jenkins/workspace/ubuntu24-vg-autotest/ubuntu2404-libvirt and type "vagrant ssh" to use. 00:00:53.889 Use vagrant "suspend" and vagrant "resume" to stop and start. 00:00:53.889 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/ubuntu24-vg-autotest/ubuntu2404-libvirt" to destroy all trace of vm. 00:00:53.889 00:00:53.898 [Pipeline] } 00:00:53.915 [Pipeline] // stage 00:00:53.925 [Pipeline] dir 00:00:53.926 Running in /var/jenkins/workspace/ubuntu24-vg-autotest/ubuntu2404-libvirt 00:00:53.928 [Pipeline] { 00:00:53.941 [Pipeline] catchError 00:00:53.942 [Pipeline] { 00:00:53.956 [Pipeline] sh 00:00:54.239 + vagrant ssh-config --host vagrant 00:00:54.239 + sed -ne /^Host/,$p 00:00:54.239 + tee ssh_conf 00:00:57.532 Host vagrant 00:00:57.532 HostName 192.168.121.79 00:00:57.532 User vagrant 00:00:57.532 Port 22 00:00:57.532 UserKnownHostsFile /dev/null 00:00:57.532 StrictHostKeyChecking no 00:00:57.532 PasswordAuthentication no 00:00:57.532 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-ubuntu2404/24.04-1720510786-2314/libvirt/ubuntu2404 00:00:57.532 IdentitiesOnly yes 00:00:57.532 LogLevel FATAL 00:00:57.532 ForwardAgent yes 00:00:57.532 ForwardX11 yes 00:00:57.532 00:00:57.546 [Pipeline] withEnv 00:00:57.549 [Pipeline] { 00:00:57.561 [Pipeline] sh 00:00:57.840 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash 00:00:57.840 source /etc/os-release 00:00:57.840 [[ -e /image.version ]] && img=$(< /image.version) 00:00:57.840 # Minimal, systemd-like check. 00:00:57.840 if [[ -e /.dockerenv ]]; then 00:00:57.840 # Clear garbage from the node's name: 00:00:57.840 # agt-er_autotest_547-896 -> autotest_547-896 00:00:57.840 # $HOSTNAME is the actual container id 00:00:57.840 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_} 00:00:57.840 if grep -q "/etc/hostname" /proc/self/mountinfo; then 00:00:57.840 # We can assume this is a mount from a host where container is running, 00:00:57.840 # so fetch its hostname to easily identify the target swarm worker. 
00:00:57.840 container="$(< /etc/hostname) ($agent)" 00:00:57.840 else 00:00:57.840 # Fallback 00:00:57.840 container=$agent 00:00:57.840 fi 00:00:57.840 fi 00:00:57.840 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}" 00:00:57.840 00:00:58.114 [Pipeline] } 00:00:58.129 [Pipeline] // withEnv 00:00:58.137 [Pipeline] setCustomBuildProperty 00:00:58.150 [Pipeline] stage 00:00:58.152 [Pipeline] { (Tests) 00:00:58.168 [Pipeline] sh 00:00:58.449 + scp -F ssh_conf -r /var/jenkins/workspace/ubuntu24-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./ 00:00:58.721 [Pipeline] sh 00:00:59.001 + scp -F ssh_conf -r /var/jenkins/workspace/ubuntu24-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./ 00:00:59.273 [Pipeline] timeout 00:00:59.274 Timeout set to expire in 1 hr 30 min 00:00:59.275 [Pipeline] { 00:00:59.290 [Pipeline] sh 00:00:59.570 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard 00:01:00.139 HEAD is now at c13c99a5e test: Various fixes for Fedora40 00:01:00.152 [Pipeline] sh 00:01:00.433 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo 00:01:00.709 [Pipeline] sh 00:01:00.990 + scp -F ssh_conf -r /var/jenkins/workspace/ubuntu24-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo 00:01:01.266 [Pipeline] sh 00:01:01.547 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant JOB_BASE_NAME=ubuntu24-vg-autotest ./autoruner.sh spdk_repo 00:01:01.806 ++ readlink -f spdk_repo 00:01:01.806 + DIR_ROOT=/home/vagrant/spdk_repo 00:01:01.806 + [[ -n /home/vagrant/spdk_repo ]] 00:01:01.806 + DIR_SPDK=/home/vagrant/spdk_repo/spdk 00:01:01.806 + DIR_OUTPUT=/home/vagrant/spdk_repo/output 00:01:01.806 + [[ -d /home/vagrant/spdk_repo/spdk ]] 00:01:01.806 + [[ ! 
-d /home/vagrant/spdk_repo/output ]] 00:01:01.806 + [[ -d /home/vagrant/spdk_repo/output ]] 00:01:01.806 + [[ ubuntu24-vg-autotest == pkgdep-* ]] 00:01:01.806 + cd /home/vagrant/spdk_repo 00:01:01.806 + source /etc/os-release 00:01:01.806 ++ PRETTY_NAME='Ubuntu 24.04 LTS' 00:01:01.806 ++ NAME=Ubuntu 00:01:01.806 ++ VERSION_ID=24.04 00:01:01.806 ++ VERSION='24.04 LTS (Noble Numbat)' 00:01:01.806 ++ VERSION_CODENAME=noble 00:01:01.806 ++ ID=ubuntu 00:01:01.806 ++ ID_LIKE=debian 00:01:01.806 ++ HOME_URL=https://www.ubuntu.com/ 00:01:01.806 ++ SUPPORT_URL=https://help.ubuntu.com/ 00:01:01.806 ++ BUG_REPORT_URL=https://bugs.launchpad.net/ubuntu/ 00:01:01.806 ++ PRIVACY_POLICY_URL=https://www.ubuntu.com/legal/terms-and-policies/privacy-policy 00:01:01.806 ++ UBUNTU_CODENAME=noble 00:01:01.806 ++ LOGO=ubuntu-logo 00:01:01.806 + uname -a 00:01:01.806 Linux ubuntu2404-cloud-1720510786-2314 6.8.0-36-generic #36-Ubuntu SMP PREEMPT_DYNAMIC Mon Jun 10 10:49:14 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux 00:01:01.806 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:01:01.806 Hugepages 00:01:01.807 node hugesize free / total 00:01:01.807 node0 1048576kB 0 / 0 00:01:01.807 node0 2048kB 0 / 0 00:01:01.807 00:01:01.807 Type BDF Vendor Device NUMA Driver Device Block devices 00:01:02.085 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:01:02.085 NVMe 0000:00:06.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:01:02.085 + rm -f /tmp/spdk-ld-path 00:01:02.085 + source autorun-spdk.conf 00:01:02.085 ++ SPDK_TEST_UNITTEST=1 00:01:02.085 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:02.085 ++ SPDK_TEST_NVME=1 00:01:02.085 ++ SPDK_TEST_BLOCKDEV=1 00:01:02.085 ++ SPDK_RUN_ASAN=1 00:01:02.085 ++ SPDK_RUN_UBSAN=1 00:01:02.085 ++ SPDK_TEST_RAID5=1 00:01:02.085 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:02.085 ++ RUN_NIGHTLY=1 00:01:02.085 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:01:02.085 + [[ -n '' ]] 00:01:02.085 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk 00:01:02.085 + for M in /var/spdk/build-*-manifest.txt 00:01:02.085 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:01:02.085 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/ 00:01:02.085 + for M in /var/spdk/build-*-manifest.txt 00:01:02.085 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:01:02.085 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/ 00:01:02.085 ++ uname 00:01:02.085 + [[ Linux == \L\i\n\u\x ]] 00:01:02.085 + sudo dmesg -T 00:01:02.085 + sudo dmesg --clear 00:01:02.085 + dmesg_pid=2352 00:01:02.085 + [[ Ubuntu == FreeBSD ]] 00:01:02.085 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:02.085 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:02.085 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:01:02.085 + sudo dmesg -Tw 00:01:02.085 + [[ -x /usr/src/fio-static/fio ]] 00:01:02.085 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]] 00:01:02.085 + [[ ! 
-v VFIO_QEMU_BIN ]] 00:01:02.085 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:01:02.085 + vfios=(/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64) 00:01:02.085 + export 'VFIO_QEMU_BIN=/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64' 00:01:02.085 + VFIO_QEMU_BIN='/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64' 00:01:02.085 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:01:02.085 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:01:02.085 Test configuration: 00:01:02.085 SPDK_TEST_UNITTEST=1 00:01:02.085 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:02.085 SPDK_TEST_NVME=1 00:01:02.085 SPDK_TEST_BLOCKDEV=1 00:01:02.085 SPDK_RUN_ASAN=1 00:01:02.085 SPDK_RUN_UBSAN=1 00:01:02.085 SPDK_TEST_RAID5=1 00:01:02.085 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:02.085 RUN_NIGHTLY=1 21:23:21 -- common/autotest_common.sh@1689 -- $ [[ n == y ]] 00:01:02.085 21:23:21 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:01:02.085 21:23:21 -- scripts/common.sh@433 -- $ [[ -e /bin/wpdk_common.sh ]] 00:01:02.085 21:23:21 -- scripts/common.sh@441 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:01:02.085 21:23:21 -- scripts/common.sh@442 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:01:02.085 21:23:21 -- paths/export.sh@2 -- $ PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:01:02.085 21:23:21 -- paths/export.sh@3 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:01:02.085 21:23:21 -- paths/export.sh@4 -- $ PATH=/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:01:02.085 21:23:21 -- paths/export.sh@5 -- $ PATH=/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:01:02.085 21:23:21 -- paths/export.sh@6 -- $ export PATH 00:01:02.085 21:23:21 -- paths/export.sh@7 -- $ echo /opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:01:02.085 21:23:21 -- common/autobuild_common.sh@439 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:01:02.085 21:23:21 -- common/autobuild_common.sh@440 -- $ date +%s 00:01:02.085 21:23:21 -- 
common/autobuild_common.sh@440 -- $ mktemp -dt spdk_1733520201.XXXXXX 00:01:02.085 21:23:21 -- common/autobuild_common.sh@440 -- $ SPDK_WORKSPACE=/tmp/spdk_1733520201.AqztD5 00:01:02.085 21:23:21 -- common/autobuild_common.sh@442 -- $ [[ -n '' ]] 00:01:02.085 21:23:21 -- common/autobuild_common.sh@446 -- $ '[' -n '' ']' 00:01:02.085 21:23:21 -- common/autobuild_common.sh@449 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/' 00:01:02.085 21:23:21 -- common/autobuild_common.sh@453 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:01:02.085 21:23:21 -- common/autobuild_common.sh@455 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:01:02.085 21:23:21 -- common/autobuild_common.sh@456 -- $ get_config_params 00:01:02.085 21:23:21 -- common/autotest_common.sh@397 -- $ xtrace_disable 00:01:02.085 21:23:21 -- common/autotest_common.sh@10 -- $ set +x 00:01:02.382 21:23:22 -- common/autobuild_common.sh@456 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-raid5f' 00:01:02.382 21:23:22 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:01:02.382 21:23:22 -- spdk/autobuild.sh@12 -- $ umask 022 00:01:02.382 21:23:22 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk 00:01:02.382 21:23:22 -- spdk/autobuild.sh@16 -- $ date -u 00:01:02.382 Fri Dec 6 21:23:22 UTC 2024 00:01:02.382 21:23:22 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:01:02.382 LTS-67-gc13c99a5e 00:01:02.382 21:23:22 -- spdk/autobuild.sh@19 -- $ '[' 1 -eq 1 ']' 00:01:02.382 21:23:22 -- spdk/autobuild.sh@20 -- $ run_test asan echo 'using asan' 00:01:02.382 21:23:22 -- common/autotest_common.sh@1087 -- $ '[' 3 -le 1 ']' 00:01:02.382 21:23:22 -- common/autotest_common.sh@1093 -- $ xtrace_disable 00:01:02.382 21:23:22 -- common/autotest_common.sh@10 -- $ set +x 00:01:02.382 ************************************ 00:01:02.382 START TEST asan 00:01:02.382 ************************************ 00:01:02.382 using asan 00:01:02.382 21:23:22 -- common/autotest_common.sh@1114 -- $ echo 'using asan' 00:01:02.382 00:01:02.382 real 0m0.000s 00:01:02.382 user 0m0.000s 00:01:02.382 sys 0m0.000s 00:01:02.382 21:23:22 -- common/autotest_common.sh@1115 -- $ xtrace_disable 00:01:02.382 ************************************ 00:01:02.382 END TEST asan 00:01:02.382 ************************************ 00:01:02.382 21:23:22 -- common/autotest_common.sh@10 -- $ set +x 00:01:02.382 21:23:22 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:01:02.382 21:23:22 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:01:02.382 21:23:22 -- common/autotest_common.sh@1087 -- $ '[' 3 -le 1 ']' 00:01:02.382 21:23:22 -- common/autotest_common.sh@1093 -- $ xtrace_disable 00:01:02.382 21:23:22 -- common/autotest_common.sh@10 -- $ set +x 00:01:02.382 ************************************ 00:01:02.382 START TEST ubsan 00:01:02.382 ************************************ 00:01:02.382 using ubsan 00:01:02.382 21:23:22 -- common/autotest_common.sh@1114 -- $ echo 'using ubsan' 00:01:02.382 00:01:02.382 real 0m0.000s 00:01:02.382 user 0m0.000s 00:01:02.382 sys 0m0.000s 00:01:02.382 21:23:22 -- common/autotest_common.sh@1115 -- $ xtrace_disable 00:01:02.382 21:23:22 -- common/autotest_common.sh@10 -- $ set +x 
00:01:02.382 ************************************ 00:01:02.382 END TEST ubsan 00:01:02.382 ************************************ 00:01:02.382 21:23:22 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:01:02.382 21:23:22 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:01:02.382 21:23:22 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:01:02.382 21:23:22 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:01:02.382 21:23:22 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:01:02.382 21:23:22 -- spdk/autobuild.sh@57 -- $ [[ 1 -eq 1 ]] 00:01:02.382 21:23:22 -- spdk/autobuild.sh@58 -- $ unittest_build 00:01:02.382 21:23:22 -- common/autobuild_common.sh@416 -- $ run_test unittest_build _unittest_build 00:01:02.382 21:23:22 -- common/autotest_common.sh@1087 -- $ '[' 2 -le 1 ']' 00:01:02.382 21:23:22 -- common/autotest_common.sh@1093 -- $ xtrace_disable 00:01:02.382 21:23:22 -- common/autotest_common.sh@10 -- $ set +x 00:01:02.382 ************************************ 00:01:02.382 START TEST unittest_build 00:01:02.382 ************************************ 00:01:02.382 21:23:22 -- common/autotest_common.sh@1114 -- $ _unittest_build 00:01:02.382 21:23:22 -- common/autobuild_common.sh@407 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-raid5f --without-shared 00:01:02.382 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:01:02.382 Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build 00:01:02.965 Using 'verbs' RDMA provider 00:01:18.787 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/isa-l/spdk-isal.log)...done. 00:01:31.007 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/isa-l-crypto/spdk-isal-crypto.log)...done. 00:01:31.007 Creating mk/config.mk...done. 00:01:31.007 Creating mk/cc.flags.mk...done. 00:01:31.007 Type 'make' to build. 00:01:31.007 21:23:50 -- common/autobuild_common.sh@408 -- $ make -j10 00:01:31.007 make[1]: Nothing to be done for 'all'. 
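The configure and make steps above boil down to roughly the following when reproduced by hand. A sketch only: the clone URL and submodule step are the usual upstream workflow, assumed here rather than shown in this log; the configure flags are copied verbatim from the invocation logged above.

    # Fetch SPDK with its bundled DPDK/ISA-L submodules (assumed standard upstream flow;
    # this job instead rsyncs a prepared checkout into /home/vagrant/spdk_repo/spdk).
    git clone https://github.com/spdk/spdk.git && cd spdk
    git submodule update --init
    # Configure exactly as this job did (flags taken from the log above), then build.
    ./configure --enable-debug --enable-werror --with-rdma --with-idxd \
        --with-fio=/usr/src/fio --with-iscsi-initiator --enable-ubsan --enable-asan \
        --enable-coverage --with-ublk --with-raid5f --without-shared
    make -j10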
00:01:45.896 The Meson build system 00:01:45.896 Version: 1.4.1 00:01:45.896 Source dir: /home/vagrant/spdk_repo/spdk/dpdk 00:01:45.896 Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp 00:01:45.896 Build type: native build 00:01:45.896 Program cat found: YES (/usr/bin/cat) 00:01:45.896 Project name: DPDK 00:01:45.896 Project version: 23.11.0 00:01:45.896 C compiler for the host machine: cc (gcc 13.2.0 "cc (Ubuntu 13.2.0-23ubuntu4) 13.2.0") 00:01:45.896 C linker for the host machine: cc ld.bfd 2.42 00:01:45.896 Host machine cpu family: x86_64 00:01:45.896 Host machine cpu: x86_64 00:01:45.896 Message: ## Building in Developer Mode ## 00:01:45.896 Program pkg-config found: YES (/usr/bin/pkg-config) 00:01:45.896 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh) 00:01:45.896 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:01:45.896 Program python3 found: YES (/var/spdk/dependencies/pip/bin/python3) 00:01:45.896 Program cat found: YES (/usr/bin/cat) 00:01:45.896 Compiler for C supports arguments -march=native: YES 00:01:45.896 Checking for size of "void *" : 8 00:01:45.896 Checking for size of "void *" : 8 (cached) 00:01:45.896 Library m found: YES 00:01:45.896 Library numa found: YES 00:01:45.896 Has header "numaif.h" : YES 00:01:45.896 Library fdt found: NO 00:01:45.896 Library execinfo found: NO 00:01:45.896 Has header "execinfo.h" : YES 00:01:45.896 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.1 00:01:45.896 Run-time dependency libarchive found: NO (tried pkgconfig) 00:01:45.896 Run-time dependency libbsd found: NO (tried pkgconfig) 00:01:45.896 Run-time dependency jansson found: NO (tried pkgconfig) 00:01:45.896 Run-time dependency openssl found: YES 3.0.13 00:01:45.896 Run-time dependency libpcap found: NO (tried pkgconfig) 00:01:45.896 Library pcap found: NO 00:01:45.896 Compiler for C supports arguments -Wcast-qual: YES 00:01:45.896 Compiler for C supports arguments -Wdeprecated: YES 00:01:45.896 Compiler for C supports arguments -Wformat: YES 00:01:45.896 Compiler for C supports arguments -Wformat-nonliteral: YES 00:01:45.896 Compiler for C supports arguments -Wformat-security: YES 00:01:45.896 Compiler for C supports arguments -Wmissing-declarations: YES 00:01:45.897 Compiler for C supports arguments -Wmissing-prototypes: YES 00:01:45.897 Compiler for C supports arguments -Wnested-externs: YES 00:01:45.897 Compiler for C supports arguments -Wold-style-definition: YES 00:01:45.897 Compiler for C supports arguments -Wpointer-arith: YES 00:01:45.897 Compiler for C supports arguments -Wsign-compare: YES 00:01:45.897 Compiler for C supports arguments -Wstrict-prototypes: YES 00:01:45.897 Compiler for C supports arguments -Wundef: YES 00:01:45.897 Compiler for C supports arguments -Wwrite-strings: YES 00:01:45.897 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:01:45.897 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:01:45.897 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:01:45.897 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:01:45.897 Program objdump found: YES (/usr/bin/objdump) 00:01:45.897 Compiler for C supports arguments -mavx512f: YES 00:01:45.897 Checking if "AVX512 checking" compiles: YES 00:01:45.897 Fetching value of define "__SSE4_2__" : 1 00:01:45.897 Fetching value of define "__AES__" : 1 00:01:45.897 Fetching value of define "__AVX__" : 1 00:01:45.897 
Fetching value of define "__AVX2__" : 1 00:01:45.897 Fetching value of define "__AVX512BW__" : (undefined) 00:01:45.897 Fetching value of define "__AVX512CD__" : (undefined) 00:01:45.897 Fetching value of define "__AVX512DQ__" : (undefined) 00:01:45.897 Fetching value of define "__AVX512F__" : (undefined) 00:01:45.897 Fetching value of define "__AVX512VL__" : (undefined) 00:01:45.897 Fetching value of define "__PCLMUL__" : 1 00:01:45.897 Fetching value of define "__RDRND__" : 1 00:01:45.897 Fetching value of define "__RDSEED__" : 1 00:01:45.897 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:01:45.897 Fetching value of define "__znver1__" : (undefined) 00:01:45.897 Fetching value of define "__znver2__" : (undefined) 00:01:45.897 Fetching value of define "__znver3__" : (undefined) 00:01:45.897 Fetching value of define "__znver4__" : (undefined) 00:01:45.897 Library asan found: YES 00:01:45.897 Compiler for C supports arguments -Wno-format-truncation: YES 00:01:45.897 Message: lib/log: Defining dependency "log" 00:01:45.897 Message: lib/kvargs: Defining dependency "kvargs" 00:01:45.897 Message: lib/telemetry: Defining dependency "telemetry" 00:01:45.897 Library rt found: YES 00:01:45.897 Checking for function "getentropy" : NO 00:01:45.897 Message: lib/eal: Defining dependency "eal" 00:01:45.897 Message: lib/ring: Defining dependency "ring" 00:01:45.897 Message: lib/rcu: Defining dependency "rcu" 00:01:45.897 Message: lib/mempool: Defining dependency "mempool" 00:01:45.897 Message: lib/mbuf: Defining dependency "mbuf" 00:01:45.897 Fetching value of define "__PCLMUL__" : 1 (cached) 00:01:45.897 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:01:45.897 Compiler for C supports arguments -mpclmul: YES 00:01:45.897 Compiler for C supports arguments -maes: YES 00:01:45.897 Compiler for C supports arguments -mavx512f: YES (cached) 00:01:45.897 Compiler for C supports arguments -mavx512bw: YES 00:01:45.897 Compiler for C supports arguments -mavx512dq: YES 00:01:45.897 Compiler for C supports arguments -mavx512vl: YES 00:01:45.897 Compiler for C supports arguments -mvpclmulqdq: YES 00:01:45.897 Compiler for C supports arguments -mavx2: YES 00:01:45.897 Compiler for C supports arguments -mavx: YES 00:01:45.897 Message: lib/net: Defining dependency "net" 00:01:45.897 Message: lib/meter: Defining dependency "meter" 00:01:45.897 Message: lib/ethdev: Defining dependency "ethdev" 00:01:45.897 Message: lib/pci: Defining dependency "pci" 00:01:45.897 Message: lib/cmdline: Defining dependency "cmdline" 00:01:45.897 Message: lib/hash: Defining dependency "hash" 00:01:45.897 Message: lib/timer: Defining dependency "timer" 00:01:45.897 Message: lib/compressdev: Defining dependency "compressdev" 00:01:45.897 Message: lib/cryptodev: Defining dependency "cryptodev" 00:01:45.897 Message: lib/dmadev: Defining dependency "dmadev" 00:01:45.897 Compiler for C supports arguments -Wno-cast-qual: YES 00:01:45.897 Message: lib/power: Defining dependency "power" 00:01:45.897 Message: lib/reorder: Defining dependency "reorder" 00:01:45.897 Message: lib/security: Defining dependency "security" 00:01:45.897 Has header "linux/userfaultfd.h" : YES 00:01:45.897 Has header "linux/vduse.h" : YES 00:01:45.897 Message: lib/vhost: Defining dependency "vhost" 00:01:45.897 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:01:45.897 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:01:45.897 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:01:45.897 Message: 
drivers/mempool/ring: Defining dependency "mempool_ring" 00:01:45.897 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:01:45.897 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:01:45.897 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:01:45.897 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:01:45.897 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:01:45.897 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:01:45.897 Program doxygen found: YES (/usr/bin/doxygen) 00:01:45.897 Configuring doxy-api-html.conf using configuration 00:01:45.897 Configuring doxy-api-man.conf using configuration 00:01:45.897 Program mandb found: YES (/usr/bin/mandb) 00:01:45.897 Program sphinx-build found: NO 00:01:45.897 Configuring rte_build_config.h using configuration 00:01:45.897 Message: 00:01:45.897 ================= 00:01:45.897 Applications Enabled 00:01:45.897 ================= 00:01:45.897 00:01:45.897 apps: 00:01:45.897 00:01:45.897 00:01:45.897 Message: 00:01:45.897 ================= 00:01:45.897 Libraries Enabled 00:01:45.897 ================= 00:01:45.897 00:01:45.897 libs: 00:01:45.897 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:01:45.897 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:01:45.897 cryptodev, dmadev, power, reorder, security, vhost, 00:01:45.897 00:01:45.897 Message: 00:01:45.897 =============== 00:01:45.897 Drivers Enabled 00:01:45.897 =============== 00:01:45.897 00:01:45.897 common: 00:01:45.897 00:01:45.897 bus: 00:01:45.897 pci, vdev, 00:01:45.897 mempool: 00:01:45.897 ring, 00:01:45.897 dma: 00:01:45.897 00:01:45.897 net: 00:01:45.897 00:01:45.897 crypto: 00:01:45.897 00:01:45.897 compress: 00:01:45.897 00:01:45.897 vdpa: 00:01:45.897 00:01:45.897 00:01:45.897 Message: 00:01:45.897 ================= 00:01:45.897 Content Skipped 00:01:45.897 ================= 00:01:45.897 00:01:45.897 apps: 00:01:45.897 dumpcap: explicitly disabled via build config 00:01:45.897 graph: explicitly disabled via build config 00:01:45.897 pdump: explicitly disabled via build config 00:01:45.897 proc-info: explicitly disabled via build config 00:01:45.897 test-acl: explicitly disabled via build config 00:01:45.897 test-bbdev: explicitly disabled via build config 00:01:45.897 test-cmdline: explicitly disabled via build config 00:01:45.897 test-compress-perf: explicitly disabled via build config 00:01:45.897 test-crypto-perf: explicitly disabled via build config 00:01:45.897 test-dma-perf: explicitly disabled via build config 00:01:45.897 test-eventdev: explicitly disabled via build config 00:01:45.897 test-fib: explicitly disabled via build config 00:01:45.897 test-flow-perf: explicitly disabled via build config 00:01:45.897 test-gpudev: explicitly disabled via build config 00:01:45.897 test-mldev: explicitly disabled via build config 00:01:45.897 test-pipeline: explicitly disabled via build config 00:01:45.897 test-pmd: explicitly disabled via build config 00:01:45.897 test-regex: explicitly disabled via build config 00:01:45.897 test-sad: explicitly disabled via build config 00:01:45.897 test-security-perf: explicitly disabled via build config 00:01:45.897 00:01:45.897 libs: 00:01:45.897 metrics: explicitly disabled via build config 00:01:45.897 acl: explicitly disabled via build config 00:01:45.897 bbdev: explicitly disabled via build config 00:01:45.897 bitratestats: explicitly disabled via build config 
00:01:45.897 bpf: explicitly disabled via build config 00:01:45.897 cfgfile: explicitly disabled via build config 00:01:45.897 distributor: explicitly disabled via build config 00:01:45.897 efd: explicitly disabled via build config 00:01:45.897 eventdev: explicitly disabled via build config 00:01:45.897 dispatcher: explicitly disabled via build config 00:01:45.897 gpudev: explicitly disabled via build config 00:01:45.897 gro: explicitly disabled via build config 00:01:45.897 gso: explicitly disabled via build config 00:01:45.897 ip_frag: explicitly disabled via build config 00:01:45.897 jobstats: explicitly disabled via build config 00:01:45.897 latencystats: explicitly disabled via build config 00:01:45.897 lpm: explicitly disabled via build config 00:01:45.897 member: explicitly disabled via build config 00:01:45.897 pcapng: explicitly disabled via build config 00:01:45.897 rawdev: explicitly disabled via build config 00:01:45.897 regexdev: explicitly disabled via build config 00:01:45.897 mldev: explicitly disabled via build config 00:01:45.897 rib: explicitly disabled via build config 00:01:45.897 sched: explicitly disabled via build config 00:01:45.897 stack: explicitly disabled via build config 00:01:45.897 ipsec: explicitly disabled via build config 00:01:45.897 pdcp: explicitly disabled via build config 00:01:45.897 fib: explicitly disabled via build config 00:01:45.897 port: explicitly disabled via build config 00:01:45.897 pdump: explicitly disabled via build config 00:01:45.897 table: explicitly disabled via build config 00:01:45.897 pipeline: explicitly disabled via build config 00:01:45.897 graph: explicitly disabled via build config 00:01:45.897 node: explicitly disabled via build config 00:01:45.897 00:01:45.897 drivers: 00:01:45.897 common/cpt: not in enabled drivers build config 00:01:45.897 common/dpaax: not in enabled drivers build config 00:01:45.897 common/iavf: not in enabled drivers build config 00:01:45.897 common/idpf: not in enabled drivers build config 00:01:45.897 common/mvep: not in enabled drivers build config 00:01:45.897 common/octeontx: not in enabled drivers build config 00:01:45.898 bus/auxiliary: not in enabled drivers build config 00:01:45.898 bus/cdx: not in enabled drivers build config 00:01:45.898 bus/dpaa: not in enabled drivers build config 00:01:45.898 bus/fslmc: not in enabled drivers build config 00:01:45.898 bus/ifpga: not in enabled drivers build config 00:01:45.898 bus/platform: not in enabled drivers build config 00:01:45.898 bus/vmbus: not in enabled drivers build config 00:01:45.898 common/cnxk: not in enabled drivers build config 00:01:45.898 common/mlx5: not in enabled drivers build config 00:01:45.898 common/nfp: not in enabled drivers build config 00:01:45.898 common/qat: not in enabled drivers build config 00:01:45.898 common/sfc_efx: not in enabled drivers build config 00:01:45.898 mempool/bucket: not in enabled drivers build config 00:01:45.898 mempool/cnxk: not in enabled drivers build config 00:01:45.898 mempool/dpaa: not in enabled drivers build config 00:01:45.898 mempool/dpaa2: not in enabled drivers build config 00:01:45.898 mempool/octeontx: not in enabled drivers build config 00:01:45.898 mempool/stack: not in enabled drivers build config 00:01:45.898 dma/cnxk: not in enabled drivers build config 00:01:45.898 dma/dpaa: not in enabled drivers build config 00:01:45.898 dma/dpaa2: not in enabled drivers build config 00:01:45.898 dma/hisilicon: not in enabled drivers build config 00:01:45.898 dma/idxd: not in enabled drivers 
build config 00:01:45.898 dma/ioat: not in enabled drivers build config 00:01:45.898 dma/skeleton: not in enabled drivers build config 00:01:45.898 net/af_packet: not in enabled drivers build config 00:01:45.898 net/af_xdp: not in enabled drivers build config 00:01:45.898 net/ark: not in enabled drivers build config 00:01:45.898 net/atlantic: not in enabled drivers build config 00:01:45.898 net/avp: not in enabled drivers build config 00:01:45.898 net/axgbe: not in enabled drivers build config 00:01:45.898 net/bnx2x: not in enabled drivers build config 00:01:45.898 net/bnxt: not in enabled drivers build config 00:01:45.898 net/bonding: not in enabled drivers build config 00:01:45.898 net/cnxk: not in enabled drivers build config 00:01:45.898 net/cpfl: not in enabled drivers build config 00:01:45.898 net/cxgbe: not in enabled drivers build config 00:01:45.898 net/dpaa: not in enabled drivers build config 00:01:45.898 net/dpaa2: not in enabled drivers build config 00:01:45.898 net/e1000: not in enabled drivers build config 00:01:45.898 net/ena: not in enabled drivers build config 00:01:45.898 net/enetc: not in enabled drivers build config 00:01:45.898 net/enetfec: not in enabled drivers build config 00:01:45.898 net/enic: not in enabled drivers build config 00:01:45.898 net/failsafe: not in enabled drivers build config 00:01:45.898 net/fm10k: not in enabled drivers build config 00:01:45.898 net/gve: not in enabled drivers build config 00:01:45.898 net/hinic: not in enabled drivers build config 00:01:45.898 net/hns3: not in enabled drivers build config 00:01:45.898 net/i40e: not in enabled drivers build config 00:01:45.898 net/iavf: not in enabled drivers build config 00:01:45.898 net/ice: not in enabled drivers build config 00:01:45.898 net/idpf: not in enabled drivers build config 00:01:45.898 net/igc: not in enabled drivers build config 00:01:45.898 net/ionic: not in enabled drivers build config 00:01:45.898 net/ipn3ke: not in enabled drivers build config 00:01:45.898 net/ixgbe: not in enabled drivers build config 00:01:45.898 net/mana: not in enabled drivers build config 00:01:45.898 net/memif: not in enabled drivers build config 00:01:45.898 net/mlx4: not in enabled drivers build config 00:01:45.898 net/mlx5: not in enabled drivers build config 00:01:45.898 net/mvneta: not in enabled drivers build config 00:01:45.898 net/mvpp2: not in enabled drivers build config 00:01:45.898 net/netvsc: not in enabled drivers build config 00:01:45.898 net/nfb: not in enabled drivers build config 00:01:45.898 net/nfp: not in enabled drivers build config 00:01:45.898 net/ngbe: not in enabled drivers build config 00:01:45.898 net/null: not in enabled drivers build config 00:01:45.898 net/octeontx: not in enabled drivers build config 00:01:45.898 net/octeon_ep: not in enabled drivers build config 00:01:45.898 net/pcap: not in enabled drivers build config 00:01:45.898 net/pfe: not in enabled drivers build config 00:01:45.898 net/qede: not in enabled drivers build config 00:01:45.898 net/ring: not in enabled drivers build config 00:01:45.898 net/sfc: not in enabled drivers build config 00:01:45.898 net/softnic: not in enabled drivers build config 00:01:45.898 net/tap: not in enabled drivers build config 00:01:45.898 net/thunderx: not in enabled drivers build config 00:01:45.898 net/txgbe: not in enabled drivers build config 00:01:45.898 net/vdev_netvsc: not in enabled drivers build config 00:01:45.898 net/vhost: not in enabled drivers build config 00:01:45.898 net/virtio: not in enabled drivers build config 
00:01:45.898 net/vmxnet3: not in enabled drivers build config 00:01:45.898 raw/*: missing internal dependency, "rawdev" 00:01:45.898 crypto/armv8: not in enabled drivers build config 00:01:45.898 crypto/bcmfs: not in enabled drivers build config 00:01:45.898 crypto/caam_jr: not in enabled drivers build config 00:01:45.898 crypto/ccp: not in enabled drivers build config 00:01:45.898 crypto/cnxk: not in enabled drivers build config 00:01:45.898 crypto/dpaa_sec: not in enabled drivers build config 00:01:45.898 crypto/dpaa2_sec: not in enabled drivers build config 00:01:45.898 crypto/ipsec_mb: not in enabled drivers build config 00:01:45.898 crypto/mlx5: not in enabled drivers build config 00:01:45.898 crypto/mvsam: not in enabled drivers build config 00:01:45.898 crypto/nitrox: not in enabled drivers build config 00:01:45.898 crypto/null: not in enabled drivers build config 00:01:45.898 crypto/octeontx: not in enabled drivers build config 00:01:45.898 crypto/openssl: not in enabled drivers build config 00:01:45.898 crypto/scheduler: not in enabled drivers build config 00:01:45.898 crypto/uadk: not in enabled drivers build config 00:01:45.898 crypto/virtio: not in enabled drivers build config 00:01:45.898 compress/isal: not in enabled drivers build config 00:01:45.898 compress/mlx5: not in enabled drivers build config 00:01:45.898 compress/octeontx: not in enabled drivers build config 00:01:45.898 compress/zlib: not in enabled drivers build config 00:01:45.898 regex/*: missing internal dependency, "regexdev" 00:01:45.898 ml/*: missing internal dependency, "mldev" 00:01:45.898 vdpa/ifc: not in enabled drivers build config 00:01:45.898 vdpa/mlx5: not in enabled drivers build config 00:01:45.898 vdpa/nfp: not in enabled drivers build config 00:01:45.898 vdpa/sfc: not in enabled drivers build config 00:01:45.898 event/*: missing internal dependency, "eventdev" 00:01:45.898 baseband/*: missing internal dependency, "bbdev" 00:01:45.898 gpu/*: missing internal dependency, "gpudev" 00:01:45.898 00:01:45.898 00:01:45.898 Build targets in project: 85 00:01:45.898 00:01:45.898 DPDK 23.11.0 00:01:45.898 00:01:45.898 User defined options 00:01:45.898 buildtype : debug 00:01:45.898 default_library : static 00:01:45.898 libdir : lib 00:01:45.898 prefix : /home/vagrant/spdk_repo/spdk/dpdk/build 00:01:45.898 b_sanitize : address 00:01:45.898 c_args : -fPIC -Werror -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds 00:01:45.898 c_link_args : 00:01:45.898 cpu_instruction_set: native 00:01:45.898 disable_apps : test-pipeline,test-pmd,test-eventdev,test,test-cmdline,test-bbdev,test-sad,proc-info,graph,test-gpudev,test-crypto-perf,test-dma-perf,test-regex,test-mldev,test-acl,test-flow-perf,dumpcap,test-compress-perf,test-security-perf,test-fib,pdump 00:01:45.898 disable_libs : mldev,jobstats,bpf,rawdev,rib,stack,bbdev,lpm,pipeline,member,port,regexdev,latencystats,table,bitratestats,acl,sched,node,graph,gso,dispatcher,efd,eventdev,pdcp,fib,pcapng,cfgfile,metrics,ip_frag,gro,pdump,gpudev,distributor,ipsec 00:01:45.898 enable_docs : false 00:01:45.898 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:01:45.898 enable_kmods : false 00:01:45.898 tests : false 00:01:45.898 00:01:45.898 Found ninja-1.11.1.git.kitware.jobserver-1 at /var/spdk/dependencies/pip/bin/ninja 00:01:45.898 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/dpdk/build-tmp' 00:01:45.898 [1/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:01:45.898 [2/265] Compiling C object 
lib/librte_log.a.p/log_log_linux.c.o 00:01:45.898 [3/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:01:45.898 [4/265] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:01:45.898 [5/265] Linking static target lib/librte_kvargs.a 00:01:45.898 [6/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:01:45.898 [7/265] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:01:45.898 [8/265] Compiling C object lib/librte_log.a.p/log_log.c.o 00:01:45.898 [9/265] Linking static target lib/librte_log.a 00:01:45.898 [10/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:01:45.898 [11/265] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:01:45.898 [12/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:01:45.898 [13/265] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:01:45.898 [14/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:01:45.898 [15/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:01:45.898 [16/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:01:45.898 [17/265] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:01:45.898 [18/265] Linking static target lib/librte_telemetry.a 00:01:45.898 [19/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:01:45.898 [20/265] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:01:45.898 [21/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:01:45.898 [22/265] Linking target lib/librte_log.so.24.0 00:01:45.898 [23/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:01:45.898 [24/265] Generating symbol file lib/librte_log.so.24.0.p/librte_log.so.24.0.symbols 00:01:45.898 [25/265] Linking target lib/librte_kvargs.so.24.0 00:01:45.898 [26/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:01:45.898 [27/265] Generating symbol file lib/librte_kvargs.so.24.0.p/librte_kvargs.so.24.0.symbols 00:01:45.898 [28/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:01:45.898 [29/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:01:45.899 [30/265] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:01:45.899 [31/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:01:45.899 [32/265] Linking target lib/librte_telemetry.so.24.0 00:01:45.899 [33/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:01:45.899 [34/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:01:45.899 [35/265] Generating symbol file lib/librte_telemetry.so.24.0.p/librte_telemetry.so.24.0.symbols 00:01:45.899 [36/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:01:45.899 [37/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:01:45.899 [38/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:01:45.899 [39/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:01:46.156 [40/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:01:46.156 [41/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:01:46.156 [42/265] 
Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:01:46.156 [43/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:01:46.156 [44/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:01:46.156 [45/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:01:46.156 [46/265] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:01:46.414 [47/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:01:46.414 [48/265] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:01:46.414 [49/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:01:46.414 [50/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:01:46.672 [51/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:01:46.672 [52/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:01:46.672 [53/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:01:46.672 [54/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:01:46.672 [55/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:01:46.931 [56/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:01:46.931 [57/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:01:46.931 [58/265] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:01:46.931 [59/265] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:01:46.931 [60/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:01:46.931 [61/265] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:01:46.931 [62/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:01:47.189 [63/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:01:47.189 [64/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:01:47.189 [65/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:01:47.189 [66/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:01:47.189 [67/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:01:47.448 [68/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:01:47.448 [69/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:01:47.448 [70/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:01:47.448 [71/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:01:47.448 [72/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:01:47.706 [73/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:01:47.706 [74/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:01:47.706 [75/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:01:47.706 [76/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:01:47.706 [77/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:01:47.706 [78/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:01:47.964 [79/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:01:47.964 [80/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:01:47.964 [81/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:01:47.964 [82/265] Compiling C object 
lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:01:48.221 [83/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:01:48.221 [84/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:01:48.221 [85/265] Linking static target lib/librte_eal.a 00:01:48.221 [86/265] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:01:48.221 [87/265] Linking static target lib/librte_ring.a 00:01:48.480 [88/265] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:01:48.480 [89/265] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:01:48.480 [90/265] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:01:48.480 [91/265] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:01:48.480 [92/265] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:01:48.480 [93/265] Linking static target lib/librte_mempool.a 00:01:48.480 [94/265] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:01:48.480 [95/265] Linking static target lib/librte_rcu.a 00:01:48.738 [96/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:01:48.738 [97/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:01:48.738 [98/265] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:01:48.998 [99/265] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:01:48.998 [100/265] Linking static target lib/net/libnet_crc_avx512_lib.a 00:01:48.998 [101/265] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:01:49.257 [102/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:01:49.257 [103/265] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:01:49.257 [104/265] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:01:49.257 [105/265] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:01:49.257 [106/265] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:01:49.257 [107/265] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:01:49.257 [108/265] Linking static target lib/librte_net.a 00:01:49.516 [109/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:01:49.516 [110/265] Linking static target lib/librte_mbuf.a 00:01:49.516 [111/265] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:01:49.516 [112/265] Linking static target lib/librte_meter.a 00:01:49.516 [113/265] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:01:49.774 [114/265] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:01:49.774 [115/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:01:49.774 [116/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:01:49.774 [117/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:01:50.033 [118/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:01:50.033 [119/265] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:01:50.292 [120/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:01:50.292 [121/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:01:50.551 [122/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:01:50.551 [123/265] Compiling C object 
lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:01:50.551 [124/265] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:01:50.551 [125/265] Linking static target lib/librte_pci.a 00:01:50.810 [126/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:01:50.810 [127/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:01:50.810 [128/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:01:50.810 [129/265] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:50.810 [130/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:01:50.810 [131/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:01:50.810 [132/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:01:51.069 [133/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:01:51.069 [134/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:01:51.069 [135/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:01:51.069 [136/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:01:51.069 [137/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:01:51.069 [138/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:01:51.069 [139/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:01:51.069 [140/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:01:51.069 [141/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:01:51.328 [142/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:01:51.328 [143/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:01:51.328 [144/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:01:51.588 [145/265] Linking static target lib/librte_cmdline.a 00:01:51.588 [146/265] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:01:51.588 [147/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:01:51.847 [148/265] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:01:51.847 [149/265] Linking static target lib/librte_timer.a 00:01:51.847 [150/265] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:01:52.106 [151/265] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:01:52.106 [152/265] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:01:52.106 [153/265] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:01:52.106 [154/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:01:52.106 [155/265] Linking static target lib/librte_ethdev.a 00:01:52.364 [156/265] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:01:52.364 [157/265] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:01:52.364 [158/265] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:01:52.364 [159/265] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:01:52.364 [160/265] Linking static target lib/librte_compressdev.a 00:01:52.622 [161/265] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:01:52.622 [162/265] Linking static target lib/librte_hash.a 00:01:52.622 [163/265] Compiling C object 
lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:01:52.622 [164/265] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:01:52.622 [165/265] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:01:52.622 [166/265] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:01:52.881 [167/265] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:01:52.881 [168/265] Linking static target lib/librte_dmadev.a 00:01:52.881 [169/265] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:01:52.881 [170/265] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:52.881 [171/265] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:01:53.140 [172/265] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:01:53.140 [173/265] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:53.399 [174/265] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:01:53.399 [175/265] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:01:53.399 [176/265] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:01:53.399 [177/265] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:01:53.399 [178/265] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:01:53.658 [179/265] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:01:53.658 [180/265] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:01:53.658 [181/265] Linking static target lib/librte_power.a 00:01:53.917 [182/265] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:01:53.917 [183/265] Linking static target lib/librte_cryptodev.a 00:01:54.176 [184/265] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:01:54.176 [185/265] Linking static target lib/librte_reorder.a 00:01:54.176 [186/265] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:01:54.176 [187/265] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:01:54.176 [188/265] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:01:54.176 [189/265] Linking static target lib/librte_security.a 00:01:54.435 [190/265] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:01:54.435 [191/265] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:01:54.435 [192/265] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:01:54.694 [193/265] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:01:54.694 [194/265] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:01:54.694 [195/265] Linking target lib/librte_eal.so.24.0 00:01:54.694 [196/265] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:01:54.952 [197/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:01:54.952 [198/265] Generating symbol file lib/librte_eal.so.24.0.p/librte_eal.so.24.0.symbols 00:01:54.952 [199/265] Linking target lib/librte_ring.so.24.0 00:01:54.952 [200/265] Generating symbol file lib/librte_ring.so.24.0.p/librte_ring.so.24.0.symbols 00:01:55.209 [201/265] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:01:55.209 [202/265] Linking target lib/librte_rcu.so.24.0 00:01:55.209 [203/265] Generating 
lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:55.209 [204/265] Linking target lib/librte_mempool.so.24.0 00:01:55.209 [205/265] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:01:55.209 [206/265] Linking target lib/librte_meter.so.24.0 00:01:55.209 [207/265] Linking target lib/librte_pci.so.24.0 00:01:55.209 [208/265] Generating symbol file lib/librte_rcu.so.24.0.p/librte_rcu.so.24.0.symbols 00:01:55.209 [209/265] Generating symbol file lib/librte_mempool.so.24.0.p/librte_mempool.so.24.0.symbols 00:01:55.209 [210/265] Linking target lib/librte_timer.so.24.0 00:01:55.209 [211/265] Generating symbol file lib/librte_pci.so.24.0.p/librte_pci.so.24.0.symbols 00:01:55.209 [212/265] Linking target lib/librte_mbuf.so.24.0 00:01:55.209 [213/265] Generating symbol file lib/librte_meter.so.24.0.p/librte_meter.so.24.0.symbols 00:01:55.209 [214/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:01:55.466 [215/265] Linking target lib/librte_dmadev.so.24.0 00:01:55.466 [216/265] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:01:55.466 [217/265] Generating symbol file lib/librte_timer.so.24.0.p/librte_timer.so.24.0.symbols 00:01:55.466 [218/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:01:55.466 [219/265] Generating symbol file lib/librte_mbuf.so.24.0.p/librte_mbuf.so.24.0.symbols 00:01:55.466 [220/265] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:01:55.466 [221/265] Generating symbol file lib/librte_dmadev.so.24.0.p/librte_dmadev.so.24.0.symbols 00:01:55.466 [222/265] Linking target lib/librte_net.so.24.0 00:01:55.466 [223/265] Linking target lib/librte_compressdev.so.24.0 00:01:55.466 [224/265] Linking target lib/librte_cryptodev.so.24.0 00:01:55.466 [225/265] Linking target lib/librte_reorder.so.24.0 00:01:55.723 [226/265] Generating symbol file lib/librte_net.so.24.0.p/librte_net.so.24.0.symbols 00:01:55.723 [227/265] Generating symbol file lib/librte_cryptodev.so.24.0.p/librte_cryptodev.so.24.0.symbols 00:01:55.723 [228/265] Linking target lib/librte_cmdline.so.24.0 00:01:55.723 [229/265] Linking target lib/librte_hash.so.24.0 00:01:55.723 [230/265] Linking target lib/librte_security.so.24.0 00:01:55.723 [231/265] Generating symbol file lib/librte_hash.so.24.0.p/librte_hash.so.24.0.symbols 00:01:55.981 [232/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:01:55.981 [233/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:01:55.981 [234/265] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:01:55.981 [235/265] Linking static target drivers/libtmp_rte_bus_vdev.a 00:01:56.238 [236/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:01:56.238 [237/265] Linking static target drivers/libtmp_rte_bus_pci.a 00:01:56.238 [238/265] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:01:56.238 [239/265] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:56.238 [240/265] Compiling C object drivers/librte_bus_vdev.so.24.0.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:56.238 [241/265] Linking static target drivers/librte_bus_vdev.a 00:01:56.238 [242/265] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:01:56.496 [243/265] Compiling C object drivers/librte_bus_pci.so.24.0.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:56.496 [244/265] Compiling C object 
drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:56.496 [245/265] Linking static target drivers/librte_bus_pci.a 00:01:56.496 [246/265] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:01:56.496 [247/265] Linking static target drivers/libtmp_rte_mempool_ring.a 00:01:56.496 [248/265] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:56.496 [249/265] Linking target drivers/librte_bus_vdev.so.24.0 00:01:56.755 [250/265] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:01:56.755 [251/265] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:56.755 [252/265] Compiling C object drivers/librte_mempool_ring.so.24.0.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:56.755 [253/265] Linking static target drivers/librte_mempool_ring.a 00:01:56.755 [254/265] Linking target drivers/librte_mempool_ring.so.24.0 00:01:57.013 [255/265] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:57.013 [256/265] Linking target drivers/librte_bus_pci.so.24.0 00:01:57.580 [257/265] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:57.580 [258/265] Linking target lib/librte_ethdev.so.24.0 00:01:57.580 [259/265] Generating symbol file lib/librte_ethdev.so.24.0.p/librte_ethdev.so.24.0.symbols 00:01:57.843 [260/265] Linking target lib/librte_power.so.24.0 00:01:58.106 [261/265] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:02:01.389 [262/265] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:02:01.389 [263/265] Linking static target lib/librte_vhost.a 00:02:03.297 [264/265] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:02:03.297 [265/265] Linking target lib/librte_vhost.so.24.0 00:02:03.297 INFO: autodetecting backend as ninja 00:02:03.297 INFO: calculating backend command to run: /var/spdk/dependencies/pip/bin/ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10 00:02:04.235 CC lib/ut_mock/mock.o 00:02:04.235 CC lib/ut/ut.o 00:02:04.235 CC lib/log/log.o 00:02:04.235 CC lib/log/log_flags.o 00:02:04.235 CC lib/log/log_deprecated.o 00:02:04.235 LIB libspdk_ut_mock.a 00:02:04.494 LIB libspdk_ut.a 00:02:04.494 LIB libspdk_log.a 00:02:04.494 CXX lib/trace_parser/trace.o 00:02:04.494 CC lib/dma/dma.o 00:02:04.494 CC lib/ioat/ioat.o 00:02:04.494 CC lib/util/base64.o 00:02:04.494 CC lib/util/bit_array.o 00:02:04.494 CC lib/util/cpuset.o 00:02:04.494 CC lib/util/crc16.o 00:02:04.494 CC lib/util/crc32.o 00:02:04.494 CC lib/util/crc32c.o 00:02:04.494 CC lib/vfio_user/host/vfio_user_pci.o 00:02:04.755 CC lib/util/crc32_ieee.o 00:02:04.755 CC lib/util/crc64.o 00:02:04.755 CC lib/util/dif.o 00:02:04.755 LIB libspdk_dma.a 00:02:04.755 CC lib/util/fd.o 00:02:04.755 CC lib/util/file.o 00:02:04.755 CC lib/util/hexlify.o 00:02:04.755 CC lib/util/iov.o 00:02:04.755 CC lib/util/math.o 00:02:05.013 CC lib/util/pipe.o 00:02:05.013 CC lib/util/strerror_tls.o 00:02:05.013 LIB libspdk_ioat.a 00:02:05.013 CC lib/util/string.o 00:02:05.013 CC lib/util/uuid.o 00:02:05.013 CC lib/vfio_user/host/vfio_user.o 00:02:05.013 CC lib/util/fd_group.o 00:02:05.013 CC lib/util/xor.o 00:02:05.013 CC lib/util/zipf.o 00:02:05.270 LIB libspdk_vfio_user.a 00:02:05.528 LIB libspdk_util.a 00:02:05.787 CC lib/conf/conf.o 00:02:05.787 CC lib/env_dpdk/env.o 00:02:05.787 CC lib/env_dpdk/memory.o 00:02:05.787 
CC lib/env_dpdk/pci.o 00:02:05.787 CC lib/env_dpdk/init.o 00:02:05.787 CC lib/rdma/common.o 00:02:05.787 CC lib/json/json_parse.o 00:02:05.787 CC lib/idxd/idxd.o 00:02:05.787 CC lib/vmd/vmd.o 00:02:05.787 LIB libspdk_trace_parser.a 00:02:06.046 CC lib/vmd/led.o 00:02:06.046 LIB libspdk_conf.a 00:02:06.046 CC lib/json/json_util.o 00:02:06.046 CC lib/json/json_write.o 00:02:06.046 CC lib/rdma/rdma_verbs.o 00:02:06.046 CC lib/idxd/idxd_user.o 00:02:06.046 CC lib/env_dpdk/threads.o 00:02:06.304 CC lib/env_dpdk/pci_ioat.o 00:02:06.304 CC lib/env_dpdk/pci_virtio.o 00:02:06.304 LIB libspdk_rdma.a 00:02:06.304 CC lib/env_dpdk/pci_vmd.o 00:02:06.304 CC lib/env_dpdk/pci_idxd.o 00:02:06.304 CC lib/env_dpdk/pci_event.o 00:02:06.304 LIB libspdk_json.a 00:02:06.304 CC lib/idxd/idxd_kernel.o 00:02:06.304 CC lib/env_dpdk/sigbus_handler.o 00:02:06.562 CC lib/env_dpdk/pci_dpdk.o 00:02:06.562 CC lib/env_dpdk/pci_dpdk_2207.o 00:02:06.562 CC lib/env_dpdk/pci_dpdk_2211.o 00:02:06.562 CC lib/jsonrpc/jsonrpc_server.o 00:02:06.562 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:02:06.562 LIB libspdk_idxd.a 00:02:06.562 CC lib/jsonrpc/jsonrpc_client.o 00:02:06.562 LIB libspdk_vmd.a 00:02:06.562 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:02:06.821 LIB libspdk_jsonrpc.a 00:02:07.079 CC lib/rpc/rpc.o 00:02:07.336 LIB libspdk_rpc.a 00:02:07.336 CC lib/trace/trace_flags.o 00:02:07.336 CC lib/trace/trace_rpc.o 00:02:07.336 CC lib/notify/notify.o 00:02:07.336 CC lib/trace/trace.o 00:02:07.336 CC lib/sock/sock.o 00:02:07.336 CC lib/notify/notify_rpc.o 00:02:07.336 CC lib/sock/sock_rpc.o 00:02:07.594 LIB libspdk_notify.a 00:02:07.594 LIB libspdk_env_dpdk.a 00:02:07.594 LIB libspdk_trace.a 00:02:07.852 CC lib/thread/iobuf.o 00:02:07.852 CC lib/thread/thread.o 00:02:07.852 LIB libspdk_sock.a 00:02:08.111 CC lib/nvme/nvme_ctrlr_cmd.o 00:02:08.111 CC lib/nvme/nvme_fabric.o 00:02:08.111 CC lib/nvme/nvme_ctrlr.o 00:02:08.111 CC lib/nvme/nvme_ns_cmd.o 00:02:08.111 CC lib/nvme/nvme_qpair.o 00:02:08.111 CC lib/nvme/nvme_pcie_common.o 00:02:08.111 CC lib/nvme/nvme_pcie.o 00:02:08.111 CC lib/nvme/nvme_ns.o 00:02:08.111 CC lib/nvme/nvme.o 00:02:09.060 CC lib/nvme/nvme_quirks.o 00:02:09.060 CC lib/nvme/nvme_transport.o 00:02:09.060 CC lib/nvme/nvme_discovery.o 00:02:09.060 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:02:09.060 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:02:09.060 CC lib/nvme/nvme_tcp.o 00:02:09.318 CC lib/nvme/nvme_opal.o 00:02:09.318 CC lib/nvme/nvme_io_msg.o 00:02:09.576 CC lib/nvme/nvme_poll_group.o 00:02:09.835 CC lib/nvme/nvme_zns.o 00:02:09.835 CC lib/nvme/nvme_cuse.o 00:02:09.835 LIB libspdk_thread.a 00:02:09.835 CC lib/nvme/nvme_vfio_user.o 00:02:09.835 CC lib/nvme/nvme_rdma.o 00:02:10.093 CC lib/accel/accel.o 00:02:10.093 CC lib/blob/blobstore.o 00:02:10.093 CC lib/blob/request.o 00:02:10.660 CC lib/init/json_config.o 00:02:10.660 CC lib/virtio/virtio.o 00:02:10.660 CC lib/virtio/virtio_vhost_user.o 00:02:10.660 CC lib/virtio/virtio_vfio_user.o 00:02:10.919 CC lib/init/subsystem.o 00:02:10.919 CC lib/virtio/virtio_pci.o 00:02:10.919 CC lib/accel/accel_rpc.o 00:02:10.919 CC lib/accel/accel_sw.o 00:02:10.919 CC lib/blob/zeroes.o 00:02:10.919 CC lib/blob/blob_bs_dev.o 00:02:10.919 CC lib/init/subsystem_rpc.o 00:02:11.177 CC lib/init/rpc.o 00:02:11.177 LIB libspdk_virtio.a 00:02:11.177 LIB libspdk_init.a 00:02:11.436 CC lib/event/reactor.o 00:02:11.436 CC lib/event/app.o 00:02:11.436 CC lib/event/log_rpc.o 00:02:11.436 CC lib/event/app_rpc.o 00:02:11.436 CC lib/event/scheduler_static.o 00:02:11.436 LIB libspdk_accel.a 00:02:11.695 CC 
lib/bdev/bdev.o 00:02:11.695 CC lib/bdev/bdev_zone.o 00:02:11.695 CC lib/bdev/part.o 00:02:11.695 CC lib/bdev/bdev_rpc.o 00:02:11.695 CC lib/bdev/scsi_nvme.o 00:02:11.695 LIB libspdk_nvme.a 00:02:11.953 LIB libspdk_event.a 00:02:14.490 LIB libspdk_blob.a 00:02:14.490 CC lib/lvol/lvol.o 00:02:14.490 CC lib/blobfs/blobfs.o 00:02:14.490 CC lib/blobfs/tree.o 00:02:15.424 LIB libspdk_bdev.a 00:02:15.424 CC lib/scsi/dev.o 00:02:15.424 CC lib/ublk/ublk.o 00:02:15.424 CC lib/nbd/nbd.o 00:02:15.424 CC lib/scsi/scsi.o 00:02:15.424 CC lib/scsi/port.o 00:02:15.424 CC lib/scsi/lun.o 00:02:15.424 CC lib/nvmf/ctrlr.o 00:02:15.424 CC lib/ftl/ftl_core.o 00:02:15.683 CC lib/ftl/ftl_init.o 00:02:15.683 CC lib/nvmf/ctrlr_discovery.o 00:02:15.683 LIB libspdk_blobfs.a 00:02:15.683 CC lib/nvmf/ctrlr_bdev.o 00:02:15.683 LIB libspdk_lvol.a 00:02:15.683 CC lib/scsi/scsi_bdev.o 00:02:15.683 CC lib/scsi/scsi_pr.o 00:02:15.941 CC lib/scsi/scsi_rpc.o 00:02:15.941 CC lib/scsi/task.o 00:02:15.941 CC lib/nbd/nbd_rpc.o 00:02:15.941 CC lib/ftl/ftl_layout.o 00:02:15.941 CC lib/ftl/ftl_debug.o 00:02:16.199 CC lib/ftl/ftl_io.o 00:02:16.199 LIB libspdk_nbd.a 00:02:16.199 CC lib/ftl/ftl_sb.o 00:02:16.199 CC lib/ftl/ftl_l2p.o 00:02:16.199 CC lib/nvmf/subsystem.o 00:02:16.199 CC lib/ublk/ublk_rpc.o 00:02:16.199 CC lib/ftl/ftl_l2p_flat.o 00:02:16.457 CC lib/ftl/ftl_nv_cache.o 00:02:16.457 LIB libspdk_scsi.a 00:02:16.457 CC lib/ftl/ftl_band.o 00:02:16.457 CC lib/ftl/ftl_band_ops.o 00:02:16.457 CC lib/nvmf/nvmf.o 00:02:16.457 CC lib/ftl/ftl_writer.o 00:02:16.457 LIB libspdk_ublk.a 00:02:16.457 CC lib/ftl/ftl_rq.o 00:02:16.457 CC lib/iscsi/conn.o 00:02:16.457 CC lib/nvmf/nvmf_rpc.o 00:02:16.713 CC lib/nvmf/transport.o 00:02:16.713 CC lib/ftl/ftl_reloc.o 00:02:16.713 CC lib/iscsi/init_grp.o 00:02:16.970 CC lib/iscsi/iscsi.o 00:02:17.228 CC lib/nvmf/tcp.o 00:02:17.228 CC lib/nvmf/rdma.o 00:02:17.228 CC lib/iscsi/md5.o 00:02:17.484 CC lib/iscsi/param.o 00:02:17.484 CC lib/iscsi/portal_grp.o 00:02:17.484 CC lib/ftl/ftl_l2p_cache.o 00:02:17.484 CC lib/iscsi/tgt_node.o 00:02:17.760 CC lib/iscsi/iscsi_subsystem.o 00:02:17.760 CC lib/vhost/vhost.o 00:02:17.760 CC lib/vhost/vhost_rpc.o 00:02:18.018 CC lib/iscsi/iscsi_rpc.o 00:02:18.018 CC lib/iscsi/task.o 00:02:18.276 CC lib/vhost/vhost_scsi.o 00:02:18.276 CC lib/ftl/ftl_p2l.o 00:02:18.276 CC lib/ftl/mngt/ftl_mngt.o 00:02:18.276 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:02:18.276 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:02:18.532 CC lib/vhost/vhost_blk.o 00:02:18.532 CC lib/vhost/rte_vhost_user.o 00:02:18.532 CC lib/ftl/mngt/ftl_mngt_startup.o 00:02:18.532 CC lib/ftl/mngt/ftl_mngt_md.o 00:02:18.532 CC lib/ftl/mngt/ftl_mngt_misc.o 00:02:18.532 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:02:18.789 LIB libspdk_iscsi.a 00:02:18.789 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:02:18.789 CC lib/ftl/mngt/ftl_mngt_band.o 00:02:18.789 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:02:18.789 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:02:19.046 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:02:19.046 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:02:19.046 CC lib/ftl/utils/ftl_conf.o 00:02:19.046 CC lib/ftl/utils/ftl_md.o 00:02:19.303 CC lib/ftl/utils/ftl_mempool.o 00:02:19.303 CC lib/ftl/utils/ftl_bitmap.o 00:02:19.303 CC lib/ftl/utils/ftl_property.o 00:02:19.303 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:02:19.303 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:02:19.560 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:02:19.560 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:02:19.560 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:02:19.560 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 
00:02:19.560 CC lib/ftl/upgrade/ftl_sb_v3.o 00:02:19.560 CC lib/ftl/upgrade/ftl_sb_v5.o 00:02:19.560 CC lib/ftl/nvc/ftl_nvc_dev.o 00:02:19.818 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:02:19.818 CC lib/ftl/base/ftl_base_dev.o 00:02:19.818 CC lib/ftl/base/ftl_base_bdev.o 00:02:19.818 CC lib/ftl/ftl_trace.o 00:02:19.818 LIB libspdk_vhost.a 00:02:20.075 LIB libspdk_ftl.a 00:02:20.075 LIB libspdk_nvmf.a 00:02:20.332 CC module/env_dpdk/env_dpdk_rpc.o 00:02:20.332 CC module/scheduler/gscheduler/gscheduler.o 00:02:20.332 CC module/blob/bdev/blob_bdev.o 00:02:20.332 CC module/sock/posix/posix.o 00:02:20.332 CC module/accel/ioat/accel_ioat.o 00:02:20.332 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:02:20.332 CC module/accel/error/accel_error.o 00:02:20.332 CC module/accel/iaa/accel_iaa.o 00:02:20.332 CC module/scheduler/dynamic/scheduler_dynamic.o 00:02:20.332 CC module/accel/dsa/accel_dsa.o 00:02:20.590 LIB libspdk_env_dpdk_rpc.a 00:02:20.590 CC module/accel/error/accel_error_rpc.o 00:02:20.590 LIB libspdk_scheduler_gscheduler.a 00:02:20.590 LIB libspdk_scheduler_dpdk_governor.a 00:02:20.590 CC module/accel/dsa/accel_dsa_rpc.o 00:02:20.590 CC module/accel/ioat/accel_ioat_rpc.o 00:02:20.590 CC module/accel/iaa/accel_iaa_rpc.o 00:02:20.590 LIB libspdk_scheduler_dynamic.a 00:02:20.846 LIB libspdk_accel_error.a 00:02:20.846 LIB libspdk_blob_bdev.a 00:02:20.846 LIB libspdk_accel_dsa.a 00:02:20.846 LIB libspdk_accel_ioat.a 00:02:20.846 LIB libspdk_accel_iaa.a 00:02:20.846 CC module/bdev/gpt/gpt.o 00:02:20.846 CC module/bdev/delay/vbdev_delay.o 00:02:20.846 CC module/bdev/nvme/bdev_nvme.o 00:02:20.846 CC module/bdev/malloc/bdev_malloc.o 00:02:20.846 CC module/bdev/lvol/vbdev_lvol.o 00:02:20.846 CC module/bdev/error/vbdev_error.o 00:02:20.846 CC module/blobfs/bdev/blobfs_bdev.o 00:02:20.846 CC module/bdev/null/bdev_null.o 00:02:20.846 CC module/bdev/passthru/vbdev_passthru.o 00:02:21.103 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:02:21.103 CC module/bdev/gpt/vbdev_gpt.o 00:02:21.361 CC module/bdev/null/bdev_null_rpc.o 00:02:21.361 CC module/bdev/error/vbdev_error_rpc.o 00:02:21.361 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:02:21.361 LIB libspdk_blobfs_bdev.a 00:02:21.361 CC module/bdev/delay/vbdev_delay_rpc.o 00:02:21.361 LIB libspdk_sock_posix.a 00:02:21.361 CC module/bdev/malloc/bdev_malloc_rpc.o 00:02:21.361 CC module/bdev/raid/bdev_raid.o 00:02:21.361 LIB libspdk_bdev_error.a 00:02:21.619 LIB libspdk_bdev_null.a 00:02:21.619 LIB libspdk_bdev_gpt.a 00:02:21.619 LIB libspdk_bdev_passthru.a 00:02:21.619 CC module/bdev/split/vbdev_split.o 00:02:21.619 CC module/bdev/split/vbdev_split_rpc.o 00:02:21.619 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:02:21.619 CC module/bdev/zone_block/vbdev_zone_block.o 00:02:21.619 LIB libspdk_bdev_delay.a 00:02:21.619 LIB libspdk_bdev_malloc.a 00:02:21.620 CC module/bdev/aio/bdev_aio.o 00:02:21.620 CC module/bdev/ftl/bdev_ftl.o 00:02:21.620 CC module/bdev/ftl/bdev_ftl_rpc.o 00:02:21.620 CC module/bdev/iscsi/bdev_iscsi.o 00:02:21.878 CC module/bdev/virtio/bdev_virtio_scsi.o 00:02:21.878 LIB libspdk_bdev_split.a 00:02:21.878 CC module/bdev/virtio/bdev_virtio_blk.o 00:02:21.878 CC module/bdev/virtio/bdev_virtio_rpc.o 00:02:21.878 LIB libspdk_bdev_lvol.a 00:02:21.878 LIB libspdk_bdev_ftl.a 00:02:21.878 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:02:21.878 CC module/bdev/aio/bdev_aio_rpc.o 00:02:22.137 CC module/bdev/nvme/bdev_nvme_rpc.o 00:02:22.137 CC module/bdev/nvme/nvme_rpc.o 00:02:22.137 CC module/bdev/nvme/bdev_mdns_client.o 00:02:22.137 CC 
module/bdev/iscsi/bdev_iscsi_rpc.o 00:02:22.137 CC module/bdev/nvme/vbdev_opal.o 00:02:22.137 LIB libspdk_bdev_zone_block.a 00:02:22.137 LIB libspdk_bdev_aio.a 00:02:22.137 CC module/bdev/raid/bdev_raid_rpc.o 00:02:22.137 CC module/bdev/nvme/vbdev_opal_rpc.o 00:02:22.396 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:02:22.396 CC module/bdev/raid/bdev_raid_sb.o 00:02:22.396 LIB libspdk_bdev_iscsi.a 00:02:22.396 CC module/bdev/raid/raid0.o 00:02:22.396 LIB libspdk_bdev_virtio.a 00:02:22.396 CC module/bdev/raid/raid1.o 00:02:22.396 CC module/bdev/raid/concat.o 00:02:22.396 CC module/bdev/raid/raid5f.o 00:02:23.332 LIB libspdk_bdev_raid.a 00:02:23.898 LIB libspdk_bdev_nvme.a 00:02:24.157 CC module/event/subsystems/vmd/vmd.o 00:02:24.157 CC module/event/subsystems/vmd/vmd_rpc.o 00:02:24.157 CC module/event/subsystems/iobuf/iobuf.o 00:02:24.157 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:02:24.157 CC module/event/subsystems/scheduler/scheduler.o 00:02:24.157 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:02:24.157 CC module/event/subsystems/sock/sock.o 00:02:24.157 LIB libspdk_event_sock.a 00:02:24.157 LIB libspdk_event_scheduler.a 00:02:24.415 LIB libspdk_event_vmd.a 00:02:24.415 LIB libspdk_event_vhost_blk.a 00:02:24.415 LIB libspdk_event_iobuf.a 00:02:24.415 CC module/event/subsystems/accel/accel.o 00:02:24.674 LIB libspdk_event_accel.a 00:02:24.931 CC module/event/subsystems/bdev/bdev.o 00:02:25.190 LIB libspdk_event_bdev.a 00:02:25.190 CC module/event/subsystems/ublk/ublk.o 00:02:25.190 CC module/event/subsystems/scsi/scsi.o 00:02:25.190 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:02:25.190 CC module/event/subsystems/nbd/nbd.o 00:02:25.190 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:02:25.448 LIB libspdk_event_nbd.a 00:02:25.448 LIB libspdk_event_ublk.a 00:02:25.448 LIB libspdk_event_scsi.a 00:02:25.448 LIB libspdk_event_nvmf.a 00:02:25.706 CC module/event/subsystems/iscsi/iscsi.o 00:02:25.706 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:02:25.706 LIB libspdk_event_vhost_scsi.a 00:02:25.706 LIB libspdk_event_iscsi.a 00:02:25.965 CXX app/trace/trace.o 00:02:25.965 CC app/trace_record/trace_record.o 00:02:25.965 TEST_HEADER include/spdk/accel.h 00:02:25.965 TEST_HEADER include/spdk/accel_module.h 00:02:25.965 TEST_HEADER include/spdk/assert.h 00:02:25.965 TEST_HEADER include/spdk/barrier.h 00:02:25.965 TEST_HEADER include/spdk/base64.h 00:02:25.965 TEST_HEADER include/spdk/bdev.h 00:02:25.965 TEST_HEADER include/spdk/bdev_module.h 00:02:25.965 TEST_HEADER include/spdk/bdev_zone.h 00:02:25.965 TEST_HEADER include/spdk/bit_array.h 00:02:25.965 TEST_HEADER include/spdk/bit_pool.h 00:02:25.965 TEST_HEADER include/spdk/blob.h 00:02:25.965 TEST_HEADER include/spdk/blob_bdev.h 00:02:25.965 TEST_HEADER include/spdk/blobfs.h 00:02:25.965 TEST_HEADER include/spdk/blobfs_bdev.h 00:02:25.965 TEST_HEADER include/spdk/conf.h 00:02:25.965 TEST_HEADER include/spdk/config.h 00:02:25.965 TEST_HEADER include/spdk/cpuset.h 00:02:25.965 TEST_HEADER include/spdk/crc16.h 00:02:25.965 TEST_HEADER include/spdk/crc32.h 00:02:25.965 TEST_HEADER include/spdk/crc64.h 00:02:25.965 TEST_HEADER include/spdk/dif.h 00:02:25.965 CC app/nvmf_tgt/nvmf_main.o 00:02:25.965 TEST_HEADER include/spdk/dma.h 00:02:25.965 TEST_HEADER include/spdk/endian.h 00:02:25.965 TEST_HEADER include/spdk/env.h 00:02:25.965 CC examples/accel/perf/accel_perf.o 00:02:25.965 TEST_HEADER include/spdk/env_dpdk.h 00:02:25.965 TEST_HEADER include/spdk/event.h 00:02:25.965 TEST_HEADER include/spdk/fd.h 00:02:25.965 TEST_HEADER 
include/spdk/fd_group.h 00:02:25.965 TEST_HEADER include/spdk/file.h 00:02:25.965 TEST_HEADER include/spdk/ftl.h 00:02:25.965 TEST_HEADER include/spdk/gpt_spec.h 00:02:25.965 TEST_HEADER include/spdk/hexlify.h 00:02:25.965 CC test/bdev/bdevio/bdevio.o 00:02:25.965 TEST_HEADER include/spdk/histogram_data.h 00:02:25.965 TEST_HEADER include/spdk/idxd.h 00:02:25.965 CC test/dma/test_dma/test_dma.o 00:02:25.965 CC test/blobfs/mkfs/mkfs.o 00:02:25.965 TEST_HEADER include/spdk/idxd_spec.h 00:02:25.965 TEST_HEADER include/spdk/init.h 00:02:25.965 TEST_HEADER include/spdk/ioat.h 00:02:25.965 TEST_HEADER include/spdk/ioat_spec.h 00:02:25.965 CC test/accel/dif/dif.o 00:02:25.965 TEST_HEADER include/spdk/iscsi_spec.h 00:02:25.965 CC test/app/bdev_svc/bdev_svc.o 00:02:25.965 TEST_HEADER include/spdk/json.h 00:02:25.965 TEST_HEADER include/spdk/jsonrpc.h 00:02:25.965 TEST_HEADER include/spdk/likely.h 00:02:25.965 TEST_HEADER include/spdk/log.h 00:02:25.965 TEST_HEADER include/spdk/lvol.h 00:02:25.965 TEST_HEADER include/spdk/memory.h 00:02:26.224 TEST_HEADER include/spdk/mmio.h 00:02:26.224 TEST_HEADER include/spdk/nbd.h 00:02:26.224 TEST_HEADER include/spdk/notify.h 00:02:26.224 TEST_HEADER include/spdk/nvme.h 00:02:26.224 TEST_HEADER include/spdk/nvme_intel.h 00:02:26.224 TEST_HEADER include/spdk/nvme_ocssd.h 00:02:26.224 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:02:26.224 TEST_HEADER include/spdk/nvme_spec.h 00:02:26.224 TEST_HEADER include/spdk/nvme_zns.h 00:02:26.224 TEST_HEADER include/spdk/nvmf.h 00:02:26.224 TEST_HEADER include/spdk/nvmf_cmd.h 00:02:26.224 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:02:26.224 TEST_HEADER include/spdk/nvmf_spec.h 00:02:26.224 TEST_HEADER include/spdk/nvmf_transport.h 00:02:26.224 TEST_HEADER include/spdk/opal.h 00:02:26.224 TEST_HEADER include/spdk/opal_spec.h 00:02:26.224 TEST_HEADER include/spdk/pci_ids.h 00:02:26.224 TEST_HEADER include/spdk/pipe.h 00:02:26.224 TEST_HEADER include/spdk/queue.h 00:02:26.224 TEST_HEADER include/spdk/reduce.h 00:02:26.224 TEST_HEADER include/spdk/rpc.h 00:02:26.224 TEST_HEADER include/spdk/scheduler.h 00:02:26.224 TEST_HEADER include/spdk/scsi.h 00:02:26.224 TEST_HEADER include/spdk/scsi_spec.h 00:02:26.224 TEST_HEADER include/spdk/sock.h 00:02:26.224 TEST_HEADER include/spdk/stdinc.h 00:02:26.224 TEST_HEADER include/spdk/string.h 00:02:26.224 TEST_HEADER include/spdk/thread.h 00:02:26.224 TEST_HEADER include/spdk/trace.h 00:02:26.224 TEST_HEADER include/spdk/trace_parser.h 00:02:26.224 TEST_HEADER include/spdk/tree.h 00:02:26.224 TEST_HEADER include/spdk/ublk.h 00:02:26.224 TEST_HEADER include/spdk/util.h 00:02:26.224 TEST_HEADER include/spdk/uuid.h 00:02:26.224 TEST_HEADER include/spdk/version.h 00:02:26.224 TEST_HEADER include/spdk/vfio_user_pci.h 00:02:26.224 TEST_HEADER include/spdk/vfio_user_spec.h 00:02:26.224 TEST_HEADER include/spdk/vhost.h 00:02:26.224 TEST_HEADER include/spdk/vmd.h 00:02:26.224 TEST_HEADER include/spdk/xor.h 00:02:26.224 TEST_HEADER include/spdk/zipf.h 00:02:26.224 CXX test/cpp_headers/accel.o 00:02:26.224 LINK nvmf_tgt 00:02:26.224 LINK bdev_svc 00:02:26.224 LINK mkfs 00:02:26.224 LINK spdk_trace_record 00:02:26.482 CXX test/cpp_headers/accel_module.o 00:02:26.482 LINK spdk_trace 00:02:26.482 LINK test_dma 00:02:26.482 LINK bdevio 00:02:26.482 CXX test/cpp_headers/assert.o 00:02:26.741 LINK dif 00:02:26.741 LINK accel_perf 00:02:26.741 CXX test/cpp_headers/barrier.o 00:02:26.741 CC test/app/histogram_perf/histogram_perf.o 00:02:26.741 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:02:26.999 CXX 
test/cpp_headers/base64.o 00:02:26.999 LINK histogram_perf 00:02:27.258 CXX test/cpp_headers/bdev.o 00:02:27.258 CXX test/cpp_headers/bdev_module.o 00:02:27.258 LINK nvme_fuzz 00:02:27.523 CC examples/bdev/hello_world/hello_bdev.o 00:02:27.523 CXX test/cpp_headers/bdev_zone.o 00:02:27.523 CC examples/bdev/bdevperf/bdevperf.o 00:02:27.836 LINK hello_bdev 00:02:27.836 CXX test/cpp_headers/bit_array.o 00:02:28.121 CXX test/cpp_headers/bit_pool.o 00:02:28.121 CC examples/blob/hello_world/hello_blob.o 00:02:28.121 CXX test/cpp_headers/blob.o 00:02:28.121 CC examples/blob/cli/blobcli.o 00:02:28.385 CC app/iscsi_tgt/iscsi_tgt.o 00:02:28.385 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:02:28.385 CC test/app/jsoncat/jsoncat.o 00:02:28.385 CC test/env/mem_callbacks/mem_callbacks.o 00:02:28.385 CXX test/cpp_headers/blob_bdev.o 00:02:28.385 LINK hello_blob 00:02:28.385 CC examples/ioat/perf/perf.o 00:02:28.385 LINK jsoncat 00:02:28.385 LINK iscsi_tgt 00:02:28.643 CXX test/cpp_headers/blobfs.o 00:02:28.643 LINK bdevperf 00:02:28.643 LINK ioat_perf 00:02:28.902 CXX test/cpp_headers/blobfs_bdev.o 00:02:28.902 LINK blobcli 00:02:28.902 CC examples/nvme/hello_world/hello_world.o 00:02:28.902 CXX test/cpp_headers/conf.o 00:02:29.161 LINK mem_callbacks 00:02:29.161 CC examples/ioat/verify/verify.o 00:02:29.161 CXX test/cpp_headers/config.o 00:02:29.161 CXX test/cpp_headers/cpuset.o 00:02:29.161 LINK hello_world 00:02:29.420 CC test/env/vtophys/vtophys.o 00:02:29.420 CXX test/cpp_headers/crc16.o 00:02:29.420 LINK verify 00:02:29.420 CC examples/nvme/reconnect/reconnect.o 00:02:29.679 LINK vtophys 00:02:29.679 CXX test/cpp_headers/crc32.o 00:02:29.938 CXX test/cpp_headers/crc64.o 00:02:29.938 LINK reconnect 00:02:29.938 CC examples/nvme/nvme_manage/nvme_manage.o 00:02:29.938 CXX test/cpp_headers/dif.o 00:02:30.197 CC test/event/event_perf/event_perf.o 00:02:30.197 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:02:30.197 CC examples/sock/hello_world/hello_sock.o 00:02:30.197 CXX test/cpp_headers/dma.o 00:02:30.456 LINK event_perf 00:02:30.456 LINK env_dpdk_post_init 00:02:30.456 CXX test/cpp_headers/endian.o 00:02:30.456 CC app/spdk_tgt/spdk_tgt.o 00:02:30.456 LINK hello_sock 00:02:30.456 CC test/lvol/esnap/esnap.o 00:02:30.714 CXX test/cpp_headers/env.o 00:02:30.715 CC test/nvme/aer/aer.o 00:02:30.715 LINK nvme_manage 00:02:30.715 LINK iscsi_fuzz 00:02:30.715 LINK spdk_tgt 00:02:30.715 CC test/rpc_client/rpc_client_test.o 00:02:30.715 CXX test/cpp_headers/env_dpdk.o 00:02:30.973 CXX test/cpp_headers/event.o 00:02:30.973 CC test/event/reactor/reactor.o 00:02:30.973 LINK aer 00:02:30.973 LINK rpc_client_test 00:02:31.232 CC test/thread/poller_perf/poller_perf.o 00:02:31.232 CC test/env/memory/memory_ut.o 00:02:31.491 LINK reactor 00:02:31.491 CXX test/cpp_headers/fd.o 00:02:31.491 LINK poller_perf 00:02:31.491 CC examples/nvme/arbitration/arbitration.o 00:02:31.491 CC examples/vmd/lsvmd/lsvmd.o 00:02:31.491 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:02:31.751 CXX test/cpp_headers/fd_group.o 00:02:31.751 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:02:31.751 LINK lsvmd 00:02:31.751 CXX test/cpp_headers/file.o 00:02:31.751 CC test/nvme/reset/reset.o 00:02:32.010 LINK arbitration 00:02:32.010 CXX test/cpp_headers/ftl.o 00:02:32.010 CC test/event/reactor_perf/reactor_perf.o 00:02:32.268 CC test/thread/lock/spdk_lock.o 00:02:32.268 LINK reset 00:02:32.268 LINK reactor_perf 00:02:32.268 CXX test/cpp_headers/gpt_spec.o 00:02:32.268 LINK vhost_fuzz 00:02:32.268 LINK memory_ut 00:02:32.527 CXX 
test/cpp_headers/hexlify.o 00:02:32.785 CXX test/cpp_headers/histogram_data.o 00:02:32.785 CC test/env/pci/pci_ut.o 00:02:32.785 CC examples/vmd/led/led.o 00:02:32.785 CC examples/nvme/hotplug/hotplug.o 00:02:32.785 CXX test/cpp_headers/idxd.o 00:02:33.043 CC test/event/app_repeat/app_repeat.o 00:02:33.043 LINK led 00:02:33.043 CC test/app/stub/stub.o 00:02:33.043 CXX test/cpp_headers/idxd_spec.o 00:02:33.044 CC test/nvme/sgl/sgl.o 00:02:33.044 LINK hotplug 00:02:33.044 LINK app_repeat 00:02:33.044 CC app/spdk_lspci/spdk_lspci.o 00:02:33.301 LINK pci_ut 00:02:33.301 LINK stub 00:02:33.301 CXX test/cpp_headers/init.o 00:02:33.301 LINK spdk_lspci 00:02:33.301 LINK sgl 00:02:33.559 CXX test/cpp_headers/ioat.o 00:02:33.559 CXX test/cpp_headers/ioat_spec.o 00:02:33.559 CC test/nvme/e2edp/nvme_dp.o 00:02:33.818 CC test/nvme/overhead/overhead.o 00:02:33.818 CXX test/cpp_headers/iscsi_spec.o 00:02:34.076 CXX test/cpp_headers/json.o 00:02:34.076 CC test/event/scheduler/scheduler.o 00:02:34.076 LINK nvme_dp 00:02:34.076 CC examples/nvme/cmb_copy/cmb_copy.o 00:02:34.076 CC examples/nvme/abort/abort.o 00:02:34.076 CC app/spdk_nvme_perf/perf.o 00:02:34.335 CC test/nvme/err_injection/err_injection.o 00:02:34.593 LINK cmb_copy 00:02:34.594 CXX test/cpp_headers/jsonrpc.o 00:02:34.594 LINK overhead 00:02:34.594 LINK scheduler 00:02:34.594 LINK err_injection 00:02:34.594 CXX test/cpp_headers/likely.o 00:02:34.851 LINK spdk_lock 00:02:34.851 LINK abort 00:02:34.851 CC test/nvme/startup/startup.o 00:02:35.415 LINK startup 00:02:35.415 CXX test/cpp_headers/log.o 00:02:35.415 CC test/nvme/reserve/reserve.o 00:02:35.415 CXX test/cpp_headers/lvol.o 00:02:35.415 CC test/nvme/simple_copy/simple_copy.o 00:02:35.415 LINK spdk_nvme_perf 00:02:35.415 CC app/spdk_nvme_identify/identify.o 00:02:35.672 CXX test/cpp_headers/memory.o 00:02:35.672 CC app/spdk_nvme_discover/discovery_aer.o 00:02:35.672 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:02:35.930 LINK simple_copy 00:02:35.930 LINK reserve 00:02:35.930 CXX test/cpp_headers/mmio.o 00:02:35.930 LINK spdk_nvme_discover 00:02:35.930 LINK pmr_persistence 00:02:35.930 CXX test/cpp_headers/nbd.o 00:02:36.187 CXX test/cpp_headers/notify.o 00:02:36.187 CXX test/cpp_headers/nvme.o 00:02:36.187 CC test/nvme/connect_stress/connect_stress.o 00:02:36.444 CXX test/cpp_headers/nvme_intel.o 00:02:36.444 CC test/nvme/boot_partition/boot_partition.o 00:02:36.444 CC app/spdk_top/spdk_top.o 00:02:36.444 CXX test/cpp_headers/nvme_ocssd.o 00:02:36.444 LINK connect_stress 00:02:36.444 LINK boot_partition 00:02:36.700 LINK spdk_nvme_identify 00:02:36.700 CXX test/cpp_headers/nvme_ocssd_spec.o 00:02:36.700 CXX test/cpp_headers/nvme_spec.o 00:02:36.700 CXX test/cpp_headers/nvme_zns.o 00:02:36.958 CC examples/nvmf/nvmf/nvmf.o 00:02:36.958 CXX test/cpp_headers/nvmf.o 00:02:36.958 CC test/nvme/compliance/nvme_compliance.o 00:02:36.958 CC test/unit/include/spdk/histogram_data.h/histogram_ut.o 00:02:36.958 CC test/nvme/fused_ordering/fused_ordering.o 00:02:36.958 CXX test/cpp_headers/nvmf_cmd.o 00:02:37.215 LINK histogram_ut 00:02:37.216 LINK nvmf 00:02:37.216 LINK fused_ordering 00:02:37.216 CXX test/cpp_headers/nvmf_fc_spec.o 00:02:37.216 LINK esnap 00:02:37.216 CXX test/cpp_headers/nvmf_spec.o 00:02:37.474 CXX test/cpp_headers/nvmf_transport.o 00:02:37.474 LINK nvme_compliance 00:02:37.474 CC test/unit/lib/accel/accel.c/accel_ut.o 00:02:37.474 CC test/unit/lib/bdev/bdev.c/bdev_ut.o 00:02:37.474 LINK spdk_top 00:02:37.474 CC test/unit/lib/bdev/part.c/part_ut.o 00:02:37.474 CC 
test/unit/lib/bdev/scsi_nvme.c/scsi_nvme_ut.o 00:02:37.474 CXX test/cpp_headers/opal.o 00:02:37.731 CXX test/cpp_headers/opal_spec.o 00:02:37.731 CC test/nvme/doorbell_aers/doorbell_aers.o 00:02:37.987 LINK scsi_nvme_ut 00:02:37.987 CXX test/cpp_headers/pci_ids.o 00:02:37.987 LINK doorbell_aers 00:02:37.987 CXX test/cpp_headers/pipe.o 00:02:37.987 CC app/vhost/vhost.o 00:02:37.987 CC app/spdk_dd/spdk_dd.o 00:02:38.245 CXX test/cpp_headers/queue.o 00:02:38.245 CC test/nvme/fdp/fdp.o 00:02:38.245 CXX test/cpp_headers/reduce.o 00:02:38.245 CC test/unit/lib/bdev/gpt/gpt.c/gpt_ut.o 00:02:38.245 LINK vhost 00:02:38.503 CXX test/cpp_headers/rpc.o 00:02:38.503 LINK spdk_dd 00:02:38.760 LINK fdp 00:02:38.760 CXX test/cpp_headers/scheduler.o 00:02:38.760 LINK gpt_ut 00:02:38.760 CXX test/cpp_headers/scsi.o 00:02:38.760 CC test/unit/lib/bdev/vbdev_lvol.c/vbdev_lvol_ut.o 00:02:39.018 CC test/unit/lib/bdev/mt/bdev.c/bdev_ut.o 00:02:39.018 CXX test/cpp_headers/scsi_spec.o 00:02:39.276 CC examples/util/zipf/zipf.o 00:02:39.276 CXX test/cpp_headers/sock.o 00:02:39.276 LINK zipf 00:02:39.534 CXX test/cpp_headers/stdinc.o 00:02:39.534 CC test/nvme/cuse/cuse.o 00:02:39.534 CXX test/cpp_headers/string.o 00:02:39.790 CXX test/cpp_headers/thread.o 00:02:39.790 CC examples/thread/thread/thread_ex.o 00:02:40.046 CXX test/cpp_headers/trace.o 00:02:40.046 LINK thread 00:02:40.046 CC test/unit/lib/bdev/raid/bdev_raid.c/bdev_raid_ut.o 00:02:40.302 CXX test/cpp_headers/trace_parser.o 00:02:40.302 LINK vbdev_lvol_ut 00:02:40.302 CC app/fio/nvme/fio_plugin.o 00:02:40.302 CXX test/cpp_headers/tree.o 00:02:40.302 LINK accel_ut 00:02:40.302 CXX test/cpp_headers/ublk.o 00:02:40.559 CC test/unit/lib/bdev/bdev_zone.c/bdev_zone_ut.o 00:02:40.559 CXX test/cpp_headers/util.o 00:02:40.816 CXX test/cpp_headers/uuid.o 00:02:40.816 LINK cuse 00:02:40.816 CC test/unit/lib/bdev/raid/bdev_raid_sb.c/bdev_raid_sb_ut.o 00:02:40.816 LINK bdev_zone_ut 00:02:40.816 CXX test/cpp_headers/version.o 00:02:40.816 CXX test/cpp_headers/vfio_user_pci.o 00:02:41.074 LINK spdk_nvme 00:02:41.074 CC test/unit/lib/bdev/raid/concat.c/concat_ut.o 00:02:41.074 CC test/unit/lib/bdev/raid/raid1.c/raid1_ut.o 00:02:41.074 CXX test/cpp_headers/vfio_user_spec.o 00:02:41.331 CXX test/cpp_headers/vhost.o 00:02:41.331 LINK bdev_raid_sb_ut 00:02:41.588 CXX test/cpp_headers/vmd.o 00:02:41.588 LINK concat_ut 00:02:41.588 CC test/unit/lib/bdev/raid/raid5f.c/raid5f_ut.o 00:02:41.588 CXX test/cpp_headers/xor.o 00:02:41.588 LINK raid1_ut 00:02:41.845 CC app/fio/bdev/fio_plugin.o 00:02:41.845 CXX test/cpp_headers/zipf.o 00:02:41.845 CC test/unit/lib/bdev/vbdev_zone_block.c/vbdev_zone_block_ut.o 00:02:41.845 CC examples/idxd/perf/perf.o 00:02:42.103 LINK part_ut 00:02:42.103 CC test/unit/lib/bdev/nvme/bdev_nvme.c/bdev_nvme_ut.o 00:02:42.103 CC test/unit/lib/blob/blob_bdev.c/blob_bdev_ut.o 00:02:42.360 LINK idxd_perf 00:02:42.360 CC examples/interrupt_tgt/interrupt_tgt.o 00:02:42.360 LINK spdk_bdev 00:02:42.638 LINK interrupt_tgt 00:02:42.638 LINK bdev_raid_ut 00:02:42.897 LINK vbdev_zone_block_ut 00:02:42.897 LINK blob_bdev_ut 00:02:42.897 CC test/unit/lib/blob/blob.c/blob_ut.o 00:02:42.897 CC test/unit/lib/blobfs/tree.c/tree_ut.o 00:02:42.897 LINK raid5f_ut 00:02:43.155 CC test/unit/lib/blobfs/blobfs_async_ut/blobfs_async_ut.o 00:02:43.155 LINK tree_ut 00:02:43.155 CC test/unit/lib/dma/dma.c/dma_ut.o 00:02:43.465 CC test/unit/lib/event/app.c/app_ut.o 00:02:43.465 CC test/unit/lib/blobfs/blobfs_sync_ut/blobfs_sync_ut.o 00:02:43.730 LINK bdev_ut 00:02:43.730 LINK dma_ut 
00:02:43.730 CC test/unit/lib/blobfs/blobfs_bdev.c/blobfs_bdev_ut.o 00:02:43.988 CC test/unit/lib/event/reactor.c/reactor_ut.o 00:02:43.988 CC test/unit/lib/ioat/ioat.c/ioat_ut.o 00:02:43.988 LINK blobfs_bdev_ut 00:02:43.988 LINK app_ut 00:02:44.246 LINK bdev_ut 00:02:44.246 CC test/unit/lib/iscsi/conn.c/conn_ut.o 00:02:44.246 CC test/unit/lib/iscsi/init_grp.c/init_grp_ut.o 00:02:44.503 CC test/unit/lib/iscsi/iscsi.c/iscsi_ut.o 00:02:44.503 CC test/unit/lib/iscsi/param.c/param_ut.o 00:02:44.503 LINK ioat_ut 00:02:44.761 LINK init_grp_ut 00:02:44.761 LINK blobfs_async_ut 00:02:44.761 CC test/unit/lib/json/json_parse.c/json_parse_ut.o 00:02:45.018 CC test/unit/lib/iscsi/portal_grp.c/portal_grp_ut.o 00:02:45.018 LINK blobfs_sync_ut 00:02:45.018 LINK reactor_ut 00:02:45.018 CC test/unit/lib/iscsi/tgt_node.c/tgt_node_ut.o 00:02:45.275 LINK param_ut 00:02:45.275 CC test/unit/lib/jsonrpc/jsonrpc_server.c/jsonrpc_server_ut.o 00:02:45.533 CC test/unit/lib/log/log.c/log_ut.o 00:02:45.533 CC test/unit/lib/lvol/lvol.c/lvol_ut.o 00:02:45.533 LINK conn_ut 00:02:45.790 LINK portal_grp_ut 00:02:45.790 LINK log_ut 00:02:45.790 LINK jsonrpc_server_ut 00:02:45.790 CC test/unit/lib/notify/notify.c/notify_ut.o 00:02:46.047 CC test/unit/lib/json/json_util.c/json_util_ut.o 00:02:46.047 CC test/unit/lib/nvme/nvme.c/nvme_ut.o 00:02:46.047 LINK tgt_node_ut 00:02:46.047 CC test/unit/lib/nvme/nvme_ctrlr.c/nvme_ctrlr_ut.o 00:02:46.316 LINK notify_ut 00:02:46.316 CC test/unit/lib/json/json_write.c/json_write_ut.o 00:02:46.575 CC test/unit/lib/nvmf/tcp.c/tcp_ut.o 00:02:46.833 LINK json_util_ut 00:02:47.091 CC test/unit/lib/scsi/dev.c/dev_ut.o 00:02:47.351 LINK iscsi_ut 00:02:47.610 LINK json_write_ut 00:02:47.610 LINK bdev_nvme_ut 00:02:47.610 LINK dev_ut 00:02:47.868 LINK nvme_ut 00:02:47.868 CC test/unit/lib/sock/sock.c/sock_ut.o 00:02:47.868 CC test/unit/lib/thread/thread.c/thread_ut.o 00:02:47.868 LINK lvol_ut 00:02:47.868 LINK json_parse_ut 00:02:47.868 CC test/unit/lib/scsi/lun.c/lun_ut.o 00:02:48.127 CC test/unit/lib/util/base64.c/base64_ut.o 00:02:48.127 CC test/unit/lib/util/bit_array.c/bit_array_ut.o 00:02:48.127 CC test/unit/lib/util/crc16.c/crc16_ut.o 00:02:48.127 CC test/unit/lib/util/cpuset.c/cpuset_ut.o 00:02:48.385 LINK base64_ut 00:02:48.385 LINK crc16_ut 00:02:48.385 LINK cpuset_ut 00:02:48.644 LINK bit_array_ut 00:02:48.644 CC test/unit/lib/sock/posix.c/posix_ut.o 00:02:48.644 CC test/unit/lib/env_dpdk/pci_event.c/pci_event_ut.o 00:02:48.644 CC test/unit/lib/init/subsystem.c/subsystem_ut.o 00:02:48.903 CC test/unit/lib/util/crc32_ieee.c/crc32_ieee_ut.o 00:02:48.903 LINK lun_ut 00:02:48.903 LINK pci_event_ut 00:02:48.903 LINK crc32_ieee_ut 00:02:49.161 CC test/unit/lib/scsi/scsi.c/scsi_ut.o 00:02:49.161 CC test/unit/lib/util/crc32c.c/crc32c_ut.o 00:02:49.161 CC test/unit/lib/nvme/nvme_ctrlr_cmd.c/nvme_ctrlr_cmd_ut.o 00:02:49.161 LINK subsystem_ut 00:02:49.420 LINK scsi_ut 00:02:49.420 LINK crc32c_ut 00:02:49.679 CC test/unit/lib/rpc/rpc.c/rpc_ut.o 00:02:49.679 LINK sock_ut 00:02:49.679 CC test/unit/lib/scsi/scsi_bdev.c/scsi_bdev_ut.o 00:02:49.679 CC test/unit/lib/util/crc64.c/crc64_ut.o 00:02:49.679 LINK crc64_ut 00:02:49.937 LINK posix_ut 00:02:49.937 CC test/unit/lib/util/dif.c/dif_ut.o 00:02:49.937 CC test/unit/lib/nvmf/ctrlr.c/ctrlr_ut.o 00:02:49.937 LINK rpc_ut 00:02:50.195 CC test/unit/lib/idxd/idxd_user.c/idxd_user_ut.o 00:02:50.195 LINK nvme_ctrlr_ut 00:02:50.195 CC test/unit/lib/nvmf/subsystem.c/subsystem_ut.o 00:02:50.452 CC test/unit/lib/nvmf/ctrlr_discovery.c/ctrlr_discovery_ut.o 
00:02:50.452 LINK thread_ut 00:02:50.711 LINK idxd_user_ut 00:02:50.711 LINK scsi_bdev_ut 00:02:50.711 CC test/unit/lib/thread/iobuf.c/iobuf_ut.o 00:02:50.969 CC test/unit/lib/idxd/idxd.c/idxd_ut.o 00:02:50.969 LINK nvme_ctrlr_cmd_ut 00:02:50.969 CC test/unit/lib/scsi/scsi_pr.c/scsi_pr_ut.o 00:02:51.228 CC test/unit/lib/nvme/nvme_ctrlr_ocssd_cmd.c/nvme_ctrlr_ocssd_cmd_ut.o 00:02:51.228 LINK dif_ut 00:02:51.487 CC test/unit/lib/util/iov.c/iov_ut.o 00:02:51.487 LINK scsi_pr_ut 00:02:51.745 LINK iobuf_ut 00:02:51.745 LINK tcp_ut 00:02:51.745 LINK idxd_ut 00:02:51.745 LINK iov_ut 00:02:52.003 CC test/unit/lib/util/math.c/math_ut.o 00:02:52.004 CC test/unit/lib/util/pipe.c/pipe_ut.o 00:02:52.004 LINK blob_ut 00:02:52.004 CC test/unit/lib/util/string.c/string_ut.o 00:02:52.004 LINK math_ut 00:02:52.004 CC test/unit/lib/nvmf/ctrlr_bdev.c/ctrlr_bdev_ut.o 00:02:52.261 CC test/unit/lib/nvmf/nvmf.c/nvmf_ut.o 00:02:52.261 CC test/unit/lib/nvmf/rdma.c/rdma_ut.o 00:02:52.261 CC test/unit/lib/nvmf/transport.c/transport_ut.o 00:02:52.519 LINK nvme_ctrlr_ocssd_cmd_ut 00:02:52.519 LINK string_ut 00:02:52.519 LINK pipe_ut 00:02:52.778 LINK ctrlr_discovery_ut 00:02:52.778 CC test/unit/lib/nvme/nvme_ns.c/nvme_ns_ut.o 00:02:52.778 CC test/unit/lib/util/xor.c/xor_ut.o 00:02:52.778 CC test/unit/lib/vhost/vhost.c/vhost_ut.o 00:02:52.778 LINK subsystem_ut 00:02:53.038 CC test/unit/lib/nvme/nvme_ns_cmd.c/nvme_ns_cmd_ut.o 00:02:53.038 LINK xor_ut 00:02:53.296 CC test/unit/lib/rdma/common.c/common_ut.o 00:02:53.296 CC test/unit/lib/nvme/nvme_ns_ocssd_cmd.c/nvme_ns_ocssd_cmd_ut.o 00:02:53.296 LINK ctrlr_bdev_ut 00:02:53.555 CC test/unit/lib/nvme/nvme_pcie.c/nvme_pcie_ut.o 00:02:53.555 LINK nvmf_ut 00:02:53.555 LINK ctrlr_ut 00:02:53.814 LINK nvme_ns_ut 00:02:53.814 LINK common_ut 00:02:53.814 CC test/unit/lib/nvme/nvme_poll_group.c/nvme_poll_group_ut.o 00:02:54.072 CC test/unit/lib/ftl/ftl_l2p/ftl_l2p_ut.o 00:02:54.072 CC test/unit/lib/ftl/ftl_band.c/ftl_band_ut.o 00:02:54.072 CC test/unit/lib/ftl/ftl_io.c/ftl_io_ut.o 00:02:54.331 LINK ftl_l2p_ut 00:02:54.588 CC test/unit/lib/ftl/ftl_bitmap.c/ftl_bitmap_ut.o 00:02:54.846 LINK ftl_io_ut 00:02:54.846 LINK ftl_bitmap_ut 00:02:55.102 CC test/unit/lib/ftl/ftl_mempool.c/ftl_mempool_ut.o 00:02:55.102 CC test/unit/lib/ftl/ftl_mngt/ftl_mngt_ut.o 00:02:55.102 LINK nvme_poll_group_ut 00:02:55.102 LINK nvme_ns_ocssd_cmd_ut 00:02:55.102 LINK vhost_ut 00:02:55.367 LINK nvme_ns_cmd_ut 00:02:55.367 CC test/unit/lib/ftl/ftl_sb/ftl_sb_ut.o 00:02:55.367 LINK ftl_mempool_ut 00:02:55.367 CC test/unit/lib/ftl/ftl_layout_upgrade/ftl_layout_upgrade_ut.o 00:02:55.367 LINK ftl_band_ut 00:02:55.649 CC test/unit/lib/nvme/nvme_qpair.c/nvme_qpair_ut.o 00:02:55.649 CC test/unit/lib/nvme/nvme_quirks.c/nvme_quirks_ut.o 00:02:55.649 LINK nvme_pcie_ut 00:02:55.649 CC test/unit/lib/nvme/nvme_tcp.c/nvme_tcp_ut.o 00:02:55.649 LINK ftl_mngt_ut 00:02:55.649 CC test/unit/lib/nvme/nvme_transport.c/nvme_transport_ut.o 00:02:55.906 CC test/unit/lib/nvme/nvme_io_msg.c/nvme_io_msg_ut.o 00:02:55.906 CC test/unit/lib/nvme/nvme_pcie_common.c/nvme_pcie_common_ut.o 00:02:55.906 LINK transport_ut 00:02:56.470 LINK nvme_quirks_ut 00:02:56.470 CC test/unit/lib/nvme/nvme_fabric.c/nvme_fabric_ut.o 00:02:56.470 LINK rdma_ut 00:02:56.470 CC test/unit/lib/nvme/nvme_opal.c/nvme_opal_ut.o 00:02:57.035 CC test/unit/lib/nvme/nvme_rdma.c/nvme_rdma_ut.o 00:02:57.035 LINK nvme_io_msg_ut 00:02:57.035 LINK ftl_layout_upgrade_ut 00:02:57.035 LINK ftl_sb_ut 00:02:57.035 LINK nvme_transport_ut 00:02:57.035 LINK nvme_qpair_ut 00:02:57.292 
CC test/unit/lib/nvme/nvme_cuse.c/nvme_cuse_ut.o 00:02:57.549 LINK nvme_opal_ut 00:02:57.549 LINK nvme_fabric_ut 00:02:57.856 LINK nvme_pcie_common_ut 00:02:58.788 LINK nvme_tcp_ut 00:02:59.045 LINK nvme_cuse_ut 00:02:59.611 LINK nvme_rdma_ut 00:02:59.611 00:02:59.611 real 1m57.911s 00:02:59.611 user 10m1.092s 00:02:59.611 sys 2m7.192s 00:02:59.611 ************************************ 00:02:59.611 END TEST unittest_build 00:02:59.611 ************************************ 00:02:59.611 21:25:20 -- common/autotest_common.sh@1115 -- $ xtrace_disable 00:02:59.611 21:25:20 -- common/autotest_common.sh@10 -- $ set +x 00:02:59.869 21:25:20 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:02:59.869 21:25:20 -- common/autotest_common.sh@1690 -- # lcov --version 00:02:59.869 21:25:20 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:02:59.869 21:25:20 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:02:59.869 21:25:20 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:02:59.869 21:25:20 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:02:59.869 21:25:20 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:02:59.869 21:25:20 -- scripts/common.sh@335 -- # IFS=.-: 00:02:59.869 21:25:20 -- scripts/common.sh@335 -- # read -ra ver1 00:02:59.869 21:25:20 -- scripts/common.sh@336 -- # IFS=.-: 00:02:59.869 21:25:20 -- scripts/common.sh@336 -- # read -ra ver2 00:02:59.869 21:25:20 -- scripts/common.sh@337 -- # local 'op=<' 00:02:59.869 21:25:20 -- scripts/common.sh@339 -- # ver1_l=2 00:02:59.869 21:25:20 -- scripts/common.sh@340 -- # ver2_l=1 00:02:59.869 21:25:20 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:02:59.869 21:25:20 -- scripts/common.sh@343 -- # case "$op" in 00:02:59.869 21:25:20 -- scripts/common.sh@344 -- # : 1 00:02:59.869 21:25:20 -- scripts/common.sh@363 -- # (( v = 0 )) 00:02:59.869 21:25:20 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:02:59.869 21:25:20 -- scripts/common.sh@364 -- # decimal 1 00:02:59.869 21:25:20 -- scripts/common.sh@352 -- # local d=1 00:02:59.869 21:25:20 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:02:59.869 21:25:20 -- scripts/common.sh@354 -- # echo 1 00:02:59.869 21:25:20 -- scripts/common.sh@364 -- # ver1[v]=1 00:02:59.869 21:25:20 -- scripts/common.sh@365 -- # decimal 2 00:02:59.869 21:25:20 -- scripts/common.sh@352 -- # local d=2 00:02:59.869 21:25:20 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:02:59.869 21:25:20 -- scripts/common.sh@354 -- # echo 2 00:02:59.869 21:25:20 -- scripts/common.sh@365 -- # ver2[v]=2 00:02:59.869 21:25:20 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:02:59.869 21:25:20 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:02:59.869 21:25:20 -- scripts/common.sh@367 -- # return 0 00:02:59.869 21:25:20 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:02:59.869 21:25:20 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:02:59.869 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:59.869 --rc genhtml_branch_coverage=1 00:02:59.869 --rc genhtml_function_coverage=1 00:02:59.869 --rc genhtml_legend=1 00:02:59.869 --rc geninfo_all_blocks=1 00:02:59.869 --rc geninfo_unexecuted_blocks=1 00:02:59.869 00:02:59.869 ' 00:02:59.869 21:25:20 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:02:59.869 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:59.869 --rc genhtml_branch_coverage=1 00:02:59.869 --rc genhtml_function_coverage=1 00:02:59.869 --rc genhtml_legend=1 00:02:59.869 --rc geninfo_all_blocks=1 00:02:59.869 --rc geninfo_unexecuted_blocks=1 00:02:59.869 00:02:59.869 ' 00:02:59.869 21:25:20 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:02:59.869 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:59.869 --rc genhtml_branch_coverage=1 00:02:59.869 --rc genhtml_function_coverage=1 00:02:59.869 --rc genhtml_legend=1 00:02:59.869 --rc geninfo_all_blocks=1 00:02:59.869 --rc geninfo_unexecuted_blocks=1 00:02:59.869 00:02:59.869 ' 00:02:59.869 21:25:20 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:02:59.869 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:59.869 --rc genhtml_branch_coverage=1 00:02:59.869 --rc genhtml_function_coverage=1 00:02:59.869 --rc genhtml_legend=1 00:02:59.869 --rc geninfo_all_blocks=1 00:02:59.869 --rc geninfo_unexecuted_blocks=1 00:02:59.869 00:02:59.869 ' 00:02:59.869 21:25:20 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:02:59.869 21:25:20 -- nvmf/common.sh@7 -- # uname -s 00:02:59.869 21:25:20 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:02:59.869 21:25:20 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:02:59.869 21:25:20 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:02:59.869 21:25:20 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:02:59.869 21:25:20 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:02:59.869 21:25:20 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:02:59.869 21:25:20 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:02:59.869 21:25:20 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:02:59.869 21:25:20 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:02:59.869 21:25:20 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:02:59.869 21:25:20 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8e1d9cdb-6b0f-4e53-bec5-c2866d201ab4 00:02:59.869 
21:25:20 -- nvmf/common.sh@18 -- # NVME_HOSTID=8e1d9cdb-6b0f-4e53-bec5-c2866d201ab4 00:02:59.869 21:25:20 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:02:59.869 21:25:20 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:02:59.869 21:25:20 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:02:59.869 21:25:20 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:02:59.869 21:25:20 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:02:59.869 21:25:20 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:59.869 21:25:20 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:59.869 21:25:20 -- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:02:59.869 21:25:20 -- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:02:59.869 21:25:20 -- paths/export.sh@4 -- # PATH=/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:02:59.869 21:25:20 -- paths/export.sh@5 -- # PATH=/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:02:59.869 21:25:20 -- paths/export.sh@6 -- # export PATH 00:02:59.869 21:25:20 -- paths/export.sh@7 -- # echo /opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:02:59.869 21:25:20 -- nvmf/common.sh@46 -- # : 0 00:02:59.869 21:25:20 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:02:59.869 21:25:20 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:02:59.869 21:25:20 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:02:59.869 21:25:20 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:02:59.869 21:25:20 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:02:59.869 21:25:20 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:02:59.869 21:25:20 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:02:59.869 21:25:20 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:02:59.869 21:25:20 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:02:59.869 21:25:20 -- spdk/autotest.sh@32 -- # uname -s 00:02:59.869 21:25:20 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:02:59.869 21:25:20 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/share/apport/apport -p%p -s%s -c%c -d%d -P%P -u%u -g%g -- %E' 00:02:59.869 21:25:20 -- spdk/autotest.sh@34 -- # mkdir -p 
/home/vagrant/spdk_repo/spdk/../output/coredumps 00:02:59.869 21:25:20 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:02:59.869 21:25:20 -- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:02:59.869 21:25:20 -- spdk/autotest.sh@44 -- # modprobe nbd 00:02:59.869 21:25:20 -- spdk/autotest.sh@46 -- # type -P udevadm 00:02:59.869 21:25:20 -- spdk/autotest.sh@46 -- # udevadm=/usr/bin/udevadm 00:02:59.869 21:25:20 -- spdk/autotest.sh@48 -- # udevadm_pid=51383 00:02:59.869 21:25:20 -- spdk/autotest.sh@47 -- # /usr/bin/udevadm monitor --property 00:02:59.869 21:25:20 -- spdk/autotest.sh@51 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/power 00:02:59.869 21:25:20 -- spdk/autotest.sh@54 -- # echo 51387 00:02:59.869 21:25:20 -- spdk/autotest.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power 00:02:59.869 21:25:20 -- spdk/autotest.sh@56 -- # echo 51394 00:02:59.869 21:25:20 -- spdk/autotest.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power 00:02:59.869 21:25:20 -- spdk/autotest.sh@58 -- # [[ QEMU != QEMU ]] 00:02:59.869 21:25:20 -- spdk/autotest.sh@66 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:02:59.869 21:25:20 -- spdk/autotest.sh@68 -- # timing_enter autotest 00:02:59.869 21:25:20 -- common/autotest_common.sh@722 -- # xtrace_disable 00:02:59.869 21:25:20 -- common/autotest_common.sh@10 -- # set +x 00:02:59.869 21:25:20 -- spdk/autotest.sh@70 -- # create_test_list 00:02:59.869 21:25:20 -- common/autotest_common.sh@746 -- # xtrace_disable 00:02:59.869 21:25:20 -- common/autotest_common.sh@10 -- # set +x 00:03:00.128 21:25:20 -- spdk/autotest.sh@72 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:03:00.128 21:25:20 -- spdk/autotest.sh@72 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:03:00.128 21:25:20 -- spdk/autotest.sh@72 -- # src=/home/vagrant/spdk_repo/spdk 00:03:00.128 21:25:20 -- spdk/autotest.sh@73 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:03:00.128 21:25:20 -- spdk/autotest.sh@74 -- # cd /home/vagrant/spdk_repo/spdk 00:03:00.128 21:25:20 -- spdk/autotest.sh@76 -- # freebsd_update_contigmem_mod 00:03:00.128 21:25:20 -- common/autotest_common.sh@1450 -- # uname 00:03:00.128 21:25:20 -- common/autotest_common.sh@1450 -- # '[' Linux = FreeBSD ']' 00:03:00.128 21:25:20 -- spdk/autotest.sh@77 -- # freebsd_set_maxsock_buf 00:03:00.128 21:25:20 -- common/autotest_common.sh@1470 -- # uname 00:03:00.128 21:25:20 -- common/autotest_common.sh@1470 -- # [[ Linux = FreeBSD ]] 00:03:00.128 21:25:20 -- spdk/autotest.sh@79 -- # [[ y == y ]] 00:03:00.128 21:25:20 -- spdk/autotest.sh@81 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:03:00.128 lcov: LCOV version 1.15 00:03:00.128 21:25:20 -- spdk/autotest.sh@83 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:03:15.002 /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno:no functions found 00:03:15.002 geninfo: 
WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno 00:03:15.002 /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno:no functions found 00:03:15.002 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno 00:03:15.002 /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno:no functions found 00:03:15.002 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno 00:04:01.673 21:26:15 -- spdk/autotest.sh@87 -- # timing_enter pre_cleanup 00:04:01.673 21:26:15 -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:01.673 21:26:15 -- common/autotest_common.sh@10 -- # set +x 00:04:01.673 21:26:15 -- spdk/autotest.sh@89 -- # rm -f 00:04:01.673 21:26:15 -- spdk/autotest.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:01.673 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15,mount@vda:vda16, so not binding PCI dev 00:04:01.673 0000:00:06.0 (1b36 0010): Already using the nvme driver 00:04:01.673 21:26:15 -- spdk/autotest.sh@94 -- # get_zoned_devs 00:04:01.673 21:26:15 -- common/autotest_common.sh@1664 -- # zoned_devs=() 00:04:01.673 21:26:15 -- common/autotest_common.sh@1664 -- # local -gA zoned_devs 00:04:01.673 21:26:15 -- common/autotest_common.sh@1665 -- # local nvme bdf 00:04:01.673 21:26:15 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:04:01.673 21:26:15 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme0n1 00:04:01.673 21:26:15 -- common/autotest_common.sh@1657 -- # local device=nvme0n1 00:04:01.673 21:26:15 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:01.673 21:26:15 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:04:01.673 21:26:15 -- spdk/autotest.sh@96 -- # (( 0 > 0 )) 00:04:01.673 21:26:15 -- spdk/autotest.sh@108 -- # ls /dev/nvme0n1 00:04:01.673 21:26:15 -- spdk/autotest.sh@108 -- # grep -v p 00:04:01.673 21:26:15 -- spdk/autotest.sh@108 -- # for dev in $(ls /dev/nvme*n* | grep -v p || true) 00:04:01.673 21:26:15 -- spdk/autotest.sh@110 -- # [[ -z '' ]] 00:04:01.673 21:26:15 -- spdk/autotest.sh@111 -- # block_in_use /dev/nvme0n1 00:04:01.673 21:26:15 -- scripts/common.sh@380 -- # local block=/dev/nvme0n1 pt 00:04:01.673 21:26:15 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:04:01.673 No valid GPT data, bailing 00:04:01.673 21:26:15 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:04:01.673 21:26:15 -- scripts/common.sh@393 -- # pt= 00:04:01.673 21:26:15 -- scripts/common.sh@394 -- # return 1 00:04:01.673 21:26:15 -- spdk/autotest.sh@112 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:04:01.673 1+0 records in 00:04:01.673 1+0 records out 00:04:01.673 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00405332 s, 259 MB/s 00:04:01.673 21:26:15 -- spdk/autotest.sh@116 -- # sync 00:04:01.673 21:26:16 -- spdk/autotest.sh@118 -- # xtrace_disable_per_cmd reap_spdk_processes 00:04:01.673 21:26:16 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:04:01.673 21:26:16 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:04:01.673 21:26:17 -- spdk/autotest.sh@122 -- # uname -s 00:04:01.673 21:26:17 -- spdk/autotest.sh@122 -- # '[' Linux = Linux ']' 00:04:01.673 21:26:17 -- spdk/autotest.sh@123 -- # run_test setup.sh 
/home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh 00:04:01.673 21:26:17 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:01.674 21:26:17 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:01.674 21:26:17 -- common/autotest_common.sh@10 -- # set +x 00:04:01.674 ************************************ 00:04:01.674 START TEST setup.sh 00:04:01.674 ************************************ 00:04:01.674 21:26:17 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh 00:04:01.674 * Looking for test storage... 00:04:01.674 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:04:01.674 [lcov version-check trace elided -- identical to the 21:25:20 run above] 00:04:01.674 21:26:17 -- setup/test-setup.sh@10 -- # uname -s 00:04:01.674 21:26:17 -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:04:01.674 21:26:17 -- setup/test-setup.sh@12 -- # run_test acl /home/vagrant/spdk_repo/spdk/test/setup/acl.sh 00:04:01.674 21:26:17 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:01.674 21:26:17 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:01.674 21:26:17 -- common/autotest_common.sh@10 -- # set +x 00:04:01.674 ************************************ 00:04:01.674 START TEST acl 00:04:01.674 ************************************ 00:04:01.674 21:26:17 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/setup/acl.sh 00:04:01.674 * Looking for test storage...
00:04:01.674 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:04:01.674 [lcov version-check trace elided -- identical to the 21:25:20 run above] 00:04:01.675 21:26:17 -- setup/acl.sh@10 -- # get_zoned_devs 00:04:01.675 21:26:17 -- common/autotest_common.sh@1664 -- # zoned_devs=() 00:04:01.675 21:26:17 -- common/autotest_common.sh@1664 -- # local -gA zoned_devs 00:04:01.675 21:26:17 -- common/autotest_common.sh@1665 -- # local nvme bdf 00:04:01.675 21:26:17 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:04:01.675 21:26:17 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme0n1 00:04:01.675 21:26:17 -- common/autotest_common.sh@1657 -- # local device=nvme0n1 00:04:01.675 21:26:17 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:01.675 21:26:17 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:04:01.675 21:26:17 -- setup/acl.sh@12 -- # devs=() 00:04:01.675 21:26:17 -- setup/acl.sh@12 -- # declare -a devs 00:04:01.675 21:26:17 -- setup/acl.sh@13 -- # drivers=() 00:04:01.675 21:26:17 -- setup/acl.sh@13 -- # declare -A drivers 00:04:01.675 21:26:17 -- setup/acl.sh@51 -- # setup reset 00:04:01.675 21:26:17 -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:01.675 21:26:17 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:01.675 21:26:18 -- setup/acl.sh@52 -- # collect_setup_devs 00:04:01.675 21:26:18 -- setup/acl.sh@16 -- # local dev driver 00:04:01.675 21:26:18 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:01.675 21:26:18 -- setup/acl.sh@15 -- # setup output status 00:04:01.675 21:26:18 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:01.675 21:26:18 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:04:01.675 Hugepages 00:04:01.675 node hugesize free / total 00:04:01.675 21:26:18 -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:04:01.675 21:26:18 -- setup/acl.sh@19 -- # continue 00:04:01.675 21:26:18 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:01.675 00:04:01.675 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:01.675 21:26:18 -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:04:01.675 21:26:18 -- setup/acl.sh@19 -- # continue 00:04:01.675 21:26:18 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:01.675 21:26:18 -- setup/acl.sh@19 -- # [[ 0000:00:03.0 == *:*:*.* ]] 00:04:01.675 21:26:18 -- setup/acl.sh@20 -- # [[ virtio-pci == nvme ]] 00:04:01.675 21:26:18 -- setup/acl.sh@20 -- # continue 00:04:01.675 21:26:18 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:01.675 21:26:18 -- setup/acl.sh@19 -- # [[ 0000:00:06.0 == *:*:*.* ]] 00:04:01.675 21:26:18 -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:04:01.675 21:26:18 -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\6\.\0* ]] 00:04:01.675 21:26:18 -- setup/acl.sh@22 -- # devs+=("$dev") 00:04:01.675 21:26:18 -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:04:01.675 21:26:18 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:01.675 21:26:18 -- setup/acl.sh@24 -- # (( 1 > 0 )) 00:04:01.675 21:26:18 -- setup/acl.sh@54 -- # run_test denied denied 00:04:01.675 21:26:18 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:01.675 21:26:18 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:01.675 21:26:18 --
common/autotest_common.sh@10 -- # set +x 00:04:01.675 ************************************ 00:04:01.675 START TEST denied 00:04:01.675 ************************************ 00:04:01.675 21:26:18 -- common/autotest_common.sh@1114 -- # denied 00:04:01.675 21:26:18 -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:00:06.0' 00:04:01.675 21:26:18 -- setup/acl.sh@38 -- # setup output config 00:04:01.675 21:26:18 -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:00:06.0' 00:04:01.675 21:26:18 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:01.675 21:26:18 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:01.675 0000:00:06.0 (1b36 0010): Skipping denied controller at 0000:00:06.0 00:04:01.675 21:26:19 -- setup/acl.sh@40 -- # verify 0000:00:06.0 00:04:01.675 21:26:19 -- setup/acl.sh@28 -- # local dev driver 00:04:01.675 21:26:19 -- setup/acl.sh@30 -- # for dev in "$@" 00:04:01.675 21:26:19 -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:06.0 ]] 00:04:01.675 21:26:19 -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:06.0/driver 00:04:01.675 21:26:19 -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:04:01.675 21:26:19 -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:04:01.675 21:26:19 -- setup/acl.sh@41 -- # setup reset 00:04:01.675 21:26:19 -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:01.675 21:26:19 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:01.675 00:04:01.675 real 0m1.453s 00:04:01.675 user 0m0.377s 00:04:01.675 sys 0m1.139s 00:04:01.675 21:26:20 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:01.675 ************************************ 00:04:01.675 END TEST denied 00:04:01.675 ************************************ 00:04:01.675 21:26:20 -- common/autotest_common.sh@10 -- # set +x 00:04:01.675 21:26:20 -- setup/acl.sh@55 -- # run_test allowed allowed 00:04:01.675 21:26:20 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:01.675 21:26:20 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:01.675 21:26:20 -- common/autotest_common.sh@10 -- # set +x 00:04:01.675 ************************************ 00:04:01.675 START TEST allowed 00:04:01.675 ************************************ 00:04:01.675 21:26:20 -- common/autotest_common.sh@1114 -- # allowed 00:04:01.675 21:26:20 -- setup/acl.sh@46 -- # grep -E '0000:00:06.0 .*: nvme -> .*' 00:04:01.675 21:26:20 -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:00:06.0 00:04:01.675 21:26:20 -- setup/acl.sh@45 -- # setup output config 00:04:01.675 21:26:20 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:01.675 21:26:20 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:01.675 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:04:01.675 21:26:21 -- setup/acl.sh@47 -- # verify 00:04:01.675 21:26:21 -- setup/acl.sh@28 -- # local dev driver 00:04:01.675 21:26:21 -- setup/acl.sh@48 -- # setup reset 00:04:01.675 21:26:21 -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:01.675 21:26:21 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:01.675 00:04:01.675 real 0m1.528s 00:04:01.675 user 0m0.332s 00:04:01.675 sys 0m1.242s 00:04:01.675 21:26:21 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:01.675 ************************************ 00:04:01.675 END TEST allowed 00:04:01.675 ************************************ 00:04:01.675 21:26:21 -- common/autotest_common.sh@10 -- # set +x 00:04:01.675 
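Both acl subtests above exercise the same knob: scripts/setup.sh honors PCI_BLOCKED and PCI_ALLOWED from the environment, which is why the config pass prints "Skipping denied controller at 0000:00:06.0" when the NVMe BDF is blocked and rebinds it (nvme -> uio_pci_generic) when it is allowed. A hedged usage sketch outside the test harness (the BDF matches this VM's emulated NVMe; substitute your own):

    # Leave 0000:00:06.0 on the kernel nvme driver while configuring the rest:
    PCI_BLOCKED='0000:00:06.0' ./scripts/setup.sh config
    # Or hand exactly that controller to a userspace driver:
    PCI_ALLOWED='0000:00:06.0' ./scripts/setup.sh config
    # Return all devices to their kernel drivers afterwards:
    ./scripts/setup.sh reset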
************************************ 00:04:01.675 END TEST acl 00:04:01.675 ************************************ 00:04:01.675 00:04:01.675 real 0m4.016s 00:04:01.675 user 0m1.152s 00:04:01.675 sys 0m3.033s 00:04:01.676 21:26:21 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:01.676 21:26:21 -- common/autotest_common.sh@10 -- # set +x 00:04:01.676 21:26:21 -- setup/test-setup.sh@13 -- # run_test hugepages /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh 00:04:01.676 21:26:21 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:01.676 21:26:21 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:01.676 21:26:21 -- common/autotest_common.sh@10 -- # set +x 00:04:01.676 ************************************ 00:04:01.676 START TEST hugepages 00:04:01.676 ************************************ 00:04:01.676 21:26:21 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh 00:04:01.676 * Looking for test storage... 00:04:01.676 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:04:01.676 [lcov version-check trace elided -- identical to the 21:25:20 run above] 00:04:01.676 21:26:21 -- setup/hugepages.sh@10 -- # nodes_sys=() 00:04:01.676 21:26:21 -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:04:01.676 21:26:21 -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:04:01.676 21:26:21 -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:04:01.676 21:26:21 -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:04:01.676 21:26:21 -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:04:01.676 21:26:21 -- setup/common.sh@17 -- # local get=Hugepagesize 00:04:01.676 21:26:21 -- setup/common.sh@18 -- # local node= 00:04:01.676 21:26:21 -- setup/common.sh@19 -- # local var val 00:04:01.676 21:26:21 -- setup/common.sh@20 -- # local mem_f mem 00:04:01.676 21:26:21 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:01.676 21:26:21 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:01.676 21:26:21 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:01.676 21:26:21 -- setup/common.sh@28 -- # mapfile -t mem
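The get_meminfo trace that follows is setup/common.sh pulling a single field out of /proc/meminfo: mapfile loads every line, then a loop splits each on ': ' and continues past every key that is not the one requested. A condensed sketch of the same lookup (get_meminfo_value is a hypothetical stand-in for the traced helper):

    # Return the value column for one /proc/meminfo key, e.g. Hugepagesize.
    get_meminfo_value() {
        local get=$1 line var val _
        local -a mem
        mapfile -t mem < /proc/meminfo
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$get" ]] && { echo "$val"; return 0; }
        done
        return 1
    }
    get_meminfo_value Hugepagesize   # prints 2048 on this VM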
00:04:01.676 21:26:21 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:01.676 21:26:21 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.676 21:26:21 -- setup/common.sh@31 -- # read -r var val _ 00:04:01.676 21:26:21 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12246324 kB' 'MemFree: 2945008 kB' 'MemAvailable: 7326812 kB' 'Buffers: 35104 kB' 'Cached: 4498788 kB' 'SwapCached: 0 kB' 'Active: 411008 kB' 'Inactive: 4235600 kB' 'Active(anon): 124064 kB' 'Inactive(anon): 0 kB' 'Active(file): 286944 kB' 'Inactive(file): 4235600 kB' 'Unevictable: 28816 kB' 'Mlocked: 27280 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 220 kB' 'Writeback: 0 kB' 'AnonPages: 141632 kB' 'Mapped: 58252 kB' 'Shmem: 2600 kB' 'KReclaimable: 180812 kB' 'Slab: 261084 kB' 'SReclaimable: 180812 kB' 'SUnreclaim: 80272 kB' 'KernelStack: 5000 kB' 'PageTables: 4436 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 4026008 kB' 'Committed_AS: 374496 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 20104 kB' 'VmallocChunk: 0 kB' 'Percpu: 6720 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 149356 kB' 'DirectMap2M: 4044800 kB' 'DirectMap1G: 10485760 kB' 00:04:01.676 [setup/common.sh@32 key scan elided: each field above is tested against Hugepagesize and skipped with continue until the match] 00:04:01.678 21:26:21 -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:01.678 21:26:21 -- setup/common.sh@33 -- # echo 2048 00:04:01.678 21:26:21 -- setup/common.sh@33 -- # return 0 00:04:01.678 21:26:21 -- setup/hugepages.sh@16 -- # default_hugepages=2048 00:04:01.678 21:26:21 -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:04:01.678 21:26:21 -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages 00:04:01.678 21:26:21 -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC 00:04:01.678 21:26:21 -- setup/hugepages.sh@22 -- # unset -v HUGEMEM 00:04:01.678 21:26:21 -- setup/hugepages.sh@23 -- # unset -v HUGENODE 00:04:01.678 21:26:21 -- setup/hugepages.sh@24 -- # unset -v NRHUGE 00:04:01.678 21:26:21 -- setup/hugepages.sh@207 -- # get_nodes 00:04:01.678 21:26:21 -- setup/hugepages.sh@27 -- # local node 00:04:01.678 21:26:21 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:01.678 21:26:21 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048 00:04:01.678 21:26:21 -- setup/hugepages.sh@32 -- # no_nodes=1 00:04:01.678 21:26:21 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:01.678 21:26:21 -- setup/hugepages.sh@208 -- # clear_hp 00:04:01.678 21:26:21 -- setup/hugepages.sh@37 -- # local node hp 00:04:01.678 21:26:21 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:04:01.678 21:26:21 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:01.678 21:26:21 -- setup/hugepages.sh@41 -- # echo 0 00:04:01.678 21:26:21 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:01.678 21:26:21 -- setup/hugepages.sh@41 -- # echo 0 00:04:01.678 21:26:21 -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:04:01.678 21:26:21 -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:04:01.678 21:26:21 -- setup/hugepages.sh@210 -- # run_test default_setup default_setup 00:04:01.678 21:26:21 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:01.678 21:26:21 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:01.678 21:26:21 -- common/autotest_common.sh@10 -- # set +x 00:04:01.678 ************************************ 00:04:01.678 START TEST default_setup 00:04:01.678 ************************************ 00:04:01.678
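clear_hp above zeroes every per-node 2 MiB pool before default_setup reserves its 1024 pages; both knobs are ordinary files under sysfs and procfs, the same paths the trace assigns to default_huge_nr and global_huge_nr. A minimal sketch of that reservation, assuming a single NUMA node like this VM (run as root):

    # Drop any existing per-node reservation, then ask for 1024 x 2 MiB pages:
    echo 0 > /sys/devices/system/node/node0/hugepages/hugepages-2048kB/nr_hugepages
    echo 1024 > /proc/sys/vm/nr_hugepages
    grep '^HugePages_' /proc/meminfo   # HugePages_Total/Free should now show 1024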
common/autotest_common.sh@1114 -- # default_setup 00:04:01.678 21:26:21 -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0 00:04:01.678 21:26:21 -- setup/hugepages.sh@49 -- # local size=2097152 00:04:01.678 21:26:21 -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:04:01.678 21:26:21 -- setup/hugepages.sh@51 -- # shift 00:04:01.678 21:26:21 -- setup/hugepages.sh@52 -- # node_ids=('0') 00:04:01.678 21:26:21 -- setup/hugepages.sh@52 -- # local node_ids 00:04:01.678 21:26:21 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:01.678 21:26:21 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:04:01.678 21:26:21 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:04:01.678 21:26:21 -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:04:01.678 21:26:21 -- setup/hugepages.sh@62 -- # local user_nodes 00:04:01.678 21:26:21 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:01.678 21:26:21 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:04:01.678 21:26:21 -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:01.678 21:26:21 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:01.678 21:26:21 -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:04:01.678 21:26:21 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:04:01.678 21:26:21 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:04:01.678 21:26:21 -- setup/hugepages.sh@73 -- # return 0 00:04:01.678 21:26:21 -- setup/hugepages.sh@137 -- # setup output 00:04:01.678 21:26:21 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:01.678 21:26:21 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:01.937 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15,mount@vda:vda16, so not binding PCI dev 00:04:02.195 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:04:02.456 21:26:22 -- setup/hugepages.sh@138 -- # verify_nr_hugepages 00:04:02.456 21:26:22 -- setup/hugepages.sh@89 -- # local node 00:04:02.456 21:26:22 -- setup/hugepages.sh@90 -- # local sorted_t 00:04:02.456 21:26:22 -- setup/hugepages.sh@91 -- # local sorted_s 00:04:02.456 21:26:22 -- setup/hugepages.sh@92 -- # local surp 00:04:02.456 21:26:22 -- setup/hugepages.sh@93 -- # local resv 00:04:02.456 21:26:22 -- setup/hugepages.sh@94 -- # local anon 00:04:02.456 21:26:22 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:02.456 21:26:22 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:02.456 21:26:22 -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:02.456 21:26:22 -- setup/common.sh@18 -- # local node= 00:04:02.456 21:26:22 -- setup/common.sh@19 -- # local var val 00:04:02.456 21:26:22 -- setup/common.sh@20 -- # local mem_f mem 00:04:02.456 21:26:22 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:02.456 21:26:22 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:02.456 21:26:22 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:02.456 21:26:22 -- setup/common.sh@28 -- # mapfile -t mem 00:04:02.456 21:26:22 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:02.456 21:26:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.456 21:26:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.456 21:26:22 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12246324 kB' 'MemFree: 4991532 kB' 'MemAvailable: 9373300 kB' 'Buffers: 35104 kB' 'Cached: 4498796 kB' 'SwapCached: 0 kB' 'Active: 412308 kB' 'Inactive: 4235612 kB' 'Active(anon): 125364 kB' 'Inactive(anon): 0 kB' 'Active(file): 286944 kB' 
'Inactive(file): 4235612 kB' 'Unevictable: 28816 kB' 'Mlocked: 27280 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 220 kB' 'Writeback: 0 kB' 'AnonPages: 142920 kB' 'Mapped: 58320 kB' 'Shmem: 2596 kB' 'KReclaimable: 180764 kB' 'Slab: 261032 kB' 'SReclaimable: 180764 kB' 'SUnreclaim: 80268 kB' 'KernelStack: 5048 kB' 'PageTables: 4432 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5074584 kB' 'Committed_AS: 375544 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 20072 kB' 'VmallocChunk: 0 kB' 'Percpu: 6720 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 149356 kB' 'DirectMap2M: 4044800 kB' 'DirectMap1G: 10485760 kB' 00:04:02.456 21:26:22 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.456 21:26:22 -- setup/common.sh@32 -- # continue 00:04:02.456 21:26:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.456 21:26:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.456 21:26:22 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.456 21:26:22 -- setup/common.sh@32 -- # continue 00:04:02.456 21:26:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.456 21:26:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.456 21:26:22 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.456 21:26:22 -- setup/common.sh@32 -- # continue 00:04:02.456 21:26:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.456 21:26:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.456 21:26:22 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.456 21:26:22 -- setup/common.sh@32 -- # continue 00:04:02.456 21:26:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.456 21:26:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.456 21:26:22 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.456 21:26:22 -- setup/common.sh@32 -- # continue 00:04:02.456 21:26:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.456 21:26:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.456 21:26:22 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.456 21:26:22 -- setup/common.sh@32 -- # continue 00:04:02.456 21:26:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.456 21:26:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.456 21:26:22 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.456 21:26:22 -- setup/common.sh@32 -- # continue 00:04:02.456 21:26:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.456 21:26:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.456 21:26:22 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.456 21:26:22 -- setup/common.sh@32 -- # continue 00:04:02.456 21:26:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.456 21:26:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.456 21:26:22 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.456 21:26:22 -- setup/common.sh@32 -- # continue 00:04:02.456 21:26:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.456 21:26:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.456 21:26:22 -- setup/common.sh@32 -- # [[ Inactive(anon) == 
00:04:02.457 21:26:22 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:02.457 21:26:22 -- setup/common.sh@33 -- # echo 0
00:04:02.457 21:26:22 -- setup/common.sh@33 -- # return 0
00:04:02.457 21:26:22 -- setup/hugepages.sh@97 -- # anon=0
00:04:02.457 21:26:22 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:04:02.457 21:26:22 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:02.457 21:26:22 -- setup/common.sh@18 -- # local node=
00:04:02.457 21:26:22 -- setup/common.sh@19 -- # local var val
00:04:02.457 21:26:22 -- setup/common.sh@20 -- # local mem_f mem
00:04:02.457 21:26:22 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:02.457 21:26:22 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:02.457 21:26:22 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:02.457 21:26:22 -- setup/common.sh@28 -- # mapfile -t mem
00:04:02.457 21:26:22 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:02.457 21:26:22 -- setup/common.sh@31 -- # IFS=': '
00:04:02.457 21:26:22 -- setup/common.sh@31 -- # read -r var val _
00:04:02.457 21:26:22 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12246324 kB' 'MemFree: 4991532 kB' 'MemAvailable: 9373300 kB' 'Buffers: 35104 kB' 'Cached: 4498796 kB' 'SwapCached: 0 kB' 'Active: 411820 kB' 'Inactive: 4235612 kB' 'Active(anon): 124876 kB' 'Inactive(anon): 0 kB' 'Active(file): 286944 kB' 'Inactive(file): 4235612 kB' 'Unevictable: 28816 kB' 'Mlocked: 27280 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 220 kB' 'Writeback: 0 kB' 'AnonPages: 142428 kB' 'Mapped: 58296 kB' 'Shmem: 2596 kB' 'KReclaimable: 180764 kB' 'Slab: 261024 kB' 'SReclaimable: 180764 kB' 'SUnreclaim: 80260 kB' 'KernelStack: 5016 kB' 'PageTables: 4320 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5074584 kB' 'Committed_AS: 375544 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 20056 kB' 'VmallocChunk: 0 kB' 'Percpu: 6720 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 149356 kB' 'DirectMap2M: 4044800 kB' 'DirectMap1G: 10485760 kB'
00:04:02.459 21:26:22 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:02.459 21:26:22 -- setup/common.sh@33 -- # echo 0
00:04:02.459 21:26:22 -- setup/common.sh@33 -- # return 0
00:04:02.459 21:26:22 -- setup/hugepages.sh@99 -- # surp=0
00:04:02.459 21:26:22 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:04:02.459 21:26:22 -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:04:02.459 21:26:22 -- setup/common.sh@18 -- # local node=
00:04:02.459 21:26:22 -- setup/common.sh@19 -- # local var val
00:04:02.459 21:26:22 -- setup/common.sh@20 -- # local mem_f mem
00:04:02.459 21:26:22 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:02.459 21:26:22 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:02.459 21:26:22 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:02.459 21:26:22 -- setup/common.sh@28 -- # mapfile -t mem
00:04:02.459 21:26:22 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:02.459 21:26:22 -- setup/common.sh@31 -- # IFS=': '
00:04:02.459 21:26:22 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12246324 kB' 'MemFree: 4991532 kB' 'MemAvailable: 9373300 kB' 'Buffers: 35104 kB' 'Cached: 4498796 kB' 'SwapCached: 0 kB' 'Active: 411820 kB' 'Inactive: 4235612 kB' 'Active(anon): 124876 kB' 'Inactive(anon): 0 kB' 'Active(file): 286944 kB' 'Inactive(file): 4235612 kB' 'Unevictable: 28816 kB' 'Mlocked: 27280 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 220 kB' 'Writeback: 0 kB' 'AnonPages: 142688 kB' 'Mapped: 58296 kB' 'Shmem: 2596 kB' 'KReclaimable: 180764 kB' 'Slab: 261024 kB' 'SReclaimable: 180764 kB' 'SUnreclaim: 80260 kB' 'KernelStack: 5016 kB' 'PageTables: 4320 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5074584 kB' 'Committed_AS: 375544 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 20056 kB' 'VmallocChunk: 0 kB' 'Percpu: 6720 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 149356 kB' 'DirectMap2M: 4044800 kB' 'DirectMap1G: 10485760 kB'
00:04:02.459 21:26:22 -- setup/common.sh@31 -- # read -r var val _
00:04:02.461 21:26:22 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:02.461 21:26:22 -- setup/common.sh@33 -- # echo 0
00:04:02.461 21:26:22 -- setup/common.sh@33 -- # return 0
00:04:02.461 21:26:22 -- setup/hugepages.sh@100 -- # resv=0
00:04:02.461 nr_hugepages=1024
00:04:02.461 21:26:22 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:04:02.461 resv_hugepages=0
00:04:02.461 21:26:22 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:04:02.461 surplus_hugepages=0
00:04:02.461 21:26:22 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:04:02.461 anon_hugepages=0
00:04:02.461 21:26:22 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:04:02.461 21:26:22 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:04:02.461 21:26:22 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
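The values asserted at setup/hugepages.sh@107-109 line up with the dumps above: default_setup requested a 2097152 kB pool, and at the 2048 kB Hugepagesize reported in every dump that is 2097152 / 2048 = 1024 pages, exactly the HugePages_Total: 1024 (and Hugetlb: 2097152 kB) being verified. A minimal sketch of that arithmetic, with illustrative variable names rather than the script's own:

size_kb=2097152        # pool size handed to get_test_nr_hugepages above
hugepagesize_kb=2048   # Hugepagesize from the meminfo dumps
nr_hugepages=$(( size_kb / hugepagesize_kb ))    # -> 1024
surp=0                 # HugePages_Surp, from get_meminfo
resv=0                 # HugePages_Rsvd, from get_meminfo
total=1024             # HugePages_Total, from get_meminfo
(( total == nr_hugepages + surp + resv ))        # the @107 identity
(( total == nr_hugepages ))                      # the @109 check: no surplus or reserved pages in play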
00:04:02.461 21:26:22 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:04:02.461 21:26:22 -- setup/common.sh@17 -- # local get=HugePages_Total
00:04:02.461 21:26:22 -- setup/common.sh@18 -- # local node=
00:04:02.461 21:26:22 -- setup/common.sh@19 -- # local var val
00:04:02.461 21:26:22 -- setup/common.sh@20 -- # local mem_f mem
00:04:02.461 21:26:22 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:02.461 21:26:22 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:02.461 21:26:22 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:02.461 21:26:22 -- setup/common.sh@28 -- # mapfile -t mem
00:04:02.461 21:26:22 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:02.461 21:26:22 -- setup/common.sh@31 -- # IFS=': '
00:04:02.461 21:26:22 -- setup/common.sh@31 -- # read -r var val _
00:04:02.461 21:26:22 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12246324 kB' 'MemFree: 4991784 kB' 'MemAvailable: 9373552 kB' 'Buffers: 35104 kB' 'Cached: 4498796 kB' 'SwapCached: 0 kB' 'Active: 411896 kB' 'Inactive: 4235612 kB' 'Active(anon): 124952 kB' 'Inactive(anon): 0 kB' 'Active(file): 286944 kB' 'Inactive(file): 4235612 kB' 'Unevictable: 28816 kB' 'Mlocked: 27280 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 220 kB' 'Writeback: 0 kB' 'AnonPages: 142772 kB' 'Mapped: 58296 kB' 'Shmem: 2596 kB' 'KReclaimable: 180764 kB' 'Slab: 261020 kB' 'SReclaimable: 180764 kB' 'SUnreclaim: 80256 kB' 'KernelStack: 5016 kB' 'PageTables: 4320 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5074584 kB' 'Committed_AS: 375544 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 20056 kB' 'VmallocChunk: 0 kB' 'Percpu: 6720 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 149356 kB' 'DirectMap2M: 4044800 kB' 'DirectMap1G: 10485760 kB'
00:04:02.463 21:26:22 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:02.463 21:26:22 -- setup/common.sh@33 -- # echo 1024
00:04:02.463 21:26:22 -- setup/common.sh@33 -- # return 0
00:04:02.463 21:26:22 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:04:02.463 21:26:22 -- setup/hugepages.sh@112 -- # get_nodes
00:04:02.463 21:26:22 -- setup/hugepages.sh@27 -- # local node
00:04:02.463 21:26:22 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:02.463 21:26:22 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:04:02.463 21:26:22 -- setup/hugepages.sh@32 -- # no_nodes=1
00:04:02.463 21:26:22 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:04:02.463 21:26:22 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:04:02.463 21:26:22 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:04:02.463 21:26:22 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:04:02.463 21:26:22 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:02.463 21:26:22 -- setup/common.sh@18 -- # local node=0
00:04:02.463 21:26:22 -- setup/common.sh@19 -- # local var val
00:04:02.463 21:26:22 -- setup/common.sh@20 -- # local mem_f mem
00:04:02.463 21:26:22 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:02.463 21:26:22 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:04:02.463 21:26:22 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:04:02.463 21:26:22 -- setup/common.sh@28 -- # mapfile -t mem
00:04:02.463 21:26:22 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:02.463 21:26:22 -- setup/common.sh@31 -- # IFS=': '
00:04:02.463 21:26:22 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12246324 kB' 'MemFree: 4992192 kB' 'MemUsed: 7254132 kB' 'SwapCached: 0 kB' 'Active: 411916 kB' 'Inactive: 4235612 kB' 'Active(anon): 124972 kB' 'Inactive(anon): 0 kB' 'Active(file): 286944 kB' 'Inactive(file): 4235612 kB' 'Unevictable: 28816 kB' 'Mlocked: 27280 kB' 'Dirty: 220 kB' 'Writeback: 0 kB' 'FilePages: 4533900 kB' 'Mapped: 58296 kB' 'AnonPages: 142784 kB' 'Shmem: 2596 kB' 'KernelStack: 5032 kB' 'PageTables: 4380 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 180764 kB' 'Slab: 261020 kB' 'SReclaimable: 180764 kB' 'SUnreclaim: 80256 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
00:04:02.463 21:26:22 -- setup/common.sh@31 -- # read -r var val _
00:04:02.464 21:26:22 --
setup/common.sh@31 -- # read -r var val _ 00:04:02.464 21:26:22 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.464 21:26:22 -- setup/common.sh@32 -- # continue 00:04:02.464 21:26:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.464 21:26:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.464 21:26:22 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.464 21:26:22 -- setup/common.sh@32 -- # continue 00:04:02.464 21:26:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.464 21:26:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.464 21:26:22 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.464 21:26:22 -- setup/common.sh@32 -- # continue 00:04:02.464 21:26:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.464 21:26:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.464 21:26:22 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.464 21:26:22 -- setup/common.sh@32 -- # continue 00:04:02.464 21:26:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.464 21:26:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.464 21:26:22 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.464 21:26:22 -- setup/common.sh@32 -- # continue 00:04:02.464 21:26:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.464 21:26:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.464 21:26:22 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.464 21:26:22 -- setup/common.sh@32 -- # continue 00:04:02.464 21:26:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.464 21:26:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.464 21:26:22 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.464 21:26:22 -- setup/common.sh@32 -- # continue 00:04:02.464 21:26:22 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.464 21:26:22 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.464 21:26:22 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.464 21:26:22 -- setup/common.sh@33 -- # echo 0 00:04:02.464 21:26:22 -- setup/common.sh@33 -- # return 0 00:04:02.464 21:26:22 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:02.464 21:26:22 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:02.464 21:26:22 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:02.464 21:26:22 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:02.464 node0=1024 expecting 1024 00:04:02.464 21:26:22 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:04:02.464 21:26:22 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:02.464 00:04:02.464 real 0m0.944s 00:04:02.464 user 0m0.310s 00:04:02.464 sys 0m0.621s 00:04:02.464 21:26:22 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:02.464 ************************************ 00:04:02.464 END TEST default_setup 00:04:02.464 ************************************ 00:04:02.465 21:26:22 -- common/autotest_common.sh@10 -- # set +x 00:04:02.465 21:26:22 -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc 00:04:02.465 21:26:22 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:02.465 21:26:22 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:02.465 21:26:22 -- common/autotest_common.sh@10 -- # set +x 00:04:02.723 ************************************ 00:04:02.723 START TEST 
per_node_1G_alloc 00:04:02.723 ************************************ 00:04:02.723 21:26:22 -- common/autotest_common.sh@1114 -- # per_node_1G_alloc 00:04:02.723 21:26:22 -- setup/hugepages.sh@143 -- # local IFS=, 00:04:02.723 21:26:22 -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 00:04:02.723 21:26:22 -- setup/hugepages.sh@49 -- # local size=1048576 00:04:02.723 21:26:22 -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:04:02.723 21:26:22 -- setup/hugepages.sh@51 -- # shift 00:04:02.723 21:26:22 -- setup/hugepages.sh@52 -- # node_ids=('0') 00:04:02.723 21:26:22 -- setup/hugepages.sh@52 -- # local node_ids 00:04:02.723 21:26:22 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:02.723 21:26:22 -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:04:02.723 21:26:22 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:04:02.723 21:26:22 -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:04:02.723 21:26:22 -- setup/hugepages.sh@62 -- # local user_nodes 00:04:02.723 21:26:22 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:04:02.723 21:26:22 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:04:02.723 21:26:22 -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:02.723 21:26:22 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:02.723 21:26:22 -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:04:02.723 21:26:22 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:04:02.723 21:26:22 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:04:02.723 21:26:22 -- setup/hugepages.sh@73 -- # return 0 00:04:02.723 21:26:22 -- setup/hugepages.sh@146 -- # NRHUGE=512 00:04:02.723 21:26:22 -- setup/hugepages.sh@146 -- # HUGENODE=0 00:04:02.723 21:26:22 -- setup/hugepages.sh@146 -- # setup output 00:04:02.723 21:26:22 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:02.723 21:26:22 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:02.981 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15,mount@vda:vda16, so not binding PCI dev 00:04:02.981 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:02.981 21:26:23 -- setup/hugepages.sh@147 -- # nr_hugepages=512 00:04:02.981 21:26:23 -- setup/hugepages.sh@147 -- # verify_nr_hugepages 00:04:02.981 21:26:23 -- setup/hugepages.sh@89 -- # local node 00:04:02.982 21:26:23 -- setup/hugepages.sh@90 -- # local sorted_t 00:04:02.982 21:26:23 -- setup/hugepages.sh@91 -- # local sorted_s 00:04:02.982 21:26:23 -- setup/hugepages.sh@92 -- # local surp 00:04:02.982 21:26:23 -- setup/hugepages.sh@93 -- # local resv 00:04:02.982 21:26:23 -- setup/hugepages.sh@94 -- # local anon 00:04:02.982 21:26:23 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:02.982 21:26:23 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:02.982 21:26:23 -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:02.982 21:26:23 -- setup/common.sh@18 -- # local node= 00:04:02.982 21:26:23 -- setup/common.sh@19 -- # local var val 00:04:02.982 21:26:23 -- setup/common.sh@20 -- # local mem_f mem 00:04:02.982 21:26:23 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:02.982 21:26:23 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:02.982 21:26:23 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:02.982 21:26:23 -- setup/common.sh@28 -- # mapfile -t mem 00:04:03.244 21:26:23 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:03.244 21:26:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.244 
21:26:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.244 21:26:23 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12246324 kB' 'MemFree: 6063864 kB' 'MemAvailable: 10445632 kB' 'Buffers: 35104 kB' 'Cached: 4498792 kB' 'SwapCached: 0 kB' 'Active: 412056 kB' 'Inactive: 4235612 kB' 'Active(anon): 125112 kB' 'Inactive(anon): 0 kB' 'Active(file): 286944 kB' 'Inactive(file): 4235612 kB' 'Unevictable: 28816 kB' 'Mlocked: 27280 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 220 kB' 'Writeback: 0 kB' 'AnonPages: 142936 kB' 'Mapped: 58320 kB' 'Shmem: 2592 kB' 'KReclaimable: 180764 kB' 'Slab: 261036 kB' 'SReclaimable: 180764 kB' 'SUnreclaim: 80272 kB' 'KernelStack: 5024 kB' 'PageTables: 4380 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5598872 kB' 'Committed_AS: 375544 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 20088 kB' 'VmallocChunk: 0 kB' 'Percpu: 6720 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 149356 kB' 'DirectMap2M: 4044800 kB' 'DirectMap1G: 10485760 kB' 00:04:03.244 21:26:23 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.244 21:26:23 -- setup/common.sh@32 -- # continue 00:04:03.244 21:26:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.244 21:26:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.244 21:26:23 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.244 21:26:23 -- setup/common.sh@32 -- # continue 00:04:03.244 21:26:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.244 21:26:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.244 21:26:23 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.244 21:26:23 -- setup/common.sh@32 -- # continue 00:04:03.244 21:26:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.244 21:26:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.244 21:26:23 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.244 21:26:23 -- setup/common.sh@32 -- # continue 00:04:03.244 21:26:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.244 21:26:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.244 21:26:23 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.244 21:26:23 -- setup/common.sh@32 -- # continue 00:04:03.244 21:26:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.244 21:26:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.244 21:26:23 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.244 21:26:23 -- setup/common.sh@32 -- # continue 00:04:03.244 21:26:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.244 21:26:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.244 21:26:23 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.244 21:26:23 -- setup/common.sh@32 -- # continue 00:04:03.244 21:26:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.244 21:26:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.244 21:26:23 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.244 21:26:23 -- setup/common.sh@32 -- # continue 00:04:03.244 21:26:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.244 21:26:23 -- setup/common.sh@31 -- # read -r var val _ 
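[Editor's note] The wall of "[[ <field> == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] / continue" entries here is the xtrace of setup/common.sh's get_meminfo walking every /proc/meminfo field until it reaches the one it was asked for. A minimal reconstruction of that helper, pieced together from the commands visible in this trace (the real setup/common.sh may differ in detail):

get_meminfo() {
    local get=$1 node=$2
    local var val
    local mem_f mem
    mem_f=/proc/meminfo
    # a per-node query reads that node's own meminfo instead; the
    # "Node N " prefix is stripped so field names match /proc/meminfo
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    shopt -s extglob
    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")
    # the printf feeding this loop is the big quoted meminfo dump above
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue   # the repeated checks in the trace
        echo "$val"                        # e.g. "echo 1024" / "echo 0" above
        return 0
    done < <(printf '%s\n' "${mem[@]}")
    return 1
}

Run on this box, get_meminfo HugePages_Surp 0 reads /sys/devices/system/node/node0/meminfo exactly as the node=0 entries earlier in the log show, and prints 0.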
00:04:03.244 21:26:23 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.244 21:26:23 -- setup/common.sh@32 -- # continue 00:04:03.244 21:26:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.244 21:26:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.244 21:26:23 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.244 21:26:23 -- setup/common.sh@32 -- # continue 00:04:03.244 21:26:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.244 21:26:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.244 21:26:23 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.244 21:26:23 -- setup/common.sh@32 -- # continue 00:04:03.244 21:26:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.244 21:26:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.244 21:26:23 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.244 21:26:23 -- setup/common.sh@32 -- # continue 00:04:03.244 21:26:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.244 21:26:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.244 21:26:23 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.244 21:26:23 -- setup/common.sh@32 -- # continue 00:04:03.244 21:26:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.244 21:26:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.244 21:26:23 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.244 21:26:23 -- setup/common.sh@32 -- # continue 00:04:03.244 21:26:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.244 21:26:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.244 21:26:23 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.244 21:26:23 -- setup/common.sh@32 -- # continue 00:04:03.244 21:26:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.244 21:26:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.244 21:26:23 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.244 21:26:23 -- setup/common.sh@32 -- # continue 00:04:03.244 21:26:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.244 21:26:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.244 21:26:23 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.244 21:26:23 -- setup/common.sh@32 -- # continue 00:04:03.244 21:26:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.244 21:26:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.244 21:26:23 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.244 21:26:23 -- setup/common.sh@32 -- # continue 00:04:03.244 21:26:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.244 21:26:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.244 21:26:23 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.244 21:26:23 -- setup/common.sh@32 -- # continue 00:04:03.244 21:26:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.244 21:26:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.244 21:26:23 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.244 21:26:23 -- setup/common.sh@32 -- # continue 00:04:03.244 21:26:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.244 21:26:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.244 21:26:23 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.244 21:26:23 -- setup/common.sh@32 -- # continue 00:04:03.244 21:26:23 -- setup/common.sh@31 -- # IFS=': ' 
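[Editor's note] The pool being verified in this scan is the one per_node_1G_alloc configured at the top of the test: get_test_nr_hugepages was called with size=1048576 (kB, i.e. 1 GiB) for node 0 and arrived at nr_hugepages=512. With the 2048 kB Hugepagesize visible in the meminfo dumps, that is a straight division. A sketch of the arithmetic, assuming the helper divides rather than hard-codes (variable names borrowed from the trace):

size=1048576                                  # requested pool, in kB (1 GiB)
hugepagesize=$(get_meminfo Hugepagesize)      # 2048 kB on this runner
nr_hugepages=$(( size / hugepagesize ))       # -> 512 pages
# hugepages.sh@146 then hands this to setup.sh pinned to a single node:
NRHUGE=$nr_hugepages HUGENODE=0 /home/vagrant/spdk_repo/spdk/scripts/setup.sh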
00:04:03.244 21:26:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.244 21:26:23 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.244 21:26:23 -- setup/common.sh@32 -- # continue 00:04:03.244 21:26:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.244 21:26:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.244 21:26:23 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.244 21:26:23 -- setup/common.sh@32 -- # continue 00:04:03.244 21:26:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.244 21:26:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.244 21:26:23 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.244 21:26:23 -- setup/common.sh@32 -- # continue 00:04:03.245 21:26:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.245 21:26:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.245 21:26:23 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.245 21:26:23 -- setup/common.sh@32 -- # continue 00:04:03.245 21:26:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.245 21:26:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.245 21:26:23 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.245 21:26:23 -- setup/common.sh@32 -- # continue 00:04:03.245 21:26:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.245 21:26:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.245 21:26:23 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.245 21:26:23 -- setup/common.sh@32 -- # continue 00:04:03.245 21:26:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.245 21:26:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.245 21:26:23 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.245 21:26:23 -- setup/common.sh@32 -- # continue 00:04:03.245 21:26:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.245 21:26:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.245 21:26:23 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.245 21:26:23 -- setup/common.sh@32 -- # continue 00:04:03.245 21:26:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.245 21:26:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.245 21:26:23 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.245 21:26:23 -- setup/common.sh@32 -- # continue 00:04:03.245 21:26:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.245 21:26:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.245 21:26:23 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.245 21:26:23 -- setup/common.sh@32 -- # continue 00:04:03.245 21:26:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.245 21:26:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.245 21:26:23 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.245 21:26:23 -- setup/common.sh@32 -- # continue 00:04:03.245 21:26:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.245 21:26:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.245 21:26:23 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.245 21:26:23 -- setup/common.sh@32 -- # continue 00:04:03.245 21:26:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.245 21:26:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.245 21:26:23 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.245 21:26:23 -- setup/common.sh@32 -- # 
continue 00:04:03.245 21:26:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.245 21:26:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.245 21:26:23 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.245 21:26:23 -- setup/common.sh@32 -- # continue 00:04:03.245 21:26:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.245 21:26:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.245 21:26:23 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.245 21:26:23 -- setup/common.sh@32 -- # continue 00:04:03.245 21:26:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.245 21:26:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.245 21:26:23 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.245 21:26:23 -- setup/common.sh@32 -- # continue 00:04:03.245 21:26:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.245 21:26:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.245 21:26:23 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.245 21:26:23 -- setup/common.sh@32 -- # continue 00:04:03.245 21:26:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.245 21:26:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.245 21:26:23 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.245 21:26:23 -- setup/common.sh@32 -- # continue 00:04:03.245 21:26:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.245 21:26:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.245 21:26:23 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.245 21:26:23 -- setup/common.sh@32 -- # continue 00:04:03.245 21:26:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.245 21:26:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.245 21:26:23 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.245 21:26:23 -- setup/common.sh@33 -- # echo 0 00:04:03.245 21:26:23 -- setup/common.sh@33 -- # return 0 00:04:03.245 21:26:23 -- setup/hugepages.sh@97 -- # anon=0 00:04:03.245 21:26:23 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:03.245 21:26:23 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:03.245 21:26:23 -- setup/common.sh@18 -- # local node= 00:04:03.245 21:26:23 -- setup/common.sh@19 -- # local var val 00:04:03.245 21:26:23 -- setup/common.sh@20 -- # local mem_f mem 00:04:03.245 21:26:23 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:03.245 21:26:23 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:03.245 21:26:23 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:03.245 21:26:23 -- setup/common.sh@28 -- # mapfile -t mem 00:04:03.245 21:26:23 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:03.245 21:26:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.245 21:26:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.245 21:26:23 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12246324 kB' 'MemFree: 6064128 kB' 'MemAvailable: 10445896 kB' 'Buffers: 35104 kB' 'Cached: 4498792 kB' 'SwapCached: 0 kB' 'Active: 412244 kB' 'Inactive: 4235612 kB' 'Active(anon): 125300 kB' 'Inactive(anon): 0 kB' 'Active(file): 286944 kB' 'Inactive(file): 4235612 kB' 'Unevictable: 28816 kB' 'Mlocked: 27280 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 220 kB' 'Writeback: 0 kB' 'AnonPages: 142936 kB' 'Mapped: 58320 kB' 'Shmem: 2592 kB' 'KReclaimable: 180764 kB' 'Slab: 261032 kB' 'SReclaimable: 180764 kB' 
'SUnreclaim: 80268 kB' 'KernelStack: 5040 kB' 'PageTables: 4432 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5598872 kB' 'Committed_AS: 378352 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 20072 kB' 'VmallocChunk: 0 kB' 'Percpu: 6720 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 149356 kB' 'DirectMap2M: 4044800 kB' 'DirectMap1G: 10485760 kB' 00:04:03.245 21:26:23 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.245 21:26:23 -- setup/common.sh@32 -- # continue 00:04:03.245 21:26:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.245 21:26:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.245 21:26:23 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.245 21:26:23 -- setup/common.sh@32 -- # continue 00:04:03.245 21:26:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.245 21:26:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.245 21:26:23 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.245 21:26:23 -- setup/common.sh@32 -- # continue 00:04:03.245 21:26:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.245 21:26:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.245 21:26:23 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.245 21:26:23 -- setup/common.sh@32 -- # continue 00:04:03.245 21:26:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.245 21:26:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.245 21:26:23 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.245 21:26:23 -- setup/common.sh@32 -- # continue 00:04:03.245 21:26:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.245 21:26:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.245 21:26:23 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.245 21:26:23 -- setup/common.sh@32 -- # continue 00:04:03.245 21:26:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.245 21:26:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.245 21:26:23 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.245 21:26:23 -- setup/common.sh@32 -- # continue 00:04:03.245 21:26:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.245 21:26:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.245 21:26:23 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.245 21:26:23 -- setup/common.sh@32 -- # continue 00:04:03.245 21:26:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.245 21:26:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.245 21:26:23 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.245 21:26:23 -- setup/common.sh@32 -- # continue 00:04:03.245 21:26:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.245 21:26:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.245 21:26:23 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.245 21:26:23 -- setup/common.sh@32 -- # continue 00:04:03.245 21:26:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.245 21:26:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.245 21:26:23 -- setup/common.sh@32 -- # [[ Active(file) == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.245 21:26:23 -- setup/common.sh@32 -- # continue 00:04:03.245 21:26:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.245 21:26:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.245 21:26:23 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.245 21:26:23 -- setup/common.sh@32 -- # continue 00:04:03.245 21:26:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.245 21:26:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.245 21:26:23 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.245 21:26:23 -- setup/common.sh@32 -- # continue 00:04:03.245 21:26:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.245 21:26:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.245 21:26:23 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.245 21:26:23 -- setup/common.sh@32 -- # continue 00:04:03.245 21:26:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.245 21:26:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.245 21:26:23 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.245 21:26:23 -- setup/common.sh@32 -- # continue 00:04:03.245 21:26:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.246 21:26:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.246 21:26:23 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.246 21:26:23 -- setup/common.sh@32 -- # continue 00:04:03.246 21:26:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.246 21:26:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.246 21:26:23 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.246 21:26:23 -- setup/common.sh@32 -- # continue 00:04:03.246 21:26:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.246 21:26:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.246 21:26:23 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.246 21:26:23 -- setup/common.sh@32 -- # continue 00:04:03.246 21:26:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.246 21:26:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.246 21:26:23 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.246 21:26:23 -- setup/common.sh@32 -- # continue 00:04:03.246 21:26:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.246 21:26:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.246 21:26:23 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.246 21:26:23 -- setup/common.sh@32 -- # continue 00:04:03.246 21:26:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.246 21:26:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.246 21:26:23 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.246 21:26:23 -- setup/common.sh@32 -- # continue 00:04:03.246 21:26:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.246 21:26:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.246 21:26:23 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.246 21:26:23 -- setup/common.sh@32 -- # continue 00:04:03.246 21:26:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.246 21:26:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.246 21:26:23 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.246 21:26:23 -- setup/common.sh@32 -- # continue 00:04:03.246 21:26:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.246 21:26:23 -- setup/common.sh@31 -- # read -r 
var val _ 00:04:03.246 21:26:23 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.246 21:26:23 -- setup/common.sh@32 -- # continue 00:04:03.246 21:26:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.246 21:26:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.246 21:26:23 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.246 21:26:23 -- setup/common.sh@32 -- # continue 00:04:03.246 21:26:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.246 21:26:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.246 21:26:23 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.246 21:26:23 -- setup/common.sh@32 -- # continue 00:04:03.246 21:26:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.246 21:26:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.246 21:26:23 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.246 21:26:23 -- setup/common.sh@32 -- # continue 00:04:03.246 21:26:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.246 21:26:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.246 21:26:23 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.246 21:26:23 -- setup/common.sh@32 -- # continue 00:04:03.246 21:26:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.246 21:26:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.246 21:26:23 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.246 21:26:23 -- setup/common.sh@32 -- # continue 00:04:03.246 21:26:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.246 21:26:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.246 21:26:23 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.246 21:26:23 -- setup/common.sh@32 -- # continue 00:04:03.246 21:26:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.246 21:26:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.246 21:26:23 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.246 21:26:23 -- setup/common.sh@32 -- # continue 00:04:03.246 21:26:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.246 21:26:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.246 21:26:23 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.246 21:26:23 -- setup/common.sh@32 -- # continue 00:04:03.246 21:26:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.246 21:26:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.246 21:26:23 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.246 21:26:23 -- setup/common.sh@32 -- # continue 00:04:03.246 21:26:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.246 21:26:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.246 21:26:23 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.246 21:26:23 -- setup/common.sh@32 -- # continue 00:04:03.246 21:26:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.246 21:26:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.246 21:26:23 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.246 21:26:23 -- setup/common.sh@32 -- # continue 00:04:03.246 21:26:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.246 21:26:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.246 21:26:23 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.246 21:26:23 -- setup/common.sh@32 -- # continue 
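[Editor's note] Each of these scans ends the same way: nothing is printed until the field matches, then "echo <val>; return 0" stops the loop, so verify_nr_hugepages can capture the number with a command substitution. The three call sites, as they appear in the hugepages.sh trace around this point:

anon=$(get_meminfo AnonHugePages)    # hugepages.sh@97; 0 here, no THP in use
surp=$(get_meminfo HugePages_Surp)   # hugepages.sh@99; the scan in progress
resv=$(get_meminfo HugePages_Rsvd)   # hugepages.sh@100; scanned next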
00:04:03.246 21:26:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.246 21:26:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.246 21:26:23 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.246 21:26:23 -- setup/common.sh@32 -- # continue 00:04:03.246 21:26:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.246 21:26:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.246 21:26:23 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.246 21:26:23 -- setup/common.sh@32 -- # continue 00:04:03.246 21:26:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.246 21:26:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.246 21:26:23 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.246 21:26:23 -- setup/common.sh@32 -- # continue 00:04:03.246 21:26:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.246 21:26:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.246 21:26:23 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.246 21:26:23 -- setup/common.sh@32 -- # continue 00:04:03.246 21:26:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.246 21:26:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.246 21:26:23 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.246 21:26:23 -- setup/common.sh@32 -- # continue 00:04:03.246 21:26:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.246 21:26:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.246 21:26:23 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.246 21:26:23 -- setup/common.sh@32 -- # continue 00:04:03.246 21:26:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.246 21:26:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.246 21:26:23 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.246 21:26:23 -- setup/common.sh@32 -- # continue 00:04:03.246 21:26:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.246 21:26:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.246 21:26:23 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.246 21:26:23 -- setup/common.sh@32 -- # continue 00:04:03.246 21:26:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.246 21:26:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.246 21:26:23 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.246 21:26:23 -- setup/common.sh@32 -- # continue 00:04:03.246 21:26:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.246 21:26:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.246 21:26:23 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.246 21:26:23 -- setup/common.sh@32 -- # continue 00:04:03.246 21:26:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.246 21:26:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.246 21:26:23 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.246 21:26:23 -- setup/common.sh@32 -- # continue 00:04:03.246 21:26:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.246 21:26:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.246 21:26:23 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.246 21:26:23 -- setup/common.sh@32 -- # continue 00:04:03.246 21:26:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.246 21:26:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.246 21:26:23 -- 
setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.246 21:26:23 -- setup/common.sh@32 -- # continue 00:04:03.246 21:26:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.246 21:26:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.246 21:26:23 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.246 21:26:23 -- setup/common.sh@33 -- # echo 0 00:04:03.246 21:26:23 -- setup/common.sh@33 -- # return 0 00:04:03.246 21:26:23 -- setup/hugepages.sh@99 -- # surp=0 00:04:03.246 21:26:23 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:03.246 21:26:23 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:03.246 21:26:23 -- setup/common.sh@18 -- # local node= 00:04:03.246 21:26:23 -- setup/common.sh@19 -- # local var val 00:04:03.246 21:26:23 -- setup/common.sh@20 -- # local mem_f mem 00:04:03.246 21:26:23 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:03.246 21:26:23 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:03.246 21:26:23 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:03.246 21:26:23 -- setup/common.sh@28 -- # mapfile -t mem 00:04:03.246 21:26:23 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:03.246 21:26:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.246 21:26:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.247 21:26:23 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12246324 kB' 'MemFree: 6064036 kB' 'MemAvailable: 10445804 kB' 'Buffers: 35104 kB' 'Cached: 4498792 kB' 'SwapCached: 0 kB' 'Active: 412052 kB' 'Inactive: 4235612 kB' 'Active(anon): 125108 kB' 'Inactive(anon): 0 kB' 'Active(file): 286944 kB' 'Inactive(file): 4235612 kB' 'Unevictable: 28816 kB' 'Mlocked: 27280 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 220 kB' 'Writeback: 0 kB' 'AnonPages: 142704 kB' 'Mapped: 58296 kB' 'Shmem: 2592 kB' 'KReclaimable: 180764 kB' 'Slab: 261020 kB' 'SReclaimable: 180764 kB' 'SUnreclaim: 80256 kB' 'KernelStack: 4992 kB' 'PageTables: 4276 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5598872 kB' 'Committed_AS: 375544 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 20040 kB' 'VmallocChunk: 0 kB' 'Percpu: 6720 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 149356 kB' 'DirectMap2M: 4044800 kB' 'DirectMap1G: 10485760 kB' 00:04:03.247 21:26:23 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.247 21:26:23 -- setup/common.sh@32 -- # continue 00:04:03.247 21:26:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.247 21:26:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.247 21:26:23 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.247 21:26:23 -- setup/common.sh@32 -- # continue 00:04:03.247 21:26:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.247 21:26:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.247 21:26:23 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.247 21:26:23 -- setup/common.sh@32 -- # continue 00:04:03.247 21:26:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.247 21:26:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.247 21:26:23 -- setup/common.sh@32 -- # [[ Buffers == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.247 21:26:23 -- setup/common.sh@32 -- # continue 00:04:03.247 21:26:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.247 21:26:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.247 21:26:23 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.247 21:26:23 -- setup/common.sh@32 -- # continue 00:04:03.247 21:26:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.247 21:26:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.247 21:26:23 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.247 21:26:23 -- setup/common.sh@32 -- # continue 00:04:03.247 21:26:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.247 21:26:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.247 21:26:23 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.247 21:26:23 -- setup/common.sh@32 -- # continue 00:04:03.247 21:26:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.247 21:26:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.247 21:26:23 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.247 21:26:23 -- setup/common.sh@32 -- # continue 00:04:03.247 21:26:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.247 21:26:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.247 21:26:23 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.247 21:26:23 -- setup/common.sh@32 -- # continue 00:04:03.247 21:26:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.247 21:26:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.247 21:26:23 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.247 21:26:23 -- setup/common.sh@32 -- # continue 00:04:03.247 21:26:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.247 21:26:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.247 21:26:23 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.247 21:26:23 -- setup/common.sh@32 -- # continue 00:04:03.247 21:26:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.247 21:26:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.247 21:26:23 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.247 21:26:23 -- setup/common.sh@32 -- # continue 00:04:03.247 21:26:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.247 21:26:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.247 21:26:23 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.247 21:26:23 -- setup/common.sh@32 -- # continue 00:04:03.247 21:26:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.247 21:26:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.247 21:26:23 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.247 21:26:23 -- setup/common.sh@32 -- # continue 00:04:03.247 21:26:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.247 21:26:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.247 21:26:23 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.247 21:26:23 -- setup/common.sh@32 -- # continue 00:04:03.247 21:26:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.247 21:26:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.247 21:26:23 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.247 21:26:23 -- setup/common.sh@32 -- # continue 00:04:03.247 21:26:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.247 21:26:23 -- 
setup/common.sh@31 -- # read -r var val _ 00:04:03.247 21:26:23 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.247 21:26:23 -- setup/common.sh@32 -- # continue 00:04:03.247 21:26:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.247 21:26:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.247 21:26:23 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.247 21:26:23 -- setup/common.sh@32 -- # continue 00:04:03.247 21:26:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.247 21:26:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.247 21:26:23 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.247 21:26:23 -- setup/common.sh@32 -- # continue 00:04:03.247 21:26:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.247 21:26:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.247 21:26:23 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.247 21:26:23 -- setup/common.sh@32 -- # continue 00:04:03.247 21:26:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.247 21:26:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.247 21:26:23 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.247 21:26:23 -- setup/common.sh@32 -- # continue 00:04:03.247 21:26:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.247 21:26:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.247 21:26:23 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.247 21:26:23 -- setup/common.sh@32 -- # continue 00:04:03.247 21:26:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.247 21:26:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.247 21:26:23 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.247 21:26:23 -- setup/common.sh@32 -- # continue 00:04:03.247 21:26:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.247 21:26:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.247 21:26:23 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.247 21:26:23 -- setup/common.sh@32 -- # continue 00:04:03.247 21:26:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.247 21:26:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.247 21:26:23 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.247 21:26:23 -- setup/common.sh@32 -- # continue 00:04:03.247 21:26:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.247 21:26:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.247 21:26:23 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.247 21:26:23 -- setup/common.sh@32 -- # continue 00:04:03.247 21:26:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.247 21:26:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.247 21:26:23 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.247 21:26:23 -- setup/common.sh@32 -- # continue 00:04:03.247 21:26:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.247 21:26:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.247 21:26:23 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.247 21:26:23 -- setup/common.sh@32 -- # continue 00:04:03.247 21:26:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.247 21:26:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.247 21:26:23 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.247 21:26:23 -- setup/common.sh@32 -- # continue 
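[Editor's note] Once this HugePages_Rsvd scan returns (resv=0, a few entries further down), the verification is plain arithmetic plus a per-node comparison, as hugepages.sh@107-130 shows both here and in the default_setup run earlier in the log. A reconstruction from those trace lines; the bookkeeping that populates nodes_sys/nodes_test (get_nodes) is simplified to the single-node state this runner reports:

nr_hugepages=512 surp=0 resv=0               # captured above
nodes_test[0]=512 nodes_sys[0]=512           # assumed single-node state

# pool-wide: kernel-reported total must equal configured + surplus + reserved
total=$(get_meminfo HugePages_Total)
(( total == nr_hugepages + surp + resv ))    # trace: (( 512 == ... ))

# per node: fold reserved/surplus into the expectation, then compare
for node in "${!nodes_test[@]}"; do
    (( nodes_test[node] += resv ))                                   # @116
    (( nodes_test[node] += $(get_meminfo HugePages_Surp "$node") ))  # @117
    sorted_t[nodes_test[node]]=1                                     # @127
    sorted_s[nodes_sys[node]]=1
    echo "node$node=${nodes_sys[node]} expecting ${nodes_test[node]}"
    [[ ${nodes_sys[node]} == "${nodes_test[node]}" ]]                # @130
done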
00:04:03.247 21:26:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.247 21:26:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.247 21:26:23 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.247 21:26:23 -- setup/common.sh@32 -- # continue 00:04:03.247 21:26:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.247 21:26:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.247 21:26:23 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.247 21:26:23 -- setup/common.sh@32 -- # continue 00:04:03.247 21:26:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.247 21:26:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.247 21:26:23 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.247 21:26:23 -- setup/common.sh@32 -- # continue 00:04:03.247 21:26:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.247 21:26:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.247 21:26:23 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.247 21:26:23 -- setup/common.sh@32 -- # continue 00:04:03.247 21:26:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.247 21:26:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.247 21:26:23 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.247 21:26:23 -- setup/common.sh@32 -- # continue 00:04:03.247 21:26:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.247 21:26:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.247 21:26:23 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.247 21:26:23 -- setup/common.sh@32 -- # continue 00:04:03.247 21:26:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.247 21:26:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.247 21:26:23 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.247 21:26:23 -- setup/common.sh@32 -- # continue 00:04:03.248 21:26:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.248 21:26:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.248 21:26:23 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.248 21:26:23 -- setup/common.sh@32 -- # continue 00:04:03.248 21:26:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.248 21:26:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.248 21:26:23 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.248 21:26:23 -- setup/common.sh@32 -- # continue 00:04:03.248 21:26:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.248 21:26:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.248 21:26:23 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.248 21:26:23 -- setup/common.sh@32 -- # continue 00:04:03.248 21:26:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.248 21:26:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.248 21:26:23 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.248 21:26:23 -- setup/common.sh@32 -- # continue 00:04:03.248 21:26:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.248 21:26:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.248 21:26:23 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.248 21:26:23 -- setup/common.sh@32 -- # continue 00:04:03.248 21:26:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.248 21:26:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.248 21:26:23 -- 
setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.248 21:26:23 -- setup/common.sh@32 -- # continue 00:04:03.248 21:26:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.248 21:26:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.248 21:26:23 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.248 21:26:23 -- setup/common.sh@32 -- # continue 00:04:03.248 21:26:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.248 21:26:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.248 21:26:23 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.248 21:26:23 -- setup/common.sh@32 -- # continue 00:04:03.248 21:26:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.248 21:26:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.248 21:26:23 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.248 21:26:23 -- setup/common.sh@32 -- # continue 00:04:03.248 21:26:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.248 21:26:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.248 21:26:23 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.248 21:26:23 -- setup/common.sh@32 -- # continue 00:04:03.248 21:26:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.248 21:26:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.248 21:26:23 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.248 21:26:23 -- setup/common.sh@32 -- # continue 00:04:03.248 21:26:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.248 21:26:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.248 21:26:23 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.248 21:26:23 -- setup/common.sh@32 -- # continue 00:04:03.248 21:26:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.248 21:26:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.248 21:26:23 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:03.248 21:26:23 -- setup/common.sh@33 -- # echo 0 00:04:03.248 21:26:23 -- setup/common.sh@33 -- # return 0 00:04:03.248 21:26:23 -- setup/hugepages.sh@100 -- # resv=0 00:04:03.248 nr_hugepages=512 00:04:03.248 21:26:23 -- setup/hugepages.sh@102 -- # echo nr_hugepages=512 00:04:03.248 resv_hugepages=0 00:04:03.248 21:26:23 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:03.248 surplus_hugepages=0 00:04:03.248 21:26:23 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:03.248 anon_hugepages=0 00:04:03.248 21:26:23 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:03.248 21:26:23 -- setup/hugepages.sh@107 -- # (( 512 == nr_hugepages + surp + resv )) 00:04:03.248 21:26:23 -- setup/hugepages.sh@109 -- # (( 512 == nr_hugepages )) 00:04:03.248 21:26:23 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:03.248 21:26:23 -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:03.248 21:26:23 -- setup/common.sh@18 -- # local node= 00:04:03.248 21:26:23 -- setup/common.sh@19 -- # local var val 00:04:03.248 21:26:23 -- setup/common.sh@20 -- # local mem_f mem 00:04:03.248 21:26:23 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:03.248 21:26:23 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:03.248 21:26:23 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:03.248 21:26:23 -- setup/common.sh@28 -- # mapfile -t mem 00:04:03.248 21:26:23 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) 
}") 00:04:03.248 21:26:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.248 21:26:23 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12246324 kB' 'MemFree: 6064036 kB' 'MemAvailable: 10445804 kB' 'Buffers: 35104 kB' 'Cached: 4498792 kB' 'SwapCached: 0 kB' 'Active: 411844 kB' 'Inactive: 4235612 kB' 'Active(anon): 124900 kB' 'Inactive(anon): 0 kB' 'Active(file): 286944 kB' 'Inactive(file): 4235612 kB' 'Unevictable: 28816 kB' 'Mlocked: 27280 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 220 kB' 'Writeback: 0 kB' 'AnonPages: 142740 kB' 'Mapped: 58296 kB' 'Shmem: 2592 kB' 'KReclaimable: 180764 kB' 'Slab: 261024 kB' 'SReclaimable: 180764 kB' 'SUnreclaim: 80260 kB' 'KernelStack: 4992 kB' 'PageTables: 4280 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5598872 kB' 'Committed_AS: 375544 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 20056 kB' 'VmallocChunk: 0 kB' 'Percpu: 6720 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 149356 kB' 'DirectMap2M: 4044800 kB' 'DirectMap1G: 10485760 kB' 00:04:03.248 21:26:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.248 21:26:23 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.248 21:26:23 -- setup/common.sh@32 -- # continue 00:04:03.248 21:26:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.248 21:26:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.248 21:26:23 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.248 21:26:23 -- setup/common.sh@32 -- # continue 00:04:03.248 21:26:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.248 21:26:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.248 21:26:23 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.248 21:26:23 -- setup/common.sh@32 -- # continue 00:04:03.248 21:26:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.248 21:26:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.248 21:26:23 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.248 21:26:23 -- setup/common.sh@32 -- # continue 00:04:03.248 21:26:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.248 21:26:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.248 21:26:23 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.248 21:26:23 -- setup/common.sh@32 -- # continue 00:04:03.248 21:26:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.248 21:26:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.248 21:26:23 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.248 21:26:23 -- setup/common.sh@32 -- # continue 00:04:03.248 21:26:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.248 21:26:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.248 21:26:23 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.248 21:26:23 -- setup/common.sh@32 -- # continue 00:04:03.248 21:26:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.248 21:26:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.248 21:26:23 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.248 21:26:23 -- setup/common.sh@32 -- # continue 00:04:03.248 21:26:23 -- 
setup/common.sh@31 -- # IFS=': ' 00:04:03.248 21:26:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.248 21:26:23 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.248 21:26:23 -- setup/common.sh@32 -- # continue 00:04:03.248 21:26:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.248 21:26:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.248 21:26:23 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.248 21:26:23 -- setup/common.sh@32 -- # continue 00:04:03.248 21:26:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.248 21:26:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.248 21:26:23 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.248 21:26:23 -- setup/common.sh@32 -- # continue 00:04:03.248 21:26:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.248 21:26:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.248 21:26:23 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.248 21:26:23 -- setup/common.sh@32 -- # continue 00:04:03.248 21:26:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.248 21:26:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.248 21:26:23 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.248 21:26:23 -- setup/common.sh@32 -- # continue 00:04:03.248 21:26:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.248 21:26:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.248 21:26:23 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.248 21:26:23 -- setup/common.sh@32 -- # continue 00:04:03.248 21:26:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.248 21:26:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.248 21:26:23 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.248 21:26:23 -- setup/common.sh@32 -- # continue 00:04:03.248 21:26:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.248 21:26:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.248 21:26:23 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.248 21:26:23 -- setup/common.sh@32 -- # continue 00:04:03.248 21:26:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.249 21:26:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.249 21:26:23 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.249 21:26:23 -- setup/common.sh@32 -- # continue 00:04:03.249 21:26:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.249 21:26:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.249 21:26:23 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.249 21:26:23 -- setup/common.sh@32 -- # continue 00:04:03.249 21:26:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.249 21:26:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.249 21:26:23 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.249 21:26:23 -- setup/common.sh@32 -- # continue 00:04:03.249 21:26:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.249 21:26:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.249 21:26:23 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.249 21:26:23 -- setup/common.sh@32 -- # continue 00:04:03.249 21:26:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.249 21:26:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.249 21:26:23 -- setup/common.sh@32 -- # [[ AnonPages == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.249 21:26:23 -- setup/common.sh@32 -- # continue 00:04:03.249 21:26:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.249 21:26:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.249 21:26:23 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.249 21:26:23 -- setup/common.sh@32 -- # continue 00:04:03.249 21:26:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.249 21:26:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.249 21:26:23 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.249 21:26:23 -- setup/common.sh@32 -- # continue 00:04:03.249 21:26:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.249 21:26:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.249 21:26:23 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.249 21:26:23 -- setup/common.sh@32 -- # continue 00:04:03.249 21:26:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.249 21:26:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.249 21:26:23 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.249 21:26:23 -- setup/common.sh@32 -- # continue 00:04:03.249 21:26:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.249 21:26:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.249 21:26:23 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.249 21:26:23 -- setup/common.sh@32 -- # continue 00:04:03.249 21:26:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.249 21:26:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.249 21:26:23 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.249 21:26:23 -- setup/common.sh@32 -- # continue 00:04:03.249 21:26:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.249 21:26:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.249 21:26:23 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.249 21:26:23 -- setup/common.sh@32 -- # continue 00:04:03.249 21:26:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.249 21:26:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.249 21:26:23 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.249 21:26:23 -- setup/common.sh@32 -- # continue 00:04:03.249 21:26:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.249 21:26:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.249 21:26:23 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.249 21:26:23 -- setup/common.sh@32 -- # continue 00:04:03.249 21:26:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.249 21:26:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.249 21:26:23 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.249 21:26:23 -- setup/common.sh@32 -- # continue 00:04:03.249 21:26:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.249 21:26:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.249 21:26:23 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.249 21:26:23 -- setup/common.sh@32 -- # continue 00:04:03.249 21:26:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.249 21:26:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.249 21:26:23 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.249 21:26:23 -- setup/common.sh@32 -- # continue 00:04:03.249 21:26:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.249 
21:26:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.249 21:26:23 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.249 21:26:23 -- setup/common.sh@32 -- # continue 00:04:03.249 21:26:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.249 21:26:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.249 21:26:23 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.249 21:26:23 -- setup/common.sh@32 -- # continue 00:04:03.249 21:26:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.249 21:26:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.249 21:26:23 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.249 21:26:23 -- setup/common.sh@32 -- # continue 00:04:03.249 21:26:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.249 21:26:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.249 21:26:23 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.249 21:26:23 -- setup/common.sh@32 -- # continue 00:04:03.249 21:26:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.249 21:26:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.249 21:26:23 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.249 21:26:23 -- setup/common.sh@32 -- # continue 00:04:03.249 21:26:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.249 21:26:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.249 21:26:23 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.249 21:26:23 -- setup/common.sh@32 -- # continue 00:04:03.249 21:26:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.249 21:26:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.249 21:26:23 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.249 21:26:23 -- setup/common.sh@32 -- # continue 00:04:03.249 21:26:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.249 21:26:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.249 21:26:23 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.249 21:26:23 -- setup/common.sh@32 -- # continue 00:04:03.249 21:26:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.249 21:26:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.249 21:26:23 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.249 21:26:23 -- setup/common.sh@32 -- # continue 00:04:03.249 21:26:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.249 21:26:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.249 21:26:23 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.249 21:26:23 -- setup/common.sh@32 -- # continue 00:04:03.249 21:26:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.249 21:26:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.249 21:26:23 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.249 21:26:23 -- setup/common.sh@32 -- # continue 00:04:03.249 21:26:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.249 21:26:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.249 21:26:23 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.249 21:26:23 -- setup/common.sh@32 -- # continue 00:04:03.249 21:26:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.249 21:26:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.249 21:26:23 -- setup/common.sh@32 -- # [[ Unaccepted == 
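Reduced to plain arithmetic, the pool check that follows in the trace is trivial here: 512 pages were requested, and the reserved and surplus counts just read back are both 0. A sketch with the values from the trace:

    nr_hugepages=512 resv=0 surp=0
    (( 512 == nr_hugepages + surp + resv ))   # true: 512 == 512 + 0 + 0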
00:04:03.250 21:26:23 -- setup/hugepages.sh@110 -- # (( 512 == nr_hugepages + surp + resv ))
00:04:03.250 21:26:23 -- setup/hugepages.sh@112 -- # get_nodes
00:04:03.250 21:26:23 -- setup/hugepages.sh@27 -- # local node
00:04:03.250 21:26:23 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:03.250 21:26:23 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:04:03.250 21:26:23 -- setup/hugepages.sh@32 -- # no_nodes=1
00:04:03.250 21:26:23 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:04:03.250 21:26:23 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:04:03.250 21:26:23 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:04:03.250 21:26:23 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:04:03.250 21:26:23 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:03.250 21:26:23 -- setup/common.sh@18 -- # local node=0
00:04:03.250 21:26:23 -- setup/common.sh@19 -- # local var val
00:04:03.250 21:26:23 -- setup/common.sh@20 -- # local mem_f mem
00:04:03.250 21:26:23 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:03.250 21:26:23 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:04:03.250 21:26:23 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:04:03.250 21:26:23 -- setup/common.sh@28 -- # mapfile -t mem
00:04:03.250 21:26:23 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:03.250 21:26:23 -- setup/common.sh@31 -- # IFS=': '
00:04:03.250 21:26:23 -- setup/common.sh@31 -- # read -r var val _
00:04:03.250 21:26:23 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12246324 kB' 'MemFree: 6064036 kB' 'MemUsed: 6182288 kB' 'SwapCached: 0 kB' 'Active: 412076 kB' 'Inactive: 4235612 kB' 'Active(anon): 125132 kB' 'Inactive(anon): 0 kB' 'Active(file): 286944 kB' 'Inactive(file): 4235612 kB' 'Unevictable: 28816 kB' 'Mlocked: 27280 kB' 'Dirty: 220 kB' 'Writeback: 0 kB' 'FilePages: 4533896 kB' 'Mapped: 58296 kB' 'AnonPages: 142700 kB' 'Shmem: 2592 kB' 'KernelStack: 4960 kB' 'PageTables: 4164 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 180764 kB' 'Slab: 261024 kB' 'SReclaimable: 180764 kB' 'SUnreclaim: 80260 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
[... repetitive xtrace elided: node0's meminfo is scanned field by field against HugePages_Surp ...]
00:04:03.250 21:26:23 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:03.251 21:26:23 -- setup/common.sh@33 -- # echo 0
00:04:03.251 21:26:23 -- setup/common.sh@33 -- # return 0
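The call above read node-local counters rather than the global ones: per-node meminfo lives under /sys/devices/system/node/, and each line there carries a "Node <n>" prefix that setup/common.sh@29 strips with the extglob pattern visible in the trace. A hedged sketch of that per-node variant (get_node_meminfo is an illustrative name, not the script's):

    shopt -s extglob
    # Like get_meminfo, but for one NUMA node's meminfo in sysfs.
    get_node_meminfo() {
        local get=$1 node=$2 var val _ mem
        mapfile -t mem < "/sys/devices/system/node/node$node/meminfo"
        mem=("${mem[@]#Node +([0-9]) }")   # "Node 0 MemFree: ..." -> "MemFree: ..."
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] && { echo "$val"; return 0; }
        done < <(printf '%s\n' "${mem[@]}")
        return 1
    }

    get_node_meminfo HugePages_Surp 0   # -> 0, as traced above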
00:04:03.251 21:26:23 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:04:03.251 21:26:23 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:04:03.251 21:26:23 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:04:03.251 21:26:23 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:04:03.251 node0=512 expecting 512
00:04:03.251 21:26:23 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512'
00:04:03.251 21:26:23 -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]]
00:04:03.251 real 0m0.685s
00:04:03.251 user 0m0.263s
00:04:03.251 sys 0m0.465s
00:04:03.251 21:26:23 -- common/autotest_common.sh@1115 -- # xtrace_disable
00:04:03.251 21:26:23 -- common/autotest_common.sh@10 -- # set +x
00:04:03.251 ************************************
00:04:03.251 END TEST per_node_1G_alloc
00:04:03.251 ************************************
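The per_node_1G_alloc pass that just ended boils down to one per-node assertion. In isolation (names reuse the get_node_meminfo sketch above; the real bookkeeping spans setup/hugepages.sh@115-130):

    node=0 expected=512
    actual=$(get_node_meminfo HugePages_Total "$node")
    echo "node$node=$actual expecting $expected"
    [[ $actual == "$expected" ]]   # a mismatch here fails the test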
00:04:03.251 21:26:23 -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc
00:04:03.251 21:26:23 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:04:03.251 21:26:23 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:04:03.251 21:26:23 -- common/autotest_common.sh@10 -- # set +x
00:04:03.251 ************************************
00:04:03.251 START TEST even_2G_alloc
00:04:03.251 ************************************
00:04:03.251 21:26:23 -- common/autotest_common.sh@1114 -- # even_2G_alloc
00:04:03.251 21:26:23 -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152
00:04:03.251 21:26:23 -- setup/hugepages.sh@49 -- # local size=2097152
00:04:03.251 21:26:23 -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:04:03.251 21:26:23 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:04:03.251 21:26:23 -- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:04:03.251 21:26:23 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:04:03.251 21:26:23 -- setup/hugepages.sh@62 -- # user_nodes=()
00:04:03.251 21:26:23 -- setup/hugepages.sh@62 -- # local user_nodes
00:04:03.251 21:26:23 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:04:03.251 21:26:23 -- setup/hugepages.sh@65 -- # local _no_nodes=1
00:04:03.251 21:26:23 -- setup/hugepages.sh@67 -- # nodes_test=()
00:04:03.251 21:26:23 -- setup/hugepages.sh@67 -- # local -g nodes_test
00:04:03.251 21:26:23 -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:04:03.251 21:26:23 -- setup/hugepages.sh@74 -- # (( 0 > 0 ))
00:04:03.251 21:26:23 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:04:03.251 21:26:23 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=1024
00:04:03.251 21:26:23 -- setup/hugepages.sh@83 -- # : 0
00:04:03.251 21:26:23 -- setup/hugepages.sh@84 -- # : 0
00:04:03.251 21:26:23 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:04:03.251 21:26:23 -- setup/hugepages.sh@153 -- # NRHUGE=1024
00:04:03.251 21:26:23 -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes
00:04:03.251 21:26:23 -- setup/hugepages.sh@153 -- # setup output
00:04:03.251 21:26:23 -- setup/common.sh@9 -- # [[ output == output ]]
00:04:03.251 21:26:23 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:04:03.510 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15,mount@vda:vda16, so not binding PCI dev
00:04:03.770 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver
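The sizing math even_2G_alloc performed above is a single division: the 2 GiB request, expressed in kB, over the default hugepage size reported by /proc/meminfo. A sketch with this runner's numbers (get_meminfo as sketched earlier; the real computation is in get_test_nr_hugepages):

    size=2097152                                   # requested pool in kB (2 GiB)
    default_hugepages=$(get_meminfo Hugepagesize)  # 2048 (kB)
    nr_hugepages=$(( size / default_hugepages ))   # 2097152 / 2048 = 1024
    echo "NRHUGE=$nr_hugepages"                    # matches NRHUGE=1024 above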
00:04:04.033 21:26:24 -- setup/hugepages.sh@154 -- # verify_nr_hugepages
00:04:04.033 21:26:24 -- setup/hugepages.sh@89 -- # local node
00:04:04.033 21:26:24 -- setup/hugepages.sh@90 -- # local sorted_t
00:04:04.033 21:26:24 -- setup/hugepages.sh@91 -- # local sorted_s
00:04:04.033 21:26:24 -- setup/hugepages.sh@92 -- # local surp
00:04:04.033 21:26:24 -- setup/hugepages.sh@93 -- # local resv
00:04:04.033 21:26:24 -- setup/hugepages.sh@94 -- # local anon
00:04:04.033 21:26:24 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:04:04.033 21:26:24 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:04:04.033 21:26:24 -- setup/common.sh@17 -- # local get=AnonHugePages
00:04:04.033 21:26:24 -- setup/common.sh@18 -- # local node=
00:04:04.033 21:26:24 -- setup/common.sh@19 -- # local var val
00:04:04.033 21:26:24 -- setup/common.sh@20 -- # local mem_f mem
00:04:04.033 21:26:24 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:04.033 21:26:24 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:04.033 21:26:24 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:04.033 21:26:24 -- setup/common.sh@28 -- # mapfile -t mem
00:04:04.033 21:26:24 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:04.033 21:26:24 -- setup/common.sh@31 -- # IFS=': '
00:04:04.033 21:26:24 -- setup/common.sh@31 -- # read -r var val _
00:04:04.033 21:26:24 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12246324 kB' 'MemFree: 5024024 kB' 'MemAvailable: 9405792 kB' 'Buffers: 35104 kB' 'Cached: 4498796 kB' 'SwapCached: 0 kB' 'Active: 412296 kB' 'Inactive: 4235612 kB' 'Active(anon): 125352 kB' 'Inactive(anon): 0 kB' 'Active(file): 286944 kB' 'Inactive(file): 4235612 kB' 'Unevictable: 28816 kB' 'Mlocked: 27280 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 220 kB' 'Writeback: 0 kB' 'AnonPages: 143156 kB' 'Mapped: 58320 kB' 'Shmem: 2592 kB' 'KReclaimable: 180764 kB' 'Slab: 261032 kB' 'SReclaimable: 180764 kB' 'SUnreclaim: 80268 kB' 'KernelStack: 5040 kB' 'PageTables: 4436 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5074584 kB' 'Committed_AS: 375544 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 20072 kB' 'VmallocChunk: 0 kB' 'Percpu: 6720 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 149356 kB' 'DirectMap2M: 4044800 kB' 'DirectMap1G: 10485760 kB'
[... repetitive xtrace elided: every field is compared against AnonHugePages and skipped until the key matches ...]
00:04:04.034 21:26:24 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:04.034 21:26:24 -- setup/common.sh@33 -- # echo 0
00:04:04.034 21:26:24 -- setup/common.sh@33 -- # return 0
00:04:04.034 21:26:24 -- setup/hugepages.sh@97 -- # anon=0
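The gate at setup/hugepages.sh@96 decides whether anonymous hugepages should be counted at all: it only reads AnonHugePages when transparent hugepages are not set to "never". A hedged sketch of that check (the sysfs path is the standard THP switch; variable names are illustrative):

    thp=$(< /sys/kernel/mm/transparent_hugepage/enabled)  # e.g. "always [madvise] never"
    if [[ $thp != *"[never]"* ]]; then
        anon=$(get_meminfo AnonHugePages)   # 0 kB in the trace above
    else
        anon=0
    fi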
00:04:04.034 21:26:24 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:04:04.034 21:26:24 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:04.034 21:26:24 -- setup/common.sh@18 -- # local node=
00:04:04.034 21:26:24 -- setup/common.sh@19 -- # local var val
00:04:04.034 21:26:24 -- setup/common.sh@20 -- # local mem_f mem
00:04:04.034 21:26:24 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:04.034 21:26:24 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:04.034 21:26:24 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:04.034 21:26:24 -- setup/common.sh@28 -- # mapfile -t mem
00:04:04.034 21:26:24 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:04.034 21:26:24 -- setup/common.sh@31 -- # IFS=': '
00:04:04.034 21:26:24 -- setup/common.sh@31 -- # read -r var val _
00:04:04.034 21:26:24 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12246324 kB' 'MemFree: 5024024 kB' 'MemAvailable: 9405792 kB' 'Buffers: 35104 kB' 'Cached: 4498796 kB' 'SwapCached: 0 kB' 'Active: 412340 kB' 'Inactive: 4235612 kB' 'Active(anon): 125396 kB' 'Inactive(anon): 0 kB' 'Active(file): 286944 kB' 'Inactive(file): 4235612 kB' 'Unevictable: 28816 kB' 'Mlocked: 27280 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 220 kB' 'Writeback: 0 kB' 'AnonPages: 142964 kB' 'Mapped: 58320 kB' 'Shmem: 2592 kB' 'KReclaimable: 180764 kB' 'Slab: 261032 kB' 'SReclaimable: 180764 kB' 'SUnreclaim: 80268 kB' 'KernelStack: 5056 kB' 'PageTables: 4496 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5074584 kB' 'Committed_AS: 375544 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 20056 kB' 'VmallocChunk: 0 kB' 'Percpu: 6720 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 149356 kB' 'DirectMap2M: 4044800 kB' 'DirectMap1G: 10485760 kB'
[... repetitive xtrace elided: field-by-field scan against HugePages_Surp ...]
00:04:04.036 21:26:24 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:04.036 21:26:24 -- setup/common.sh@33 -- # echo 0
00:04:04.036 21:26:24 -- setup/common.sh@33 -- # return 0
00:04:04.036 21:26:24 -- setup/hugepages.sh@99 -- # surp=0
00:04:04.036 21:26:24 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:04:04.036 21:26:24 -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:04:04.036 21:26:24 -- setup/common.sh@18 -- # local node=
00:04:04.036 21:26:24 -- setup/common.sh@19 -- # local var val
00:04:04.036 21:26:24 -- setup/common.sh@20 -- # local mem_f mem
00:04:04.036 21:26:24 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:04.036 21:26:24 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:04.036 21:26:24 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:04.036 21:26:24 -- setup/common.sh@28 -- # mapfile -t mem
00:04:04.036 21:26:24 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:04.036 21:26:24 -- setup/common.sh@31 -- # IFS=': '
00:04:04.036 21:26:24 -- setup/common.sh@31 -- # read -r var val _
00:04:04.036 21:26:24 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12246324 kB' 'MemFree: 5024284 kB' 'MemAvailable: 9406052 kB' 'Buffers: 35104 kB' 'Cached: 4498796 kB' 'SwapCached: 0 kB' 'Active: 412316 kB' 'Inactive: 4235612 kB' 'Active(anon): 125372 kB' 'Inactive(anon): 0 kB' 'Active(file): 286944 kB' 'Inactive(file): 4235612 kB' 'Unevictable: 28816 kB' 'Mlocked: 27280 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 220 kB' 'Writeback: 0 kB' 'AnonPages: 142932 kB' 'Mapped: 58320 kB' 'Shmem: 2592 kB' 'KReclaimable: 180764 kB' 'Slab: 261024 kB' 'SReclaimable: 180764 kB' 'SUnreclaim: 80260 kB' 'KernelStack: 5056 kB' 'PageTables: 4492 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5074584 kB' 'Committed_AS: 375544 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 20056 kB' 'VmallocChunk: 0 kB' 'Percpu: 6720 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 149356 kB' 'DirectMap2M: 4044800 kB' 'DirectMap1G: 10485760 kB'
[xtrace elided: the key scan "continue"s past every /proc/meminfo field until HugePages_Rsvd matches]
00:04:04.037 21:26:24 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:04.037 21:26:24 -- setup/common.sh@33 -- # echo 0
00:04:04.037 21:26:24 -- setup/common.sh@33 -- # return 0
00:04:04.037 21:26:24 -- setup/hugepages.sh@100 -- # resv=0
00:04:04.037 nr_hugepages=1024
00:04:04.037 21:26:24 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:04:04.037 resv_hugepages=0
00:04:04.037 21:26:24 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:04:04.037 surplus_hugepages=0
00:04:04.037 21:26:24 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:04:04.037 anon_hugepages=0
00:04:04.037 21:26:24 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:04:04.037 21:26:24 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:04:04.037 21:26:24 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
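The records above are one full pass of common.sh's get_meminfo: snapshot the meminfo file, then scan it key by key until the requested field matches. A condensed, runnable sketch of that pattern follows; it is a simplification, not the helper itself (the traced helper snapshots with mapfile and walks the array, which is why xtrace prints one comparison per key, while this version reads the stream directly; only get, var, val, and mem_f are names taken from the trace).

```bash
#!/usr/bin/env bash
# Simplified stand-in for the get_meminfo helper traced above.
get_meminfo() {
    local get=$1 mem_f=/proc/meminfo var val _
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue  # every other key is skipped
        echo "$val"                       # the "kB" unit lands in $_
        return 0
    done < "$mem_f"
    return 1
}

surp=$(get_meminfo HugePages_Surp)  # 0 in the run above
resv=$(get_meminfo HugePages_Rsvd)  # 0 in the run above
echo "surp=$surp resv=$resv"
```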
00:04:04.037 21:26:24 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:04:04.038 21:26:24 -- setup/common.sh@17 -- # local get=HugePages_Total
00:04:04.038 21:26:24 -- setup/common.sh@18 -- # local node=
00:04:04.038 21:26:24 -- setup/common.sh@19 -- # local var val
00:04:04.038 21:26:24 -- setup/common.sh@20 -- # local mem_f mem
00:04:04.038 21:26:24 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:04.038 21:26:24 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:04.038 21:26:24 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:04.038 21:26:24 -- setup/common.sh@28 -- # mapfile -t mem
00:04:04.038 21:26:24 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:04.038 21:26:24 -- setup/common.sh@31 -- # IFS=': '
00:04:04.038 21:26:24 -- setup/common.sh@31 -- # read -r var val _
00:04:04.038 21:26:24 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12246324 kB' 'MemFree: 5024284 kB' 'MemAvailable: 9406052 kB' 'Buffers: 35104 kB' 'Cached: 4498796 kB' 'SwapCached: 0 kB' 'Active: 411900 kB' 'Inactive: 4235612 kB' 'Active(anon): 124956 kB' 'Inactive(anon): 0 kB' 'Active(file): 286944 kB' 'Inactive(file): 4235612 kB' 'Unevictable: 28816 kB' 'Mlocked: 27280 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 220 kB' 'Writeback: 0 kB' 'AnonPages: 142744 kB' 'Mapped: 58296 kB' 'Shmem: 2592 kB' 'KReclaimable: 180764 kB' 'Slab: 261020 kB' 'SReclaimable: 180764 kB' 'SUnreclaim: 80256 kB' 'KernelStack: 5056 kB' 'PageTables: 4492 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5074584 kB' 'Committed_AS: 375544 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 20056 kB' 'VmallocChunk: 0 kB' 'Percpu: 6720 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 149356 kB' 'DirectMap2M: 4044800 kB' 'DirectMap1G: 10485760 kB'
[xtrace elided: the key scan "continue"s past every /proc/meminfo field until HugePages_Total matches]
00:04:04.039 21:26:24 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:04.039 21:26:24 -- setup/common.sh@33 -- # echo 1024
00:04:04.039 21:26:24 -- setup/common.sh@33 -- # return 0
00:04:04.039 21:26:24 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:04:04.039 21:26:24 -- setup/hugepages.sh@112 -- # get_nodes
00:04:04.039 21:26:24 -- setup/hugepages.sh@27 -- # local node
00:04:04.039 21:26:24 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:04.039 21:26:24 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:04:04.039 21:26:24 -- setup/hugepages.sh@32 -- # no_nodes=1
00:04:04.039 21:26:24 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:04:04.039 21:26:24 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:04:04.039 21:26:24 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
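get_nodes has just recorded one expected hugepage count per NUMA node, and the loop that is about to run adds reserved pages back and checks each node's own meminfo. A sketch of that bookkeeping, assuming a single node with 1024 expected pages; nodes_sys, nodes_test, and no_nodes are the trace's names, while the awk comparison at the end is an illustrative condensation of hugepages.sh@116-128, not the script's exact code.

```bash
#!/usr/bin/env bash
# Per-node hugepage bookkeeping, modeled on the traced hugepages.sh lines.
shopt -s extglob
nodes_sys=() nodes_test=()
nr_hugepages=1024 resv=0

for node in /sys/devices/system/node/node+([0-9]); do
    nodes_sys[${node##*node}]=$nr_hugepages   # hugepages.sh@30
    nodes_test[${node##*node}]=$nr_hugepages  # expected count per node
done
no_nodes=${#nodes_sys[@]}
(( no_nodes > 0 )) || exit 1                  # hugepages.sh@33

for node in "${!nodes_test[@]}"; do
    (( nodes_test[node] += resv ))            # hugepages.sh@116
    actual=$(awk '/HugePages_Total/ {print $NF}' \
        "/sys/devices/system/node/node${node}/meminfo")
    echo "node${node}=${actual} expecting ${nodes_test[node]}"
done
```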
00:04:04.039 21:26:24 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:04:04.039 21:26:24 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:04.039 21:26:24 -- setup/common.sh@18 -- # local node=0
00:04:04.039 21:26:24 -- setup/common.sh@19 -- # local var val
00:04:04.039 21:26:24 -- setup/common.sh@20 -- # local mem_f mem
00:04:04.039 21:26:24 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:04.039 21:26:24 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:04:04.039 21:26:24 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:04:04.039 21:26:24 -- setup/common.sh@28 -- # mapfile -t mem
00:04:04.039 21:26:24 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:04.039 21:26:24 -- setup/common.sh@31 -- # IFS=': '
00:04:04.039 21:26:24 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12246324 kB' 'MemFree: 5024284 kB' 'MemUsed: 7222040 kB' 'SwapCached: 0 kB' 'Active: 411900 kB' 'Inactive: 4235612 kB' 'Active(anon): 124956 kB' 'Inactive(anon): 0 kB' 'Active(file): 286944 kB' 'Inactive(file): 4235612 kB' 'Unevictable: 28816 kB' 'Mlocked: 27280 kB' 'Dirty: 220 kB' 'Writeback: 0 kB' 'FilePages: 4533900 kB' 'Mapped: 58296 kB' 'AnonPages: 142744 kB' 'Shmem: 2592 kB' 'KernelStack: 5056 kB' 'PageTables: 4492 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 180764 kB' 'Slab: 261020 kB' 'SReclaimable: 180764 kB' 'SUnreclaim: 80256 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
00:04:04.039 21:26:24 -- setup/common.sh@31 -- # read -r var val _
[xtrace elided: the key scan "continue"s past every node0 meminfo field until HugePages_Surp matches]
00:04:04.040 21:26:24 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:04.040 21:26:24 -- setup/common.sh@33 -- # echo 0
00:04:04.040 21:26:24 -- setup/common.sh@33 -- # return 0
00:04:04.040 21:26:24 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:04:04.040 21:26:24 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:04:04.040 21:26:24 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:04:04.040 21:26:24 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:04:04.040 node0=1024 expecting 1024
00:04:04.040 21:26:24 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
00:04:04.040 21:26:24 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
00:04:04.040 real	0m0.795s
00:04:04.040 user	0m0.252s
00:04:04.040 sys	0m0.582s
00:04:04.040 21:26:24 -- common/autotest_common.sh@1115 -- # xtrace_disable
00:04:04.040 21:26:24 -- common/autotest_common.sh@10 -- # set +x
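The node0 lookup just traced works because common.sh strips the "Node N " prefix that per-node meminfo lines carry, after which the same key scan serves both /proc/meminfo and the sysfs file. A minimal sketch of that strip; the mem=(...) expansion is the one visible at setup/common.sh@29 in the trace, while the surrounding loop is illustrative.

```bash
#!/usr/bin/env bash
# Per-node meminfo lines read "Node 0 HugePages_Total: 1024"; dropping
# the prefix lets one parser handle both meminfo flavors.
shopt -s extglob
mapfile -t mem < /sys/devices/system/node/node0/meminfo
mem=("${mem[@]#Node +([0-9]) }")   # strip the leading "Node 0 "
for line in "${mem[@]}"; do
    IFS=': ' read -r var val _ <<< "$line"
    [[ $var == HugePages_Surp ]] && echo "node0 HugePages_Surp=$val"
done
```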
00:04:04.040 ************************************
00:04:04.040 END TEST even_2G_alloc
00:04:04.040 ************************************
00:04:04.040 21:26:24 -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc
00:04:04.040 21:26:24 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:04:04.040 21:26:24 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:04:04.040 21:26:24 -- common/autotest_common.sh@10 -- # set +x
00:04:04.300 ************************************
00:04:04.300 START TEST odd_alloc
00:04:04.300 ************************************
00:04:04.300 21:26:24 -- common/autotest_common.sh@1114 -- # odd_alloc
00:04:04.300 21:26:24 -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176
00:04:04.300 21:26:24 -- setup/hugepages.sh@49 -- # local size=2098176
00:04:04.300 21:26:24 -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:04:04.300 21:26:24 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:04:04.300 21:26:24 -- setup/hugepages.sh@57 -- # nr_hugepages=1025
00:04:04.300 21:26:24 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:04:04.300 21:26:24 -- setup/hugepages.sh@62 -- # user_nodes=()
00:04:04.300 21:26:24 -- setup/hugepages.sh@62 -- # local user_nodes
00:04:04.300 21:26:24 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025
00:04:04.300 21:26:24 -- setup/hugepages.sh@65 -- # local _no_nodes=1
00:04:04.300 21:26:24 -- setup/hugepages.sh@67 -- # nodes_test=()
00:04:04.300 21:26:24 -- setup/hugepages.sh@67 -- # local -g nodes_test
00:04:04.300 21:26:24 -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:04:04.300 21:26:24 -- setup/hugepages.sh@74 -- # (( 0 > 0 ))
00:04:04.300 21:26:24 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:04:04.300 21:26:24 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=1025
00:04:04.300 21:26:24 -- setup/hugepages.sh@83 -- # : 0
00:04:04.300 21:26:24 -- setup/hugepages.sh@84 -- # : 0
00:04:04.300 21:26:24 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:04:04.300 21:26:24 -- setup/hugepages.sh@160 -- # HUGEMEM=2049
00:04:04.300 21:26:24 -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes
00:04:04.300 21:26:24 -- setup/hugepages.sh@160 -- # setup output
00:04:04.300 21:26:24 -- setup/common.sh@9 -- # [[ output == output ]]
00:04:04.300 21:26:24 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:04:04.559 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15,mount@vda:vda16, so not binding PCI dev
00:04:04.559 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver
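odd_alloc deliberately requests a page count that cannot split evenly: 2098176 kB is 2 GiB plus one extra 2048 kB page, i.e. HUGEMEM=2049 MiB. A sketch of that sizing arithmetic follows; the ceiling division is an assumption inferred from the traced results (nr_hugepages=1025, Hugetlb: 2099200 kB) rather than copied from hugepages.sh, which may compute it differently.

```bash
#!/usr/bin/env bash
# Sizing arithmetic behind "get_test_nr_hugepages 2098176" (inferred).
default_hugepages=2048   # kB, the traced Hugepagesize
size=2098176             # kB, i.e. HUGEMEM=2049 MiB
(( size >= default_hugepages )) || exit 1   # hugepages.sh@55
nr_hugepages=$(( (size + default_hugepages - 1) / default_hugepages ))
echo "nr_hugepages=$nr_hugepages"                           # 1025 (odd)
echo "Hugetlb: $(( nr_hugepages * default_hugepages )) kB"  # 2099200 kB
```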
00:04:04.827 21:26:25 -- setup/hugepages.sh@161 -- # verify_nr_hugepages
00:04:04.827 21:26:25 -- setup/hugepages.sh@89 -- # local node
00:04:04.827 21:26:25 -- setup/hugepages.sh@90 -- # local sorted_t
00:04:04.827 21:26:25 -- setup/hugepages.sh@91 -- # local sorted_s
00:04:04.827 21:26:25 -- setup/hugepages.sh@92 -- # local surp
00:04:04.827 21:26:25 -- setup/hugepages.sh@93 -- # local resv
00:04:04.827 21:26:25 -- setup/hugepages.sh@94 -- # local anon
00:04:04.827 21:26:25 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:04:04.827 21:26:25 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:04:04.827 21:26:25 -- setup/common.sh@17 -- # local get=AnonHugePages
00:04:04.827 21:26:25 -- setup/common.sh@18 -- # local node=
00:04:04.827 21:26:25 -- setup/common.sh@19 -- # local var val
00:04:04.827 21:26:25 -- setup/common.sh@20 -- # local mem_f mem
00:04:04.827 21:26:25 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:04.827 21:26:25 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:04.827 21:26:25 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:04.827 21:26:25 -- setup/common.sh@28 -- # mapfile -t mem
00:04:04.827 21:26:25 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:04.827 21:26:25 -- setup/common.sh@31 -- # IFS=': '
00:04:04.827 21:26:25 -- setup/common.sh@31 -- # read -r var val _
00:04:04.827 21:26:25 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12246324 kB' 'MemFree: 5023716 kB' 'MemAvailable: 9405492 kB' 'Buffers: 35104 kB' 'Cached: 4498800 kB' 'SwapCached: 0 kB' 'Active: 412324 kB' 'Inactive: 4235620 kB' 'Active(anon): 125380 kB' 'Inactive(anon): 0 kB' 'Active(file): 286944 kB' 'Inactive(file): 4235620 kB' 'Unevictable: 28816 kB' 'Mlocked: 27280 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 220 kB' 'Writeback: 0 kB' 'AnonPages: 142924 kB' 'Mapped: 58320 kB' 'Shmem: 2592 kB' 'KReclaimable: 180764 kB' 'Slab: 261056 kB' 'SReclaimable: 180764 kB' 'SUnreclaim: 80292 kB' 'KernelStack: 5072 kB' 'PageTables: 4568 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5073560 kB' 'Committed_AS: 375544 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 20120 kB' 'VmallocChunk: 0 kB' 'Percpu: 6720 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 149356 kB' 'DirectMap2M: 4044800 kB' 'DirectMap1G: 10485760 kB'
[xtrace elided: the key scan "continue"s past every /proc/meminfo field until AnonHugePages matches]
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.827 21:26:25 -- setup/common.sh@32 -- # continue 00:04:04.827 21:26:25 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.827 21:26:25 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.827 21:26:25 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.827 21:26:25 -- setup/common.sh@32 -- # continue 00:04:04.827 21:26:25 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.827 21:26:25 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.827 21:26:25 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.827 21:26:25 -- setup/common.sh@32 -- # continue 00:04:04.827 21:26:25 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.827 21:26:25 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.827 21:26:25 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.827 21:26:25 -- setup/common.sh@32 -- # continue 00:04:04.827 21:26:25 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.827 21:26:25 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.827 21:26:25 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.827 21:26:25 -- setup/common.sh@32 -- # continue 00:04:04.827 21:26:25 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.827 21:26:25 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.827 21:26:25 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.827 21:26:25 -- setup/common.sh@32 -- # continue 00:04:04.827 21:26:25 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.827 21:26:25 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.827 21:26:25 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.828 21:26:25 -- setup/common.sh@32 -- # continue 00:04:04.828 21:26:25 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.828 21:26:25 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.828 21:26:25 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.828 21:26:25 -- setup/common.sh@32 -- # continue 00:04:04.828 21:26:25 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.828 21:26:25 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.828 21:26:25 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.828 21:26:25 -- setup/common.sh@33 -- # echo 0 00:04:04.828 21:26:25 -- setup/common.sh@33 -- # return 0 00:04:04.828 21:26:25 -- setup/hugepages.sh@97 -- # anon=0 00:04:04.828 21:26:25 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:04.828 21:26:25 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:04.828 21:26:25 -- setup/common.sh@18 -- # local node= 00:04:04.828 21:26:25 -- setup/common.sh@19 -- # local var val 00:04:04.828 21:26:25 -- setup/common.sh@20 -- # local mem_f mem 00:04:04.828 21:26:25 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:04.828 21:26:25 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:04.828 21:26:25 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:04.828 21:26:25 -- setup/common.sh@28 -- # mapfile -t mem 00:04:04.828 21:26:25 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:04.828 21:26:25 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.828 21:26:25 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.828 21:26:25 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12246324 kB' 'MemFree: 5022964 kB' 'MemAvailable: 9404740 kB' 'Buffers: 35104 kB' 'Cached: 4498800 kB' 'SwapCached: 0 kB' 'Active: 412148 kB' 'Inactive: 4235620 kB' 'Active(anon): 125204 kB' 
'Inactive(anon): 0 kB' 'Active(file): 286944 kB' 'Inactive(file): 4235620 kB' 'Unevictable: 28816 kB' 'Mlocked: 27280 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 220 kB' 'Writeback: 0 kB' 'AnonPages: 142716 kB' 'Mapped: 58296 kB' 'Shmem: 2592 kB' 'KReclaimable: 180764 kB' 'Slab: 261056 kB' 'SReclaimable: 180764 kB' 'SUnreclaim: 80292 kB' 'KernelStack: 5056 kB' 'PageTables: 4508 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5073560 kB' 'Committed_AS: 375544 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 20104 kB' 'VmallocChunk: 0 kB' 'Percpu: 6720 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 149356 kB' 'DirectMap2M: 4044800 kB' 'DirectMap1G: 10485760 kB' 00:04:04.828 21:26:25 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.828 21:26:25 -- setup/common.sh@32 -- # continue 00:04:04.828 21:26:25 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.828 21:26:25 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.828 21:26:25 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.828 21:26:25 -- setup/common.sh@32 -- # continue 00:04:04.828 21:26:25 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.828 21:26:25 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.828 21:26:25 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.828 21:26:25 -- setup/common.sh@32 -- # continue 00:04:04.828 21:26:25 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.828 21:26:25 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.828 21:26:25 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.828 21:26:25 -- setup/common.sh@32 -- # continue 00:04:04.828 21:26:25 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.828 21:26:25 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.828 21:26:25 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.828 21:26:25 -- setup/common.sh@32 -- # continue 00:04:04.828 21:26:25 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.828 21:26:25 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.828 21:26:25 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.828 21:26:25 -- setup/common.sh@32 -- # continue 00:04:04.828 21:26:25 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.828 21:26:25 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.828 21:26:25 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.828 21:26:25 -- setup/common.sh@32 -- # continue 00:04:04.828 21:26:25 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.828 21:26:25 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.828 21:26:25 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.828 21:26:25 -- setup/common.sh@32 -- # continue 00:04:04.828 21:26:25 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.828 21:26:25 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.828 21:26:25 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.828 21:26:25 -- setup/common.sh@32 -- # continue 00:04:04.828 21:26:25 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.828 21:26:25 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.828 21:26:25 -- 
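The pattern the trace keeps repeating is setup/common.sh's get_meminfo helper: snapshot a meminfo-style file into an array with mapfile, then walk it with an IFS-split read until the requested key matches, printing the value and returning. A minimal self-contained sketch of that technique (illustrative only; get_meminfo_value and its signature are hypothetical names, not the literal SPDK source):

    #!/usr/bin/env bash
    # Sketch of the scan traced above: look up one key in a meminfo-style file.
    shopt -s extglob
    get_meminfo_value() {
        local get=$1 node=$2
        local mem_f=/proc/meminfo
        # With a node argument, prefer the per-node sysfs view when it exists.
        [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
            mem_f=/sys/devices/system/node/node$node/meminfo
        local -a mem
        mapfile -t mem < "$mem_f"
        # Per-node lines carry a "Node <N> " prefix; strip it (needs extglob).
        mem=("${mem[@]#Node +([0-9]) }")
        local line var val _
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$get" ]] && { echo "$val"; return 0; }
        done
        return 1
    }
    get_meminfo_value AnonHugePages      # prints 0 on this box, per the trace
    get_meminfo_value HugePages_Total 0  # prints 1025, read from node0's file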
00:04:04.828 21:26:25 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:04.828 21:26:25 -- setup/common.sh@33 -- # echo 0
00:04:04.828 21:26:25 -- setup/common.sh@33 -- # return 0
00:04:04.828 21:26:25 -- setup/hugepages.sh@99 -- # surp=0
00:04:04.828 21:26:25 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:04:04.828 21:26:25 -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:04:04.828 21:26:25 -- setup/common.sh@18 -- # local node=
00:04:04.828 21:26:25 -- setup/common.sh@19 -- # local var val
00:04:04.828 21:26:25 -- setup/common.sh@20 -- # local mem_f mem
00:04:04.828 21:26:25 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:04.828 21:26:25 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:04.828 21:26:25 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:04.828 21:26:25 -- setup/common.sh@28 -- # mapfile -t mem
00:04:04.828 21:26:25 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:04.828 21:26:25 -- setup/common.sh@31 -- # IFS=': '
00:04:04.828 21:26:25 -- setup/common.sh@31 -- # read -r var val _
00:04:04.828 21:26:25 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12246324 kB' 'MemFree: 5023172 kB' 'MemAvailable: 9404944 kB' 'Buffers: 35104 kB' 'Cached: 4498796 kB' 'SwapCached: 0 kB' 'Active: 411892 kB' 'Inactive: 4235616 kB' 'Active(anon): 124948 kB' 'Inactive(anon): 0 kB' 'Active(file): 286944 kB' 'Inactive(file): 4235616 kB' 'Unevictable: 28816 kB' 'Mlocked: 27280 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 220 kB' 'Writeback: 0 kB' 'AnonPages: 142780 kB' 'Mapped: 58296 kB' 'Shmem: 2592 kB' 'KReclaimable: 180764 kB' 'Slab: 261060 kB' 'SReclaimable: 180764 kB' 'SUnreclaim: 80296 kB' 'KernelStack: 4992 kB' 'PageTables: 4280 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5073560 kB' 'Committed_AS: 375544 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 20120 kB' 'VmallocChunk: 0 kB' 'Percpu: 6720 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 149356 kB' 'DirectMap2M: 4044800 kB' 'DirectMap1G: 10485760 kB'
[... xtrace condensed: per-key scan over this snapshot, every key except HugePages_Rsvd hitting continue ...]
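A reading note on the escaped tokens above: \H\u\g\e\P\a\g\e\s\_\S\u\r\p is not corruption. The script compares with [[ $var == "$get" ]], and when bash traces a quoted right-hand side of == it escapes every character to show the value is matched literally rather than as a glob pattern. A tiny reproduction (hypothetical snippet with the default PS4, not the file@line PS4 these scripts set):

    set -x
    get=HugePages_Surp var=HugePages_Surp
    # Quoted RHS forces a literal match; xtrace prints it character-escaped:
    [[ $var == "$get" ]] && echo match
    # expected trace:
    # + [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
    # + echo match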
00:04:04.829 21:26:25 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:04.829 21:26:25 -- setup/common.sh@33 -- # echo 0
00:04:04.829 21:26:25 -- setup/common.sh@33 -- # return 0
00:04:04.829 21:26:25 -- setup/hugepages.sh@100 -- # resv=0
00:04:04.829 nr_hugepages=1025
00:04:04.829 21:26:25 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025
00:04:04.829 resv_hugepages=0
00:04:04.829 21:26:25 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:04:04.829 surplus_hugepages=0
00:04:04.829 21:26:25 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:04:04.829 anon_hugepages=0
00:04:04.829 21:26:25 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:04:04.829 21:26:25 -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv ))
00:04:04.829 21:26:25 -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages ))
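With anon, surp, and resv in hand, the @107 check above is plain accounting: the kernel's HugePages_Total must equal the pages the test requested plus surplus plus reserved. Restated with this run's values (a worked restatement, not instrumentation from the script):

    nr_hugepages=1025   # requested by the odd_alloc test (an odd count, per the test name)
    surp=0              # HugePages_Surp read back above
    resv=0              # HugePages_Rsvd read back above
    total=1025          # HugePages_Total read back below
    (( total == nr_hugepages + surp + resv )) && echo "hugepage accounting consistent"
    # Cross-check against the snapshots: 1025 pages * 2048 kB/page = 2099200 kB,
    # exactly the 'Hugetlb: 2099200 kB' field they report.
    echo $(( total * 2048 ))   # 2099200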
00:04:04.829 21:26:25 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:04:04.829 21:26:25 -- setup/common.sh@17 -- # local get=HugePages_Total
00:04:04.829 21:26:25 -- setup/common.sh@18 -- # local node=
00:04:04.829 21:26:25 -- setup/common.sh@19 -- # local var val
00:04:04.829 21:26:25 -- setup/common.sh@20 -- # local mem_f mem
00:04:04.829 21:26:25 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:04.829 21:26:25 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:04.829 21:26:25 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:04.829 21:26:25 -- setup/common.sh@28 -- # mapfile -t mem
00:04:04.829 21:26:25 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:04.829 21:26:25 -- setup/common.sh@31 -- # IFS=': '
00:04:04.829 21:26:25 -- setup/common.sh@31 -- # read -r var val _
00:04:04.829 21:26:25 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12246324 kB' 'MemFree: 5023172 kB' 'MemAvailable: 9404944 kB' 'Buffers: 35104 kB' 'Cached: 4498796 kB' 'SwapCached: 0 kB' 'Active: 411960 kB' 'Inactive: 4235616 kB' 'Active(anon): 125016 kB' 'Inactive(anon): 0 kB' 'Active(file): 286944 kB' 'Inactive(file): 4235616 kB' 'Unevictable: 28816 kB' 'Mlocked: 27280 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 220 kB' 'Writeback: 0 kB' 'AnonPages: 142808 kB' 'Mapped: 58296 kB' 'Shmem: 2592 kB' 'KReclaimable: 180764 kB' 'Slab: 261048 kB' 'SReclaimable: 180764 kB' 'SUnreclaim: 80284 kB' 'KernelStack: 5008 kB' 'PageTables: 4336 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5073560 kB' 'Committed_AS: 375544 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 20120 kB' 'VmallocChunk: 0 kB' 'Percpu: 6720 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 149356 kB' 'DirectMap2M: 4044800 kB' 'DirectMap1G: 10485760 kB'
[... xtrace condensed: per-key scan over this snapshot, every key except HugePages_Total hitting continue ...]
00:04:04.830 21:26:25 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:04.830 21:26:25 -- setup/common.sh@33 -- # echo 1025
00:04:04.830 21:26:25 -- setup/common.sh@33 -- # return 0
00:04:04.830 21:26:25 -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv ))
00:04:04.830 21:26:25 -- setup/hugepages.sh@112 -- # get_nodes
00:04:04.830 21:26:25 -- setup/hugepages.sh@27 -- # local node
00:04:04.830 21:26:25 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:04.830 21:26:25 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1025
00:04:04.830 21:26:25 -- setup/hugepages.sh@32 -- # no_nodes=1
00:04:04.830 21:26:25 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:04:04.830 21:26:25 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:04:04.830 21:26:25 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:04:04.830 21:26:25 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:04:04.830 21:26:25 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:04.830 21:26:25 -- setup/common.sh@18 -- # local node=0
00:04:04.830 21:26:25 -- setup/common.sh@19 -- # local var val
00:04:04.830 21:26:25 -- setup/common.sh@20 -- # local mem_f mem
00:04:04.830 21:26:25 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:04.830 21:26:25 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:04:04.830 21:26:25 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:04:04.830 21:26:25 -- setup/common.sh@28 -- # mapfile -t mem
00:04:04.830 21:26:25 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:04.830 21:26:25 -- setup/common.sh@31 -- # IFS=': '
00:04:04.830 21:26:25 -- setup/common.sh@31 -- # read -r var val _
00:04:04.830 21:26:25 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12246324 kB' 'MemFree: 5023864 kB' 'MemUsed: 7222460 kB' 'SwapCached: 0 kB' 'Active: 412224 kB' 'Inactive: 4235616 kB' 'Active(anon): 125280 kB' 'Inactive(anon): 0 kB' 'Active(file): 286944 kB' 'Inactive(file): 4235616 kB' 'Unevictable: 28816 kB' 'Mlocked: 27280 kB' 'Dirty: 220 kB' 'Writeback: 0 kB' 'FilePages: 4533900 kB' 'Mapped: 58296 kB' 'AnonPages: 142812 kB' 'Shmem: 2592 kB' 'KernelStack: 5008 kB' 'PageTables: 4336 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 180764 kB' 'Slab: 261048 kB' 'SReclaimable: 180764 kB' 'SUnreclaim: 80284 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Surp: 0'
[... xtrace condensed: per-key scan over the node0 snapshot, every key except HugePages_Surp hitting continue ...]
setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:05.088 21:26:25 -- setup/common.sh@32 -- # continue
... (the IFS/read/compare/continue xtrace cycle repeats for every remaining per-node meminfo key until HugePages_Surp matches) ...
00:04:05.089 21:26:25 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:05.089 21:26:25 -- setup/common.sh@33 -- # echo 0
00:04:05.089 21:26:25 -- setup/common.sh@33 -- # return 0
00:04:05.089 21:26:25 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:04:05.089 21:26:25 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:04:05.089 21:26:25 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:04:05.089 21:26:25 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:04:05.089 21:26:25 -- setup/hugepages.sh@128 -- # echo 'node0=1025 expecting 1025'
node0=1025 expecting 1025
00:04:05.089 21:26:25 -- setup/hugepages.sh@130 -- # [[ 1025 == \1\0\2\5 ]]
00:04:05.089
00:04:05.089 real 0m0.807s
00:04:05.089 user 0m0.239s
00:04:05.089 sys 0m0.612s
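The odd_alloc result above is worth a gloss: verify_nr_hugepages compares what each NUMA node actually reports against what the test requested (an odd count, 1025, so the pages cannot split evenly across nodes). The sorted_t/sorted_s assignments key arrays by the observed counts, so duplicate values collapse and a single surviving key means every node agrees. A minimal sketch of that check, assuming the single node and the 1025 pages seen in this run (a reconstructed shape, not the verbatim SPDK helper):

#!/usr/bin/env bash
nodes_test=( [0]=1025 )   # pages observed per node, as parsed from meminfo
expected=1025             # what the test configured
declare -A sorted_t=()
for node in "${!nodes_test[@]}"; do
    echo "node${node}=${nodes_test[node]} expecting ${expected}"
    sorted_t[${nodes_test[node]}]=1   # a set keyed by the observed counts
done
# exactly one distinct count, and it is the expected one
(( ${#sorted_t[@]} == 1 )) && [[ -n ${sorted_t[$expected]:-} ]] && echo OK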
00:04:05.089 21:26:25 -- common/autotest_common.sh@1115 -- # xtrace_disable
00:04:05.089 21:26:25 -- common/autotest_common.sh@10 -- # set +x
00:04:05.089 ************************************
00:04:05.089 END TEST odd_alloc
00:04:05.089 ************************************
00:04:05.089 21:26:25 -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc
00:04:05.089 21:26:25 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:04:05.089 21:26:25 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:04:05.089 21:26:25 -- common/autotest_common.sh@10 -- # set +x
00:04:05.089 ************************************
00:04:05.089 START TEST custom_alloc
00:04:05.089 ************************************
00:04:05.089 21:26:25 -- common/autotest_common.sh@1114 -- # custom_alloc
00:04:05.089 21:26:25 -- setup/hugepages.sh@167 -- # local IFS=,
00:04:05.089 21:26:25 -- setup/hugepages.sh@169 -- # local node
00:04:05.089 21:26:25 -- setup/hugepages.sh@170 -- # nodes_hp=()
00:04:05.089 21:26:25 -- setup/hugepages.sh@170 -- # local nodes_hp
00:04:05.089 21:26:25 -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0
00:04:05.089 21:26:25 -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576
00:04:05.089 21:26:25 -- setup/hugepages.sh@49 -- # local size=1048576
00:04:05.089 21:26:25 -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:04:05.089 21:26:25 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:04:05.089 21:26:25 -- setup/hugepages.sh@57 -- # nr_hugepages=512
00:04:05.089 21:26:25 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:04:05.089 21:26:25 -- setup/hugepages.sh@62 -- # user_nodes=()
00:04:05.089 21:26:25 -- setup/hugepages.sh@62 -- # local user_nodes
00:04:05.089 21:26:25 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512
00:04:05.089 21:26:25 -- setup/hugepages.sh@65 -- # local _no_nodes=1
00:04:05.089 21:26:25 -- setup/hugepages.sh@67 -- # nodes_test=()
00:04:05.089 21:26:25 -- setup/hugepages.sh@67 -- # local -g nodes_test
00:04:05.089 21:26:25 -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:04:05.090 21:26:25 -- setup/hugepages.sh@74 -- # (( 0 > 0 ))
00:04:05.090 21:26:25 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:04:05.090 21:26:25 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512
00:04:05.090 21:26:25 -- setup/hugepages.sh@83 -- # : 0
00:04:05.090 21:26:25 -- setup/hugepages.sh@84 -- # : 0
00:04:05.090 21:26:25 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:04:05.090 21:26:25 -- setup/hugepages.sh@175 -- # nodes_hp[0]=512
00:04:05.090 21:26:25 -- setup/hugepages.sh@176 -- # (( 1 > 1 ))
00:04:05.090 21:26:25 -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}"
00:04:05.090 21:26:25 -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}")
00:04:05.090 21:26:25 -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] ))
00:04:05.090 21:26:25 -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node
00:04:05.090 21:26:25 -- setup/hugepages.sh@62 -- # user_nodes=()
00:04:05.090 21:26:25 -- setup/hugepages.sh@62 -- # local user_nodes
00:04:05.090 21:26:25 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512
00:04:05.090 21:26:25 -- setup/hugepages.sh@65 -- # local _no_nodes=1
00:04:05.090 21:26:25 -- setup/hugepages.sh@67 -- # nodes_test=()
00:04:05.090 21:26:25 -- setup/hugepages.sh@67 -- # local -g nodes_test
00:04:05.090 21:26:25 -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:04:05.090 21:26:25 -- setup/hugepages.sh@74 -- # (( 1 > 0 ))
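Before the trace continues into setup.sh, the pool sizing just performed deserves one line of arithmetic: get_test_nr_hugepages received 1048576 kB (1 GiB), the default hugepage is 2048 kB, so the pool is 1048576 / 2048 = 512 pages, all of which land on the only NUMA node of this VM. A hedged sketch of that conversion (assumed shape; the real logic is setup/hugepages.sh's get_test_nr_hugepages and get_test_nr_hugepages_per_node):

#!/usr/bin/env bash
size=1048576             # requested pool in kB (1 GiB)
default_hugepages=2048   # Hugepagesize from /proc/meminfo, in kB
(( size >= default_hugepages )) || exit 1      # the @55 guard in the trace
nr_hugepages=$(( size / default_hugepages ))   # 1048576 / 2048 = 512
_no_nodes=1              # this VM exposes a single NUMA node
nodes_hp=( [0]=$nr_hugepages )
HUGENODE="nodes_hp[0]=${nodes_hp[0]}"          # handed to setup.sh below
echo "$HUGENODE"                               # -> nodes_hp[0]=512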
00:04:05.090 21:26:25 -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}"
00:04:05.090 21:26:25 -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512
00:04:05.090 21:26:25 -- setup/hugepages.sh@78 -- # return 0
00:04:05.090 21:26:25 -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512'
00:04:05.090 21:26:25 -- setup/hugepages.sh@187 -- # setup output
00:04:05.090 21:26:25 -- setup/common.sh@9 -- # [[ output == output ]]
00:04:05.090 21:26:25 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:04:05.347 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15,mount@vda:vda16, so not binding PCI dev
00:04:05.347 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver
00:04:05.608 21:26:25 -- setup/hugepages.sh@188 -- # nr_hugepages=512
00:04:05.608 21:26:25 -- setup/hugepages.sh@188 -- # verify_nr_hugepages
00:04:05.608 21:26:25 -- setup/hugepages.sh@89 -- # local node
00:04:05.608 21:26:25 -- setup/hugepages.sh@90 -- # local sorted_t
00:04:05.608 21:26:25 -- setup/hugepages.sh@91 -- # local sorted_s
00:04:05.608 21:26:25 -- setup/hugepages.sh@92 -- # local surp
00:04:05.608 21:26:25 -- setup/hugepages.sh@93 -- # local resv
00:04:05.608 21:26:25 -- setup/hugepages.sh@94 -- # local anon
00:04:05.608 21:26:25 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:04:05.608 21:26:25 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:04:05.608 21:26:25 -- setup/common.sh@17 -- # local get=AnonHugePages
00:04:05.608 21:26:25 -- setup/common.sh@18 -- # local node=
00:04:05.608 21:26:25 -- setup/common.sh@19 -- # local var val
00:04:05.608 21:26:25 -- setup/common.sh@20 -- # local mem_f mem
00:04:05.608 21:26:25 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:05.608 21:26:25 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:05.608 21:26:25 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:05.608 21:26:25 -- setup/common.sh@28 -- # mapfile -t mem
00:04:05.608 21:26:25 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:05.608 21:26:25 -- setup/common.sh@31 -- # IFS=': '
00:04:05.608 21:26:25 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12246324 kB' 'MemFree: 6069596 kB' 'MemAvailable: 10451368 kB' 'Buffers: 35104 kB' 'Cached: 4498796 kB' 'SwapCached: 0 kB' 'Active: 412404 kB' 'Inactive: 4235616 kB' 'Active(anon): 125460 kB' 'Inactive(anon): 0 kB' 'Active(file): 286944 kB' 'Inactive(file): 4235616 kB' 'Unevictable: 28816 kB' 'Mlocked: 27280 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 220 kB' 'Writeback: 0 kB' 'AnonPages: 143236 kB' 'Mapped: 58320 kB' 'Shmem: 2592 kB' 'KReclaimable: 180764 kB' 'Slab: 260992 kB' 'SReclaimable: 180764 kB' 'SUnreclaim: 80228 kB' 'KernelStack: 5052 kB' 'PageTables: 4560 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5598872 kB' 'Committed_AS: 375544 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 20088 kB' 'VmallocChunk: 0 kB' 'Percpu: 6720 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 149356 kB' 'DirectMap2M: 4044800 kB' 'DirectMap1G: 10485760 kB'
00:04:05.608 21:26:25 -- setup/common.sh@31 -- # read -r var val _
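The long stretch of trace that follows is setup/common.sh's get_meminfo doing its scan: it snapshots meminfo into an array, strips any "Node N" prefix so per-node files parse the same way, then splits each entry on ': ' and echoes the value of the first key equal to the requested field. A standalone sketch reconstructed from the trace (an approximation, not the verbatim script):

#!/usr/bin/env bash
shopt -s extglob   # needed for the +([0-9]) pattern below
get_meminfo() {
    local get=$1 node=${2:-} var val _
    local mem_f=/proc/meminfo
    # a per-node query reads that node's own meminfo instead
    if [[ -e /sys/devices/system/node/node$node/meminfo ]] && [[ -n $node ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    local -a mem
    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")   # drop the "Node N " prefix
    local line
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"
        [[ $var == "$get" ]] || continue   # the compare-and-continue seen below
        echo "$val"
        return 0
    done
    return 1
}
get_meminfo AnonHugePages   # -> 0 against the snapshot just printed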
00:04:05.608 21:26:25 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:05.608 21:26:25 -- setup/common.sh@32 -- # continue
... (the IFS/read/compare/continue xtrace cycle repeats for every /proc/meminfo key until AnonHugePages matches) ...
00:04:05.609 21:26:25 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:05.609 21:26:25 -- setup/common.sh@33 -- # echo 0
00:04:05.609 21:26:25 -- setup/common.sh@33 -- # return 0
00:04:05.609 21:26:25 -- setup/hugepages.sh@97 -- # anon=0
00:04:05.609 21:26:25 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:04:05.609 21:26:25 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:05.609 21:26:25 -- setup/common.sh@18 -- # local node=
00:04:05.609 21:26:25 -- setup/common.sh@19 -- # local var val
00:04:05.609 21:26:25 -- setup/common.sh@20 -- # local mem_f mem
00:04:05.609 21:26:25 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:05.609 21:26:25 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:05.609 21:26:25 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:05.609 21:26:25 -- setup/common.sh@28 -- # mapfile -t mem
00:04:05.609 21:26:25 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:05.609 21:26:25 -- setup/common.sh@31 -- # IFS=': '
00:04:05.609 21:26:25 -- setup/common.sh@31 -- # read -r var val _
00:04:05.609 21:26:25 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12246324 kB' 'MemFree: 6069596 kB' 'MemAvailable: 10451368 kB' 'Buffers: 35104 kB' 'Cached: 4498796 kB' 'SwapCached: 0 kB' 'Active: 412396 kB' 'Inactive: 4235616 kB' 'Active(anon): 125452 kB' 'Inactive(anon): 0 kB' 'Active(file): 286944 kB' 'Inactive(file): 4235616 kB' 'Unevictable: 28816 kB' 'Mlocked: 27280 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 220 kB' 'Writeback: 0 kB' 'AnonPages: 142972 kB' 'Mapped: 58320 kB' 'Shmem: 2592 kB' 'KReclaimable: 180764 kB' 'Slab: 260988 kB' 'SReclaimable: 180764 kB' 'SUnreclaim: 80224 kB' 'KernelStack: 5020 kB' 'PageTables: 4460 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5598872 kB' 'Committed_AS: 375544 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 20056 kB' 'VmallocChunk: 0 kB' 'Percpu: 6720 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 149356 kB' 'DirectMap2M: 4044800 kB' 'DirectMap1G: 10485760 kB'
00:04:05.609 21:26:25 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:05.609 21:26:25 -- setup/common.sh@32 -- # continue
... (the IFS/read/compare/continue xtrace cycle repeats for every /proc/meminfo key until HugePages_Surp matches) ...
00:04:05.610 21:26:25 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:05.610 21:26:25 -- setup/common.sh@33 -- # echo 0
00:04:05.610 21:26:25 -- setup/common.sh@33 -- # return 0
00:04:05.610 21:26:25 -- setup/hugepages.sh@99 -- # surp=0
00:04:05.610 21:26:25 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:04:05.610 21:26:25 -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:04:05.610 21:26:25 -- setup/common.sh@18 -- # local node=
00:04:05.610 21:26:25 -- setup/common.sh@19 -- # local var val
00:04:05.610 21:26:25 -- setup/common.sh@20 -- # local mem_f mem
00:04:05.610 21:26:25 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:05.610 21:26:25 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:05.610 21:26:25 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:05.610 21:26:25 -- setup/common.sh@28 -- # mapfile -t mem
00:04:05.610 21:26:25 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:05.610 21:26:25 -- setup/common.sh@31 -- # IFS=': '
00:04:05.610 21:26:25 -- setup/common.sh@31 -- # read -r var val _
00:04:05.610 21:26:25 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12246324 kB' 'MemFree: 6069848 kB' 'MemAvailable: 10451620 kB' 'Buffers: 35104 kB' 'Cached: 4498796 kB' 'SwapCached: 0 kB' 'Active: 412160 kB' 'Inactive: 4235616 kB' 'Active(anon): 125216 kB' 'Inactive(anon): 0 kB' 'Active(file): 286944 kB' 'Inactive(file): 4235616 kB' 'Unevictable: 28816 kB' 'Mlocked: 27280 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 220 kB' 'Writeback: 0 kB' 'AnonPages: 142704 kB' 'Mapped: 58296 kB' 'Shmem: 2592 kB' 'KReclaimable: 180764 kB' 'Slab: 260984 kB' 'SReclaimable: 180764 kB' 'SUnreclaim: 80220 kB' 'KernelStack: 4988 kB' 'PageTables: 4348 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5598872 kB' 'Committed_AS: 375544 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 20056 kB' 'VmallocChunk: 0 kB' 'Percpu: 6720 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 149356 kB' 'DirectMap2M: 4044800 kB' 'DirectMap1G: 10485760 kB'
00:04:05.611 21:26:25 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:05.611 21:26:25 -- setup/common.sh@32 -- # continue
... (the IFS/read/compare/continue xtrace cycle repeats for every /proc/meminfo key until HugePages_Rsvd matches) ...
00:04:05.611 21:26:25 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:05.611 21:26:25 -- setup/common.sh@33 -- # echo 0
00:04:05.611 21:26:25 -- setup/common.sh@33 -- # return 0
00:04:05.611 21:26:25 -- setup/hugepages.sh@100 -- # resv=0
00:04:05.611 nr_hugepages=512
21:26:25 -- setup/hugepages.sh@102 -- # echo nr_hugepages=512
00:04:05.611 resv_hugepages=0
21:26:25 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:04:05.611 surplus_hugepages=0
21:26:25 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:04:05.611 anon_hugepages=0
21:26:25 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:04:05.611 21:26:25 -- setup/hugepages.sh@107 -- # (( 512 == nr_hugepages + surp + resv ))
00:04:05.611 21:26:25 -- setup/hugepages.sh@109 -- # (( 512 == nr_hugepages ))
00:04:05.611 21:26:25 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:04:05.611 21:26:25 -- setup/common.sh@17 -- # local get=HugePages_Total
00:04:05.612 21:26:25 -- setup/common.sh@18 -- # local node=
00:04:05.612 21:26:25 -- setup/common.sh@19 -- # local var val
00:04:05.612 21:26:25 -- setup/common.sh@20 -- # local mem_f mem
00:04:05.612 21:26:25 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:05.612 21:26:25 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:05.612 21:26:25 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:05.612 21:26:25 -- setup/common.sh@28 -- # mapfile -t mem
00:04:05.612 21:26:25 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:05.612 21:26:25 -- setup/common.sh@31 -- # IFS=': '
00:04:05.612 21:26:25 -- setup/common.sh@31 -- # read -r var val _
00:04:05.612 21:26:25 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12246324 kB' 'MemFree: 6070492 kB' 'MemAvailable: 10452264 kB' 'Buffers: 35104 kB' 'Cached: 4498796 kB' 'SwapCached: 0 kB' 'Active: 412144 kB' 'Inactive: 4235616 kB' 'Active(anon): 125200 kB' 'Inactive(anon): 0 kB' 'Active(file): 286944 kB' 'Inactive(file): 4235616 kB' 'Unevictable: 28816 kB' 'Mlocked: 27280 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 220 kB' 'Writeback: 0 kB' 'AnonPages: 142688 kB' 'Mapped: 58296 kB' 'Shmem: 2592 kB' 'KReclaimable: 180764 kB' 'Slab: 260976 kB' 'SReclaimable: 180764 kB' 'SUnreclaim: 80212 kB' 'KernelStack: 4972 kB' 'PageTables: 4292 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5598872 kB' 'Committed_AS: 375544 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 20056 kB' 'VmallocChunk: 0 kB' 'Percpu: 6720 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 149356 kB' 'DirectMap2M: 4044800 kB' 'DirectMap1G: 10485760 kB'
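With anon=0, surp=0 and resv=0 established above, the hugepages.sh@107/@109 checks a few entries back reduce to plain arithmetic against these snapshots: HugePages_Total (512) must equal the configured pool plus surplus plus reserved pages, and with no surplus, the pool alone. A worked restatement using this run's values:

#!/usr/bin/env bash
hp_total=512       # HugePages_Total from the snapshot above
nr_hugepages=512   # the pool custom_alloc configured
surp=0             # HugePages_Surp
resv=0             # HugePages_Rsvd
(( hp_total == nr_hugepages + surp + resv )) && echo 'accounting consistent'
(( hp_total == nr_hugepages )) && echo 'no surplus or reserved pages in play'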
00:04:05.612 21:26:25 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:05.612 21:26:25 -- setup/common.sh@32 -- # continue
... (the IFS/read/compare/continue xtrace cycle repeats key by key against HugePages_Total) ...
00:04:05.612 21:26:25 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.612 21:26:25
-- setup/common.sh@32 -- # continue 00:04:05.612 21:26:25 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.612 21:26:25 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.612 21:26:25 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.612 21:26:25 -- setup/common.sh@32 -- # continue 00:04:05.612 21:26:25 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.612 21:26:25 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.612 21:26:25 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.612 21:26:25 -- setup/common.sh@32 -- # continue 00:04:05.612 21:26:25 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.612 21:26:25 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.612 21:26:25 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.612 21:26:25 -- setup/common.sh@32 -- # continue 00:04:05.612 21:26:25 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.612 21:26:25 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.612 21:26:25 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.612 21:26:25 -- setup/common.sh@32 -- # continue 00:04:05.612 21:26:25 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.612 21:26:25 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.612 21:26:25 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.612 21:26:25 -- setup/common.sh@32 -- # continue 00:04:05.612 21:26:25 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.612 21:26:25 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.612 21:26:25 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.612 21:26:25 -- setup/common.sh@32 -- # continue 00:04:05.612 21:26:25 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.612 21:26:25 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.612 21:26:25 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.612 21:26:25 -- setup/common.sh@32 -- # continue 00:04:05.612 21:26:25 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.612 21:26:25 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.612 21:26:25 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.612 21:26:25 -- setup/common.sh@32 -- # continue 00:04:05.612 21:26:25 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.612 21:26:25 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.613 21:26:25 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.613 21:26:25 -- setup/common.sh@32 -- # continue 00:04:05.613 21:26:25 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.613 21:26:25 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.613 21:26:25 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.613 21:26:25 -- setup/common.sh@32 -- # continue 00:04:05.613 21:26:25 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.613 21:26:25 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.613 21:26:25 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.613 21:26:25 -- setup/common.sh@32 -- # continue 00:04:05.613 21:26:25 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.613 21:26:25 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.613 21:26:25 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.613 21:26:25 -- setup/common.sh@32 -- # continue 00:04:05.613 21:26:25 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.613 21:26:25 -- setup/common.sh@31 -- # 
read -r var val _ 00:04:05.613 21:26:25 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.613 21:26:25 -- setup/common.sh@32 -- # continue 00:04:05.613 21:26:25 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.613 21:26:25 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.613 21:26:25 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.613 21:26:25 -- setup/common.sh@32 -- # continue 00:04:05.613 21:26:25 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.613 21:26:25 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.613 21:26:25 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.613 21:26:25 -- setup/common.sh@32 -- # continue 00:04:05.613 21:26:25 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.613 21:26:25 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.613 21:26:25 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.613 21:26:25 -- setup/common.sh@32 -- # continue 00:04:05.613 21:26:25 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.613 21:26:25 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.613 21:26:25 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.613 21:26:25 -- setup/common.sh@32 -- # continue 00:04:05.613 21:26:25 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.613 21:26:25 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.613 21:26:25 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.613 21:26:25 -- setup/common.sh@32 -- # continue 00:04:05.613 21:26:25 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.613 21:26:25 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.613 21:26:25 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.613 21:26:25 -- setup/common.sh@32 -- # continue 00:04:05.613 21:26:25 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.613 21:26:25 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.613 21:26:25 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.613 21:26:25 -- setup/common.sh@32 -- # continue 00:04:05.613 21:26:25 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.613 21:26:25 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.613 21:26:25 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.613 21:26:25 -- setup/common.sh@33 -- # echo 512 00:04:05.613 21:26:25 -- setup/common.sh@33 -- # return 0 00:04:05.613 21:26:25 -- setup/hugepages.sh@110 -- # (( 512 == nr_hugepages + surp + resv )) 00:04:05.613 21:26:25 -- setup/hugepages.sh@112 -- # get_nodes 00:04:05.613 21:26:25 -- setup/hugepages.sh@27 -- # local node 00:04:05.613 21:26:25 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:05.613 21:26:25 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:05.613 21:26:25 -- setup/hugepages.sh@32 -- # no_nodes=1 00:04:05.613 21:26:25 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:05.613 21:26:25 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:05.613 21:26:25 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:05.613 21:26:25 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:05.613 21:26:25 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:05.613 21:26:25 -- setup/common.sh@18 -- # local node=0 00:04:05.613 21:26:25 -- setup/common.sh@19 -- # local var val 00:04:05.613 21:26:25 -- setup/common.sh@20 
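The cascade above is a single call to the get_meminfo helper: it loads a key/value dump into an array and walks it line by line, which is exactly what produces the long compare/continue runs in this trace. A minimal re-sketch in bash, reconstructed from the xtrace alone (the names get, node, mem_f and the strip pattern come straight from the trace; the loop wiring and the not-found return are assumptions, not the verbatim SPDK setup/common.sh):

    #!/usr/bin/env bash
    shopt -s extglob   # required for the +([0-9]) pattern used below

    get_meminfo() {                     # usage: get_meminfo <Key> [<node-id>]
        local get=$1 node=$2
        local var val
        local mem_f mem
        mem_f=/proc/meminfo
        # With a node id, read the per-node view when the kernel exposes one.
        if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        mapfile -t mem < "$mem_f"
        # Per-node meminfo prefixes every line with "Node <id> "; strip it.
        mem=("${mem[@]#Node +([0-9]) }")
        # Scan "Key: value [kB]" lines until the requested key matches.
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue
            echo "$val"
            return 0
        done < <(printf '%s\n' "${mem[@]}")
        return 1   # assumption: signal a key that never matched
    }

    get_meminfo HugePages_Total    # prints 512 in the run above
    get_meminfo HugePages_Surp 0   # node0 view; prints 0 in the trace below

Feeding the whole dump back through printf and read -r is why every field shows up as its own [[ ... ]] comparison in the log; the per-node branch also explains the node0 read that follows.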
00:04:05.613 21:26:25 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:04:05.613 21:26:25 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:05.613 21:26:25 -- setup/common.sh@18 -- # local node=0
00:04:05.613 21:26:25 -- setup/common.sh@19 -- # local var val
00:04:05.613 21:26:25 -- setup/common.sh@20 -- # local mem_f mem
00:04:05.613 21:26:25 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:05.613 21:26:25 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:04:05.613 21:26:25 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:04:05.613 21:26:25 -- setup/common.sh@28 -- # mapfile -t mem
00:04:05.613 21:26:25 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:05.613 21:26:25 -- setup/common.sh@31 -- # IFS=': '
00:04:05.613 21:26:25 -- setup/common.sh@31 -- # read -r var val _
00:04:05.613 21:26:25 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12246324 kB' 'MemFree: 6070252 kB' 'MemUsed: 6176072 kB' 'SwapCached: 0 kB' 'Active: 412144 kB' 'Inactive: 4235616 kB' 'Active(anon): 125200 kB' 'Inactive(anon): 0 kB' 'Active(file): 286944 kB' 'Inactive(file): 4235616 kB' 'Unevictable: 28816 kB' 'Mlocked: 27280 kB' 'Dirty: 220 kB' 'Writeback: 0 kB' 'FilePages: 4533900 kB' 'Mapped: 58296 kB' 'AnonPages: 142948 kB' 'Shmem: 2592 kB' 'KernelStack: 4972 kB' 'PageTables: 4292 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 180764 kB' 'Slab: 260976 kB' 'SReclaimable: 180764 kB' 'SUnreclaim: 80212 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
[xtrace condensed: the same read loop walks every node0 meminfo field, MemTotal through HugePages_Free, with 'continue' on each, until HugePages_Surp matches]
00:04:05.614 21:26:26 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:05.614 21:26:26 -- setup/common.sh@33 -- # echo 0
00:04:05.614 21:26:26 -- setup/common.sh@33 -- # return 0
00:04:05.614 21:26:26 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:04:05.614 21:26:26 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:04:05.614 21:26:26 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:04:05.614 21:26:26 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
node0=512 expecting 512
00:04:05.614 21:26:26 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512'
00:04:05.614 21:26:26 -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]]
00:04:05.614
00:04:05.614 real 0m0.627s
00:04:05.614 user 0m0.238s
00:04:05.614 sys 0m0.429s
00:04:05.614 21:26:26 -- common/autotest_common.sh@1115 -- # xtrace_disable
00:04:05.614 21:26:26 -- common/autotest_common.sh@10 -- # set +x
00:04:05.614 ************************************
00:04:05.614 END TEST custom_alloc
00:04:05.614 ************************************
00:04:05.614 21:26:26 -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc
00:04:05.614 21:26:26 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:04:05.614 21:26:26 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:04:05.614 21:26:26 -- common/autotest_common.sh@10 -- # set +x
00:04:05.614 ************************************
00:04:05.614 START TEST no_shrink_alloc
00:04:05.614 ************************************
00:04:05.614 21:26:26 -- common/autotest_common.sh@1114 -- # no_shrink_alloc
00:04:05.614 21:26:26 -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0
00:04:05.614 21:26:26 -- setup/hugepages.sh@49 -- # local size=2097152
00:04:05.614 21:26:26 -- setup/hugepages.sh@50 -- # (( 2 > 1 ))
00:04:05.614 21:26:26 -- setup/hugepages.sh@51 -- # shift
00:04:05.614 21:26:26 -- setup/hugepages.sh@52 -- # node_ids=('0')
00:04:05.614 21:26:26 -- setup/hugepages.sh@52 -- # local node_ids
00:04:05.614 21:26:26 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:04:05.614 21:26:26 -- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:04:05.614 21:26:26 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0
00:04:05.614 21:26:26 -- setup/hugepages.sh@62 -- # user_nodes=('0')
00:04:05.614 21:26:26 -- setup/hugepages.sh@62 -- # local user_nodes
00:04:05.614 21:26:26 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:04:05.614 21:26:26 -- setup/hugepages.sh@65 -- # local _no_nodes=1
00:04:05.614 21:26:26 -- setup/hugepages.sh@67 -- # nodes_test=()
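custom_alloc passes above by matching each node's pool against its expectation (node0=512 expecting 512), and no_shrink_alloc then sizes its own pool; the trace resumes right after this aside. The step that turns get_test_nr_hugepages 2097152 0 into nr_hugepages=1024 is plain integer division, assuming kB units throughout, which is consistent with 'Hugepagesize: 2048 kB' in the dumps and with 2097152 / 2048 = 1024. A hedged sketch, not the verbatim hugepages.sh:

    # Convert a requested pool size (kB) into a page count.
    default_hugepages=2048   # kB per huge page, from 'Hugepagesize: 2048 kB'
    size=2097152             # kB requested by no_shrink_alloc (2 GiB)
    if (( size >= default_hugepages )); then
        nr_hugepages=$(( size / default_hugepages ))
    fi
    echo "nr_hugepages=$nr_hugepages"   # -> nr_hugepages=1024, as in the trace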
00:04:05.614 21:26:26 -- setup/hugepages.sh@67 -- # local -g nodes_test
00:04:05.614 21:26:26 -- setup/hugepages.sh@69 -- # (( 1 > 0 ))
00:04:05.614 21:26:26 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:04:05.614 21:26:26 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024
00:04:05.614 21:26:26 -- setup/hugepages.sh@73 -- # return 0
00:04:05.614 21:26:26 -- setup/hugepages.sh@198 -- # setup output
00:04:05.614 21:26:26 -- setup/common.sh@9 -- # [[ output == output ]]
00:04:05.614 21:26:26 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:04:05.890 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15,mount@vda:vda16, so not binding PCI dev
00:04:06.149 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver
00:04:06.415 21:26:26 -- setup/hugepages.sh@199 -- # verify_nr_hugepages
00:04:06.415 21:26:26 -- setup/hugepages.sh@89 -- # local node
00:04:06.415 21:26:26 -- setup/hugepages.sh@90 -- # local sorted_t
00:04:06.415 21:26:26 -- setup/hugepages.sh@91 -- # local sorted_s
00:04:06.415 21:26:26 -- setup/hugepages.sh@92 -- # local surp
00:04:06.415 21:26:26 -- setup/hugepages.sh@93 -- # local resv
00:04:06.415 21:26:26 -- setup/hugepages.sh@94 -- # local anon
00:04:06.415 21:26:26 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:04:06.415 21:26:26 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:04:06.415 21:26:26 -- setup/common.sh@17 -- # local get=AnonHugePages
00:04:06.415 21:26:26 -- setup/common.sh@18 -- # local node=
00:04:06.415 21:26:26 -- setup/common.sh@19 -- # local var val
00:04:06.415 21:26:26 -- setup/common.sh@20 -- # local mem_f mem
00:04:06.415 21:26:26 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:06.415 21:26:26 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:06.415 21:26:26 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:06.415 21:26:26 -- setup/common.sh@28 -- # mapfile -t mem
00:04:06.415 21:26:26 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:06.415 21:26:26 -- setup/common.sh@31 -- # IFS=': '
00:04:06.415 21:26:26 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12246324 kB' 'MemFree: 5033160 kB' 'MemAvailable: 9414932 kB' 'Buffers: 35104 kB' 'Cached: 4498796 kB' 'SwapCached: 0 kB' 'Active: 410888 kB' 'Inactive: 4235616 kB' 'Active(anon): 123944 kB' 'Inactive(anon): 0 kB' 'Active(file): 286944 kB' 'Inactive(file): 4235616 kB' 'Unevictable: 28816 kB' 'Mlocked: 27280 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 216 kB' 'Writeback: 0 kB' 'AnonPages: 141524 kB' 'Mapped: 57420 kB' 'Shmem: 2592 kB' 'KReclaimable: 180764 kB' 'Slab: 260876 kB' 'SReclaimable: 180764 kB' 'SUnreclaim: 80112 kB' 'KernelStack: 4960 kB' 'PageTables: 4088 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5074584 kB' 'Committed_AS: 364176 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 19992 kB' 'VmallocChunk: 0 kB' 'Percpu: 6720 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 149356 kB' 'DirectMap2M: 4044800 kB' 'DirectMap1G: 10485760 kB'
00:04:06.415 21:26:26 -- setup/common.sh@31 -- # read -r var val _
[xtrace condensed: every field of the dump, MemTotal through HardwareCorrupted, is compared against AnonHugePages and skipped with 'continue']
00:04:06.416 21:26:26 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:06.416 21:26:26 -- setup/common.sh@33 -- # echo 0
00:04:06.416 21:26:26 -- setup/common.sh@33 -- # return 0
00:04:06.416 21:26:26 -- setup/hugepages.sh@97 -- # anon=0
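verify_nr_hugepages opens with [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]: the kernel marks the active transparent-hugepage mode by bracketing it, so the test only counts AnonHugePages when THP is not pinned to [never]. A sketch of that gate in bash, using the standard sysfs path; the surrounding wiring is assumed rather than copied from hugepages.sh, and get_meminfo is the helper sketched earlier:

    # THP gate as suggested by the trace above.
    thp=$(</sys/kernel/mm/transparent_hugepage/enabled)   # e.g. "always [madvise] never"
    anon=0
    if [[ $thp != *"[never]"* ]]; then
        anon=$(get_meminfo AnonHugePages)   # 0 kB in this run, so anon stays 0
    fi

Because AnonHugePages is 0 kB in the dump above, the gate fires but anon still ends up 0, matching the anon=0 line in the trace.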
00:04:06.416 21:26:26 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:04:06.416 21:26:26 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:06.416 21:26:26 -- setup/common.sh@18 -- # local node=
00:04:06.416 21:26:26 -- setup/common.sh@19 -- # local var val
00:04:06.416 21:26:26 -- setup/common.sh@20 -- # local mem_f mem
00:04:06.416 21:26:26 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:06.416 21:26:26 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:06.416 21:26:26 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:06.416 21:26:26 -- setup/common.sh@28 -- # mapfile -t mem
00:04:06.416 21:26:26 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:06.416 21:26:26 -- setup/common.sh@31 -- # IFS=': '
00:04:06.417 21:26:26 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12246324 kB' 'MemFree: 5033160 kB' 'MemAvailable: 9414932 kB' 'Buffers: 35104 kB' 'Cached: 4498796 kB' 'SwapCached: 0 kB' 'Active: 410748 kB' 'Inactive: 4235616 kB' 'Active(anon): 123804 kB' 'Inactive(anon): 0 kB' 'Active(file): 286944 kB' 'Inactive(file): 4235616 kB' 'Unevictable: 28816 kB' 'Mlocked: 27280 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 216 kB' 'Writeback: 0 kB' 'AnonPages: 141392 kB' 'Mapped: 57420 kB' 'Shmem: 2592 kB' 'KReclaimable: 180764 kB' 'Slab: 260872 kB' 'SReclaimable: 180764 kB' 'SUnreclaim: 80108 kB' 'KernelStack: 4928 kB' 'PageTables: 3988 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5074584 kB' 'Committed_AS: 364176 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 19976 kB' 'VmallocChunk: 0 kB' 'Percpu: 6720 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 149356 kB' 'DirectMap2M: 4044800 kB' 'DirectMap1G: 10485760 kB'
00:04:06.417 21:26:26 -- setup/common.sh@31 -- # read -r var val _
[xtrace condensed: the read loop walks the dump from MemTotal through HugePages_Rsvd, 'continue' on every non-match, until HugePages_Surp matches]
00:04:06.418 21:26:26 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:06.418 21:26:26 -- setup/common.sh@33 -- # echo 0
00:04:06.418 21:26:26 -- setup/common.sh@33 -- # return 0
00:04:06.418 21:26:26 -- setup/hugepages.sh@99 -- # surp=0
00:04:06.418 21:26:26 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:04:06.418 21:26:26 -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:04:06.418 21:26:26 -- setup/common.sh@18 -- # local node=
00:04:06.418 21:26:26 -- setup/common.sh@19 -- # local var val
00:04:06.418 21:26:26 -- setup/common.sh@20 -- # local mem_f mem
00:04:06.418 21:26:26 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:06.418 21:26:26 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:06.418 21:26:26 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:06.418 21:26:26 -- setup/common.sh@28 -- # mapfile -t mem
00:04:06.418 21:26:26 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:06.418 21:26:26 -- setup/common.sh@31 -- # IFS=': '
00:04:06.418 21:26:26 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12246324 kB' 'MemFree: 5033160 kB' 'MemAvailable: 9414932 kB' 'Buffers: 35104 kB' 'Cached: 4498796 kB' 'SwapCached: 0 kB' 'Active: 410772 kB' 'Inactive: 4235616 kB' 'Active(anon): 123828 kB' 'Inactive(anon): 0 kB' 'Active(file): 286944 kB' 'Inactive(file): 4235616 kB' 'Unevictable: 28816 kB' 'Mlocked: 27280 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 216 kB' 'Writeback: 0 kB' 'AnonPages: 141388 kB' 'Mapped: 57420 kB' 'Shmem: 2592 kB' 'KReclaimable: 180764 kB' 'Slab: 260872 kB' 'SReclaimable: 180764 kB' 'SUnreclaim: 80108 kB' 'KernelStack: 4944 kB' 'PageTables: 4048 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5074584 kB' 'Committed_AS: 364176 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 19976 kB' 'VmallocChunk: 0 kB' 'Percpu: 6720 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 149356 kB' 'DirectMap2M: 4044800 kB' 'DirectMap1G: 10485760 kB'
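With anon and surp collected and the HugePages_Rsvd read in flight below, the (( 512 == nr_hugepages + surp + resv )) checks seen earlier, and their 1024-page analogue here, reduce to one identity: the kernel's HugePages_Total must equal the requested pool plus surplus plus reserved pages. A hedged reconstruction of that comparison, with variable names taken from the trace and the assembly around them assumed (get_meminfo as sketched earlier):

    # Hugepage accounting check as implied by the (( ... )) lines in this trace.
    surp=$(get_meminfo HugePages_Surp)    # 0 in this run
    resv=$(get_meminfo HugePages_Rsvd)    # the scan below is fetching this value
    nr_hugepages=1024                     # requested by no_shrink_alloc
    (( $(get_meminfo HugePages_Total) == nr_hugepages + surp + resv )) &&
        echo "hugepage accounting consistent"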
00:04:06.418 21:26:26 -- setup/common.sh@31 -- # read -r var val _
[xtrace condensed: per-key scan against HugePages_Rsvd in progress; the fields from MemTotal through Writeback are compared and skipped before the trace picks up again here]
00:04:06.419 21:26:26 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:06.419 21:26:26 -- setup/common.sh@32 -- # continue
00:04:06.419 21:26:26 -- setup/common.sh@31
-- # read -r var val _ 00:04:06.419 21:26:26 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.419 21:26:26 -- setup/common.sh@32 -- # continue 00:04:06.419 21:26:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.419 21:26:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.419 21:26:26 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.419 21:26:26 -- setup/common.sh@32 -- # continue 00:04:06.419 21:26:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.419 21:26:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.419 21:26:26 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.419 21:26:26 -- setup/common.sh@32 -- # continue 00:04:06.419 21:26:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.419 21:26:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.419 21:26:26 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.419 21:26:26 -- setup/common.sh@32 -- # continue 00:04:06.419 21:26:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.419 21:26:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.419 21:26:26 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.419 21:26:26 -- setup/common.sh@32 -- # continue 00:04:06.419 21:26:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.419 21:26:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.419 21:26:26 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.419 21:26:26 -- setup/common.sh@32 -- # continue 00:04:06.419 21:26:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.419 21:26:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.419 21:26:26 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.419 21:26:26 -- setup/common.sh@32 -- # continue 00:04:06.419 21:26:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.419 21:26:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.419 21:26:26 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.419 21:26:26 -- setup/common.sh@32 -- # continue 00:04:06.419 21:26:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.419 21:26:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.419 21:26:26 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.419 21:26:26 -- setup/common.sh@32 -- # continue 00:04:06.419 21:26:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.419 21:26:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.419 21:26:26 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.419 21:26:26 -- setup/common.sh@32 -- # continue 00:04:06.419 21:26:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.419 21:26:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.419 21:26:26 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.419 21:26:26 -- setup/common.sh@32 -- # continue 00:04:06.419 21:26:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.419 21:26:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.419 21:26:26 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.419 21:26:26 -- setup/common.sh@32 -- # continue 00:04:06.419 21:26:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.419 21:26:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.419 21:26:26 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.419 21:26:26 -- setup/common.sh@32 -- # continue 
00:04:06.419 21:26:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.419 21:26:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.419 21:26:26 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.419 21:26:26 -- setup/common.sh@32 -- # continue 00:04:06.419 21:26:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.419 21:26:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.419 21:26:26 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.419 21:26:26 -- setup/common.sh@32 -- # continue 00:04:06.419 21:26:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.419 21:26:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.419 21:26:26 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.419 21:26:26 -- setup/common.sh@32 -- # continue 00:04:06.419 21:26:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.419 21:26:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.419 21:26:26 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.419 21:26:26 -- setup/common.sh@32 -- # continue 00:04:06.419 21:26:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.419 21:26:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.419 21:26:26 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.419 21:26:26 -- setup/common.sh@32 -- # continue 00:04:06.419 21:26:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.419 21:26:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.419 21:26:26 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.419 21:26:26 -- setup/common.sh@32 -- # continue 00:04:06.419 21:26:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.419 21:26:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.419 21:26:26 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.419 21:26:26 -- setup/common.sh@32 -- # continue 00:04:06.419 21:26:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.419 21:26:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.419 21:26:26 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.419 21:26:26 -- setup/common.sh@32 -- # continue 00:04:06.419 21:26:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.419 21:26:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.419 21:26:26 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.419 21:26:26 -- setup/common.sh@32 -- # continue 00:04:06.419 21:26:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.419 21:26:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.419 21:26:26 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.419 21:26:26 -- setup/common.sh@32 -- # continue 00:04:06.419 21:26:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.419 21:26:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.419 21:26:26 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.419 21:26:26 -- setup/common.sh@32 -- # continue 00:04:06.419 21:26:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.419 21:26:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.419 21:26:26 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.419 21:26:26 -- setup/common.sh@32 -- # continue 00:04:06.419 21:26:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.419 21:26:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.419 21:26:26 -- 
setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.419 21:26:26 -- setup/common.sh@32 -- # continue 00:04:06.419 21:26:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.419 21:26:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.419 21:26:26 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.419 21:26:26 -- setup/common.sh@32 -- # continue 00:04:06.419 21:26:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.419 21:26:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.419 21:26:26 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.419 21:26:26 -- setup/common.sh@33 -- # echo 0 00:04:06.419 21:26:26 -- setup/common.sh@33 -- # return 0 00:04:06.419 21:26:26 -- setup/hugepages.sh@100 -- # resv=0 00:04:06.419 nr_hugepages=1024 00:04:06.419 resv_hugepages=0 00:04:06.419 surplus_hugepages=0 00:04:06.419 anon_hugepages=0 00:04:06.419 21:26:26 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:06.419 21:26:26 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:06.419 21:26:26 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:06.419 21:26:26 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:06.419 21:26:26 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:06.419 21:26:26 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:06.419 21:26:26 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:06.419 21:26:26 -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:06.419 21:26:26 -- setup/common.sh@18 -- # local node= 00:04:06.419 21:26:26 -- setup/common.sh@19 -- # local var val 00:04:06.419 21:26:26 -- setup/common.sh@20 -- # local mem_f mem 00:04:06.419 21:26:26 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:06.419 21:26:26 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:06.419 21:26:26 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:06.419 21:26:26 -- setup/common.sh@28 -- # mapfile -t mem 00:04:06.419 21:26:26 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:06.419 21:26:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.419 21:26:26 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12246324 kB' 'MemFree: 5033160 kB' 'MemAvailable: 9414932 kB' 'Buffers: 35104 kB' 'Cached: 4498796 kB' 'SwapCached: 0 kB' 'Active: 410788 kB' 'Inactive: 4235616 kB' 'Active(anon): 123844 kB' 'Inactive(anon): 0 kB' 'Active(file): 286944 kB' 'Inactive(file): 4235616 kB' 'Unevictable: 28816 kB' 'Mlocked: 27280 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 216 kB' 'Writeback: 0 kB' 'AnonPages: 141396 kB' 'Mapped: 57420 kB' 'Shmem: 2592 kB' 'KReclaimable: 180764 kB' 'Slab: 260872 kB' 'SReclaimable: 180764 kB' 'SUnreclaim: 80108 kB' 'KernelStack: 4944 kB' 'PageTables: 4048 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5074584 kB' 'Committed_AS: 364176 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 19976 kB' 'VmallocChunk: 0 kB' 'Percpu: 6720 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 149356 kB' 'DirectMap2M: 4044800 kB' 'DirectMap1G: 10485760 kB' 00:04:06.419 21:26:26 -- setup/common.sh@31 -- # read -r var val _ 
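The scan traced above is setup/common.sh's get_meminfo doing a linear pass over /proc/meminfo: mapfile slurps the file, the "Node +([0-9]) " prefix is stripped so per-node files parse identically, and each line is split with IFS=': ' until the requested field matches (the \H\u\g\e\P\a\g\e\s\_\R\s\v\d backslashes are just how xtrace renders the unquoted pattern operand of [[ ]]). A minimal, self-contained sketch of that lookup, reconstructed from the trace rather than copied from the SPDK source:

#!/usr/bin/env bash
# Sketch of get_meminfo as reconstructed from the trace above.
# Usage: get_meminfo FIELD [NUMA_NODE]; prints the field's value.
shopt -s extglob

get_meminfo() {
    local get=$1 node=${2:-}
    local var val _
    local mem_f=/proc/meminfo mem line

    # With a node argument, read the per-node file under /sys instead;
    # with no node the test below fails and /proc/meminfo is kept.
    if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi

    mapfile -t mem < "$mem_f"
    # Per-node meminfo prefixes every line with "Node N "; strip it so
    # both file formats parse the same way.
    mem=("${mem[@]#Node +([0-9]) }")

    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"
        [[ $var == "$get" ]] || continue    # the continue traced per field
        echo "$val"
        return 0
    done
    return 1
}

get_meminfo HugePages_Rsvd     # -> 0 in the run above
get_meminfo HugePages_Surp 0   # same field, restricted to NUMA node 0

A linear scan is reasonable here: /proc/meminfo is a few dozen lines and the helper runs only a handful of times per verification pass, which is also why the trace tolerates printing every compared field.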
00:04:06.419 21:26:26 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:04:06.419 21:26:26 -- setup/common.sh@17 -- # local get=HugePages_Total
00:04:06.419 21:26:26 -- setup/common.sh@18 -- # local node=
00:04:06.419 21:26:26 -- setup/common.sh@19 -- # local var val
00:04:06.419 21:26:26 -- setup/common.sh@20 -- # local mem_f mem
00:04:06.419 21:26:26 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:06.419 21:26:26 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:06.419 21:26:26 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:06.419 21:26:26 -- setup/common.sh@28 -- # mapfile -t mem
00:04:06.419 21:26:26 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:06.419 21:26:26 -- setup/common.sh@31 -- # IFS=': '
00:04:06.419 21:26:26 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12246324 kB' 'MemFree: 5033160 kB' 'MemAvailable: 9414932 kB' 'Buffers: 35104 kB' 'Cached: 4498796 kB' 'SwapCached: 0 kB' 'Active: 410788 kB' 'Inactive: 4235616 kB' 'Active(anon): 123844 kB' 'Inactive(anon): 0 kB' 'Active(file): 286944 kB' 'Inactive(file): 4235616 kB' 'Unevictable: 28816 kB' 'Mlocked: 27280 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 216 kB' 'Writeback: 0 kB' 'AnonPages: 141396 kB' 'Mapped: 57420 kB' 'Shmem: 2592 kB' 'KReclaimable: 180764 kB' 'Slab: 260872 kB' 'SReclaimable: 180764 kB' 'SUnreclaim: 80108 kB' 'KernelStack: 4944 kB' 'PageTables: 4048 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5074584 kB' 'Committed_AS: 364176 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 19976 kB' 'VmallocChunk: 0 kB' 'Percpu: 6720 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 149356 kB' 'DirectMap2M: 4044800 kB' 'DirectMap1G: 10485760 kB'
00:04:06.419 21:26:26 -- setup/common.sh@31 -- # read -r var val _
[... per-field scan of /proc/meminfo for HugePages_Total, mismatches -> continue ...]
00:04:06.421 21:26:26 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:06.421 21:26:26 -- setup/common.sh@33 -- # echo 1024
00:04:06.421 21:26:26 -- setup/common.sh@33 -- # return 0
00:04:06.421 21:26:26 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:04:06.421 21:26:26 -- setup/hugepages.sh@112 -- # get_nodes
00:04:06.421 21:26:26 -- setup/hugepages.sh@27 -- # local node
00:04:06.421 21:26:26 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:06.421 21:26:26 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:04:06.421 21:26:26 -- setup/hugepages.sh@32 -- # no_nodes=1
00:04:06.421 21:26:26 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:04:06.421 21:26:26 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:04:06.421 21:26:26 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:04:06.421 21:26:26 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
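get_nodes enumerates /sys/devices/system/node/node+([0-9]) (a single node on this VM, hence no_nodes=1), seeds each node's expected count, and the @115-@117 loop folds reserved and surplus pages into it before comparing. A rough sketch of that bookkeeping, reusing the get_meminfo helper sketched above; the variable names mirror the trace but the code is illustrative, not the SPDK script:

# Sketch of the per-node accounting step from the hugepages.sh trace.
shopt -s extglob nullglob

declare -a nodes_sys nodes_test
expected=1024 resv=0

for node_path in /sys/devices/system/node/node+([0-9]); do
    node=${node_path##*node}
    nodes_sys[node]=$expected     # what the kernel was asked to hold
    nodes_test[node]=$expected    # what this node should end up with
done
no_nodes=${#nodes_sys[@]}
(( no_nodes > 0 )) || exit 1

for node in "${!nodes_test[@]}"; do
    (( nodes_test[node] += resv ))                 # reserved pages count too
    surp=$(get_meminfo HugePages_Surp "$node")     # per-node surplus
    (( nodes_test[node] += surp ))
    echo "node$node=${nodes_test[node]} expecting ${nodes_sys[node]}"
done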
00:04:06.421 21:26:26 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:06.421 21:26:26 -- setup/common.sh@18 -- # local node=0
00:04:06.421 21:26:26 -- setup/common.sh@19 -- # local var val
00:04:06.421 21:26:26 -- setup/common.sh@20 -- # local mem_f mem
00:04:06.421 21:26:26 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:06.421 21:26:26 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:04:06.421 21:26:26 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:04:06.421 21:26:26 -- setup/common.sh@28 -- # mapfile -t mem
00:04:06.421 21:26:26 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:06.421 21:26:26 -- setup/common.sh@31 -- # IFS=': '
00:04:06.421 21:26:26 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12246324 kB' 'MemFree: 5033160 kB' 'MemUsed: 7213164 kB' 'SwapCached: 0 kB' 'Active: 410784 kB' 'Inactive: 4235616 kB' 'Active(anon): 123840 kB' 'Inactive(anon): 0 kB' 'Active(file): 286944 kB' 'Inactive(file): 4235616 kB' 'Unevictable: 28816 kB' 'Mlocked: 27280 kB' 'Dirty: 216 kB' 'Writeback: 0 kB' 'FilePages: 4533900 kB' 'Mapped: 57420 kB' 'AnonPages: 141380 kB' 'Shmem: 2592 kB' 'KernelStack: 4928 kB' 'PageTables: 3988 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 180764 kB' 'Slab: 260872 kB' 'SReclaimable: 180764 kB' 'SUnreclaim: 80108 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
00:04:06.421 21:26:26 -- setup/common.sh@31 -- # read -r var val _
[... per-field scan of node0 meminfo for HugePages_Surp, mismatches -> continue ...]
00:04:06.422 21:26:26 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:06.422 21:26:26 -- setup/common.sh@33 -- # echo 0
00:04:06.422 21:26:26 -- setup/common.sh@33 -- # return 0
00:04:06.422 21:26:26 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:04:06.422 21:26:26 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:04:06.422 21:26:26 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:04:06.422 21:26:26 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:04:06.422 node0=1024 expecting 1024
00:04:06.422 21:26:26 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
00:04:06.422 21:26:26 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
00:04:06.422 21:26:26 -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no
00:04:06.422 21:26:26 -- setup/hugepages.sh@202 -- # NRHUGE=512
00:04:06.422 21:26:26 -- setup/hugepages.sh@202 -- # setup output
00:04:06.422 21:26:26 -- setup/common.sh@9 -- # [[ output == output ]]
00:04:06.422 21:26:26 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:04:06.722 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15,mount@vda:vda16, so not binding PCI dev
00:04:06.722 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver
00:04:06.984 INFO: Requested 512 hugepages but 1024 already allocated on node0
00:04:06.985 21:26:27 -- setup/hugepages.sh@204 -- # verify_nr_hugepages
00:04:06.985 21:26:27 -- setup/hugepages.sh@89 -- # local node
00:04:06.985 21:26:27 -- setup/hugepages.sh@90 -- # local sorted_t
00:04:06.985 21:26:27 -- setup/hugepages.sh@91 -- # local sorted_s
00:04:06.985 21:26:27 -- setup/hugepages.sh@92 -- # local surp
00:04:06.985 21:26:27 -- setup/hugepages.sh@93 -- # local resv
00:04:06.985 21:26:27 -- setup/hugepages.sh@94 -- # local anon
00:04:06.985 21:26:27 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:04:06.985 21:26:27 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:04:06.985 21:26:27 -- setup/common.sh@17 -- # local get=AnonHugePages
00:04:06.985 21:26:27 -- setup/common.sh@18 -- # local node=
00:04:06.985 21:26:27 -- setup/common.sh@19 -- # local var val
00:04:06.985 21:26:27 -- setup/common.sh@20 -- # local mem_f mem
00:04:06.985 21:26:27 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:06.985 21:26:27 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:06.985 21:26:27 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:06.985 21:26:27 -- setup/common.sh@28 -- # mapfile -t mem
00:04:06.985 21:26:27 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:06.985 21:26:27 -- setup/common.sh@31 -- # IFS=': '
00:04:06.985 21:26:27 -- setup/common.sh@31 -- # read -r var val _
00:04:06.985 21:26:27 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12246324 kB' 'MemFree: 5036156 kB' 'MemAvailable: 9417928 kB' 'Buffers: 35104 kB' 'Cached: 4498796 kB' 'SwapCached: 0 kB' 'Active: 411024 kB' 'Inactive: 4235616 kB' 'Active(anon): 124080 kB' 'Inactive(anon): 0 kB' 'Active(file): 286944 kB' 'Inactive(file): 4235616 kB' 'Unevictable: 28816 kB' 'Mlocked: 27280 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 141620 kB' 'Mapped: 57404 kB' 'Shmem: 2592 kB' 'KReclaimable: 180764 kB' 'Slab: 260872 kB' 'SReclaimable: 180764 kB' 'SUnreclaim: 80108 kB' 'KernelStack: 4900 kB' 'PageTables: 4008 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5074584 kB' 'Committed_AS: 364176 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 20056 kB' 'VmallocChunk: 0 kB' 'Percpu: 6720 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 149356 kB' 'DirectMap2M: 4044800 kB' 'DirectMap1G: 10485760 kB'
[... per-field scan of /proc/meminfo for AnonHugePages, mismatches -> continue ...]
00:04:06.986 21:26:27 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:06.986 21:26:27 -- setup/common.sh@33 -- # echo 0
00:04:06.986 21:26:27 -- setup/common.sh@33 -- # return 0
00:04:06.986 21:26:27 -- setup/hugepages.sh@97 -- # anon=0
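verify_nr_hugepages consults AnonHugePages only because the THP mode checked at hugepages.sh@96 reads "always [madvise] never", i.e. not "[never]"; transparent huge pages, if active, could distort the counters being verified, and anon=0 confirms none are in play. A small sketch of that gate; the sysfs knob is the standard kernel path, and get_meminfo is the helper sketched earlier:

# Sketch of the THP gate at hugepages.sh@96-97.
thp=$(< /sys/kernel/mm/transparent_hugepage/enabled)  # e.g. "always [madvise] never"

anon=0
if [[ $thp != *"[never]"* ]]; then
    # THP is enabled in some form, so anonymous huge pages could exist
    # and must be accounted for (in kB, from /proc/meminfo).
    anon=$(get_meminfo AnonHugePages)
fi
echo "anon_hugepages=$anon"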
'KReclaimable: 180764 kB' 'Slab: 260884 kB' 'SReclaimable: 180764 kB' 'SUnreclaim: 80120 kB' 'KernelStack: 4944 kB' 'PageTables: 4044 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5074584 kB' 'Committed_AS: 364176 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 20024 kB' 'VmallocChunk: 0 kB' 'Percpu: 6720 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 149356 kB' 'DirectMap2M: 4044800 kB' 'DirectMap1G: 10485760 kB' 00:04:06.986 21:26:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.986 21:26:27 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.986 21:26:27 -- setup/common.sh@32 -- # continue 00:04:06.986 21:26:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.986 21:26:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.986 21:26:27 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.986 21:26:27 -- setup/common.sh@32 -- # continue 00:04:06.986 21:26:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.986 21:26:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.987 21:26:27 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.987 21:26:27 -- setup/common.sh@32 -- # continue 00:04:06.987 21:26:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.987 21:26:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.987 21:26:27 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.987 21:26:27 -- setup/common.sh@32 -- # continue 00:04:06.987 21:26:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.987 21:26:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.987 21:26:27 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.987 21:26:27 -- setup/common.sh@32 -- # continue 00:04:06.987 21:26:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.987 21:26:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.987 21:26:27 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.987 21:26:27 -- setup/common.sh@32 -- # continue 00:04:06.987 21:26:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.987 21:26:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.987 21:26:27 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.987 21:26:27 -- setup/common.sh@32 -- # continue 00:04:06.987 21:26:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.987 21:26:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.987 21:26:27 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.987 21:26:27 -- setup/common.sh@32 -- # continue 00:04:06.987 21:26:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.987 21:26:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.987 21:26:27 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.987 21:26:27 -- setup/common.sh@32 -- # continue 00:04:06.987 21:26:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.987 21:26:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.987 21:26:27 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.987 21:26:27 -- setup/common.sh@32 -- # continue 00:04:06.987 21:26:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.987 21:26:27 
-- setup/common.sh@31 -- # read -r var val _ 00:04:06.987 21:26:27 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.987 21:26:27 -- setup/common.sh@32 -- # continue 00:04:06.987 21:26:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.987 21:26:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.987 21:26:27 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.987 21:26:27 -- setup/common.sh@32 -- # continue 00:04:06.987 21:26:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.987 21:26:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.987 21:26:27 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.987 21:26:27 -- setup/common.sh@32 -- # continue 00:04:06.987 21:26:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.987 21:26:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.987 21:26:27 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.987 21:26:27 -- setup/common.sh@32 -- # continue 00:04:06.987 21:26:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.987 21:26:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.987 21:26:27 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.987 21:26:27 -- setup/common.sh@32 -- # continue 00:04:06.987 21:26:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.987 21:26:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.987 21:26:27 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.987 21:26:27 -- setup/common.sh@32 -- # continue 00:04:06.987 21:26:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.987 21:26:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.987 21:26:27 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.987 21:26:27 -- setup/common.sh@32 -- # continue 00:04:06.987 21:26:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.987 21:26:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.987 21:26:27 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.987 21:26:27 -- setup/common.sh@32 -- # continue 00:04:06.987 21:26:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.987 21:26:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.987 21:26:27 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.987 21:26:27 -- setup/common.sh@32 -- # continue 00:04:06.987 21:26:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.987 21:26:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.987 21:26:27 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.987 21:26:27 -- setup/common.sh@32 -- # continue 00:04:06.987 21:26:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.987 21:26:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.987 21:26:27 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.987 21:26:27 -- setup/common.sh@32 -- # continue 00:04:06.987 21:26:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.987 21:26:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.987 21:26:27 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.987 21:26:27 -- setup/common.sh@32 -- # continue 00:04:06.987 21:26:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.987 21:26:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.987 21:26:27 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.987 21:26:27 -- setup/common.sh@32 -- # continue 
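
The pass traced above is setup/common.sh's get_meminfo walking /proc/meminfo key by key: each line is split with IFS=': ', compared against the escaped target (here HugePages_Surp), and every non-match hits the logged continue until the wanted key echoes its value. A minimal sketch of that lookup pattern — the function name and final echo are illustrative, not the test's exact code:

    get_meminfo_sketch() {
        # Print the value of a single /proc/meminfo key, e.g. HugePages_Surp.
        local get=$1 var val _
        while IFS=': ' read -r var val _; do
            # Non-matching keys fall through to continue, mirroring the
            # "[[ key == \T\a\r\g\e\t ]] / continue" pairs in the trace.
            [[ $var == "$get" ]] || continue
            echo "${val:-0}"
            return 0
        done < /proc/meminfo
        return 1
    }
    get_meminfo_sketch HugePages_Surp   # prints 0 on this runner

The same scan continues below until HugePages_Surp matches.
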
00:04:06.987 21:26:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.987 21:26:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.987 21:26:27 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.987 21:26:27 -- setup/common.sh@32 -- # continue 00:04:06.987 21:26:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.987 21:26:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.987 21:26:27 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.987 21:26:27 -- setup/common.sh@32 -- # continue 00:04:06.987 21:26:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.987 21:26:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.987 21:26:27 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.987 21:26:27 -- setup/common.sh@32 -- # continue 00:04:06.987 21:26:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.987 21:26:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.987 21:26:27 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.987 21:26:27 -- setup/common.sh@32 -- # continue 00:04:06.987 21:26:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.987 21:26:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.987 21:26:27 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.987 21:26:27 -- setup/common.sh@32 -- # continue 00:04:06.987 21:26:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.987 21:26:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.987 21:26:27 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.987 21:26:27 -- setup/common.sh@32 -- # continue 00:04:06.987 21:26:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.987 21:26:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.987 21:26:27 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.987 21:26:27 -- setup/common.sh@32 -- # continue 00:04:06.987 21:26:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.987 21:26:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.987 21:26:27 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.987 21:26:27 -- setup/common.sh@32 -- # continue 00:04:06.987 21:26:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.987 21:26:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.987 21:26:27 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.987 21:26:27 -- setup/common.sh@32 -- # continue 00:04:06.987 21:26:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.987 21:26:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.987 21:26:27 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.987 21:26:27 -- setup/common.sh@32 -- # continue 00:04:06.987 21:26:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.988 21:26:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.988 21:26:27 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.988 21:26:27 -- setup/common.sh@32 -- # continue 00:04:06.988 21:26:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.988 21:26:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.988 21:26:27 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.988 21:26:27 -- setup/common.sh@32 -- # continue 00:04:06.988 21:26:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.988 21:26:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.988 21:26:27 -- setup/common.sh@32 -- # [[ 
VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.988 21:26:27 -- setup/common.sh@32 -- # continue 00:04:06.988 21:26:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.988 21:26:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.988 21:26:27 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.988 21:26:27 -- setup/common.sh@32 -- # continue 00:04:06.988 21:26:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.988 21:26:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.988 21:26:27 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.988 21:26:27 -- setup/common.sh@32 -- # continue 00:04:06.988 21:26:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.988 21:26:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.988 21:26:27 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.988 21:26:27 -- setup/common.sh@32 -- # continue 00:04:06.988 21:26:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.988 21:26:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.988 21:26:27 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.988 21:26:27 -- setup/common.sh@32 -- # continue 00:04:06.988 21:26:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.988 21:26:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.988 21:26:27 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.988 21:26:27 -- setup/common.sh@32 -- # continue 00:04:06.988 21:26:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.988 21:26:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.988 21:26:27 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.988 21:26:27 -- setup/common.sh@32 -- # continue 00:04:06.988 21:26:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.988 21:26:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.988 21:26:27 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.988 21:26:27 -- setup/common.sh@32 -- # continue 00:04:06.988 21:26:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.988 21:26:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.988 21:26:27 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.988 21:26:27 -- setup/common.sh@32 -- # continue 00:04:06.988 21:26:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.988 21:26:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.988 21:26:27 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.988 21:26:27 -- setup/common.sh@32 -- # continue 00:04:06.988 21:26:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.988 21:26:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.988 21:26:27 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.988 21:26:27 -- setup/common.sh@32 -- # continue 00:04:06.988 21:26:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.988 21:26:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.988 21:26:27 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.988 21:26:27 -- setup/common.sh@32 -- # continue 00:04:06.988 21:26:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.988 21:26:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.988 21:26:27 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.988 21:26:27 -- setup/common.sh@32 -- # continue 00:04:06.988 21:26:27 -- setup/common.sh@31 
-- # IFS=': ' 00:04:06.988 21:26:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.988 21:26:27 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.988 21:26:27 -- setup/common.sh@32 -- # continue 00:04:06.988 21:26:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.988 21:26:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.988 21:26:27 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.988 21:26:27 -- setup/common.sh@33 -- # echo 0 00:04:06.988 21:26:27 -- setup/common.sh@33 -- # return 0 00:04:06.988 21:26:27 -- setup/hugepages.sh@99 -- # surp=0 00:04:06.988 21:26:27 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:06.988 21:26:27 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:06.988 21:26:27 -- setup/common.sh@18 -- # local node= 00:04:06.988 21:26:27 -- setup/common.sh@19 -- # local var val 00:04:06.988 21:26:27 -- setup/common.sh@20 -- # local mem_f mem 00:04:06.988 21:26:27 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:06.988 21:26:27 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:06.988 21:26:27 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:06.988 21:26:27 -- setup/common.sh@28 -- # mapfile -t mem 00:04:06.988 21:26:27 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:06.988 21:26:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.988 21:26:27 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12246324 kB' 'MemFree: 5035904 kB' 'MemAvailable: 9417676 kB' 'Buffers: 35104 kB' 'Cached: 4498796 kB' 'SwapCached: 0 kB' 'Active: 410444 kB' 'Inactive: 4235616 kB' 'Active(anon): 123500 kB' 'Inactive(anon): 0 kB' 'Active(file): 286944 kB' 'Inactive(file): 4235616 kB' 'Unevictable: 28816 kB' 'Mlocked: 27280 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 141272 kB' 'Mapped: 57420 kB' 'Shmem: 2592 kB' 'KReclaimable: 180764 kB' 'Slab: 260884 kB' 'SReclaimable: 180764 kB' 'SUnreclaim: 80120 kB' 'KernelStack: 4928 kB' 'PageTables: 3988 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5074584 kB' 'Committed_AS: 364176 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 20040 kB' 'VmallocChunk: 0 kB' 'Percpu: 6720 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 149356 kB' 'DirectMap2M: 4044800 kB' 'DirectMap1G: 10485760 kB' 00:04:06.988 21:26:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.988 21:26:27 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.988 21:26:27 -- setup/common.sh@32 -- # continue 00:04:06.988 21:26:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.988 21:26:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.988 21:26:27 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.988 21:26:27 -- setup/common.sh@32 -- # continue 00:04:06.988 21:26:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.988 21:26:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.988 21:26:27 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.988 21:26:27 -- setup/common.sh@32 -- # continue 00:04:06.988 21:26:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.988 21:26:27 -- 
setup/common.sh@31 -- # read -r var val _ 00:04:06.988 21:26:27 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.988 21:26:27 -- setup/common.sh@32 -- # continue 00:04:06.988 21:26:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.988 21:26:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.988 21:26:27 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.988 21:26:27 -- setup/common.sh@32 -- # continue 00:04:06.988 21:26:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.988 21:26:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.988 21:26:27 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.988 21:26:27 -- setup/common.sh@32 -- # continue 00:04:06.988 21:26:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.988 21:26:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.988 21:26:27 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.988 21:26:27 -- setup/common.sh@32 -- # continue 00:04:06.988 21:26:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.988 21:26:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.988 21:26:27 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.988 21:26:27 -- setup/common.sh@32 -- # continue 00:04:06.989 21:26:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.989 21:26:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.989 21:26:27 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.989 21:26:27 -- setup/common.sh@32 -- # continue 00:04:06.989 21:26:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.989 21:26:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.989 21:26:27 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.989 21:26:27 -- setup/common.sh@32 -- # continue 00:04:06.989 21:26:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.989 21:26:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.989 21:26:27 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.989 21:26:27 -- setup/common.sh@32 -- # continue 00:04:06.989 21:26:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.989 21:26:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.989 21:26:27 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.989 21:26:27 -- setup/common.sh@32 -- # continue 00:04:06.989 21:26:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.989 21:26:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.989 21:26:27 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.989 21:26:27 -- setup/common.sh@32 -- # continue 00:04:06.989 21:26:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.989 21:26:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.989 21:26:27 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.989 21:26:27 -- setup/common.sh@32 -- # continue 00:04:06.989 21:26:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.989 21:26:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.989 21:26:27 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.989 21:26:27 -- setup/common.sh@32 -- # continue 00:04:06.989 21:26:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.989 21:26:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.989 21:26:27 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.989 21:26:27 -- setup/common.sh@32 -- # 
continue 00:04:06.989 21:26:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.989 21:26:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.989 21:26:27 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.989 21:26:27 -- setup/common.sh@32 -- # continue 00:04:06.989 21:26:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.989 21:26:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.989 21:26:27 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.989 21:26:27 -- setup/common.sh@32 -- # continue 00:04:06.989 21:26:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.989 21:26:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.989 21:26:27 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.989 21:26:27 -- setup/common.sh@32 -- # continue 00:04:06.989 21:26:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.989 21:26:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.989 21:26:27 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.989 21:26:27 -- setup/common.sh@32 -- # continue 00:04:06.989 21:26:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.989 21:26:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.989 21:26:27 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.989 21:26:27 -- setup/common.sh@32 -- # continue 00:04:06.989 21:26:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.989 21:26:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.989 21:26:27 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.989 21:26:27 -- setup/common.sh@32 -- # continue 00:04:06.989 21:26:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.989 21:26:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.989 21:26:27 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.989 21:26:27 -- setup/common.sh@32 -- # continue 00:04:06.989 21:26:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.989 21:26:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.989 21:26:27 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.989 21:26:27 -- setup/common.sh@32 -- # continue 00:04:06.989 21:26:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.989 21:26:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.989 21:26:27 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.989 21:26:27 -- setup/common.sh@32 -- # continue 00:04:06.989 21:26:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.989 21:26:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.989 21:26:27 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.989 21:26:27 -- setup/common.sh@32 -- # continue 00:04:06.989 21:26:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.989 21:26:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.989 21:26:27 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.989 21:26:27 -- setup/common.sh@32 -- # continue 00:04:06.989 21:26:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.989 21:26:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.989 21:26:27 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.989 21:26:27 -- setup/common.sh@32 -- # continue 00:04:06.989 21:26:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.989 21:26:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.989 21:26:27 -- setup/common.sh@32 -- # [[ PageTables == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.989 21:26:27 -- setup/common.sh@32 -- # continue 00:04:06.989 21:26:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.989 21:26:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.989 21:26:27 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.989 21:26:27 -- setup/common.sh@32 -- # continue 00:04:06.989 21:26:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.989 21:26:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.989 21:26:27 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.989 21:26:27 -- setup/common.sh@32 -- # continue 00:04:06.989 21:26:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.989 21:26:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.989 21:26:27 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.989 21:26:27 -- setup/common.sh@32 -- # continue 00:04:06.989 21:26:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.989 21:26:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.989 21:26:27 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.989 21:26:27 -- setup/common.sh@32 -- # continue 00:04:06.989 21:26:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.989 21:26:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.989 21:26:27 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.989 21:26:27 -- setup/common.sh@32 -- # continue 00:04:06.989 21:26:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.989 21:26:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.989 21:26:27 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.989 21:26:27 -- setup/common.sh@32 -- # continue 00:04:06.989 21:26:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.989 21:26:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.989 21:26:27 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.989 21:26:27 -- setup/common.sh@32 -- # continue 00:04:06.989 21:26:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.989 21:26:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.989 21:26:27 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.989 21:26:27 -- setup/common.sh@32 -- # continue 00:04:06.989 21:26:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.989 21:26:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.989 21:26:27 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.989 21:26:27 -- setup/common.sh@32 -- # continue 00:04:06.989 21:26:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.989 21:26:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.989 21:26:27 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.989 21:26:27 -- setup/common.sh@32 -- # continue 00:04:06.989 21:26:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.989 21:26:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.989 21:26:27 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.989 21:26:27 -- setup/common.sh@32 -- # continue 00:04:06.989 21:26:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.989 21:26:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.989 21:26:27 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.989 21:26:27 -- setup/common.sh@32 -- # continue 00:04:06.989 21:26:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.989 
21:26:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.989 21:26:27 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.990 21:26:27 -- setup/common.sh@32 -- # continue 00:04:06.990 21:26:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.990 21:26:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.990 21:26:27 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.990 21:26:27 -- setup/common.sh@32 -- # continue 00:04:06.990 21:26:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.990 21:26:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.990 21:26:27 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.990 21:26:27 -- setup/common.sh@32 -- # continue 00:04:06.990 21:26:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.990 21:26:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.990 21:26:27 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.990 21:26:27 -- setup/common.sh@32 -- # continue 00:04:06.990 21:26:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.990 21:26:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.990 21:26:27 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.990 21:26:27 -- setup/common.sh@32 -- # continue 00:04:06.990 21:26:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.990 21:26:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.990 21:26:27 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.990 21:26:27 -- setup/common.sh@32 -- # continue 00:04:06.990 21:26:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.990 21:26:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.990 21:26:27 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.990 21:26:27 -- setup/common.sh@32 -- # continue 00:04:06.990 21:26:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.990 21:26:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.990 21:26:27 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.990 21:26:27 -- setup/common.sh@33 -- # echo 0 00:04:06.990 21:26:27 -- setup/common.sh@33 -- # return 0 00:04:06.990 nr_hugepages=1024 00:04:06.990 resv_hugepages=0 00:04:06.990 surplus_hugepages=0 00:04:06.990 anon_hugepages=0 00:04:06.990 21:26:27 -- setup/hugepages.sh@100 -- # resv=0 00:04:06.990 21:26:27 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:06.990 21:26:27 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:06.990 21:26:27 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:06.990 21:26:27 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:06.990 21:26:27 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:06.990 21:26:27 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:06.990 21:26:27 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:06.990 21:26:27 -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:06.990 21:26:27 -- setup/common.sh@18 -- # local node= 00:04:06.990 21:26:27 -- setup/common.sh@19 -- # local var val 00:04:06.990 21:26:27 -- setup/common.sh@20 -- # local mem_f mem 00:04:06.990 21:26:27 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:06.990 21:26:27 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:06.990 21:26:27 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:06.990 21:26:27 -- setup/common.sh@28 -- # mapfile -t 
mem 00:04:06.990 21:26:27 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:06.990 21:26:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.990 21:26:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.990 21:26:27 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12246324 kB' 'MemFree: 5035904 kB' 'MemAvailable: 9417676 kB' 'Buffers: 35104 kB' 'Cached: 4498796 kB' 'SwapCached: 0 kB' 'Active: 410520 kB' 'Inactive: 4235616 kB' 'Active(anon): 123576 kB' 'Inactive(anon): 0 kB' 'Active(file): 286944 kB' 'Inactive(file): 4235616 kB' 'Unevictable: 28816 kB' 'Mlocked: 27280 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 141388 kB' 'Mapped: 57420 kB' 'Shmem: 2592 kB' 'KReclaimable: 180764 kB' 'Slab: 260884 kB' 'SReclaimable: 180764 kB' 'SUnreclaim: 80120 kB' 'KernelStack: 4944 kB' 'PageTables: 4048 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5074584 kB' 'Committed_AS: 364176 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 20040 kB' 'VmallocChunk: 0 kB' 'Percpu: 6720 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 149356 kB' 'DirectMap2M: 4044800 kB' 'DirectMap1G: 10485760 kB' 00:04:06.990 21:26:27 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.990 21:26:27 -- setup/common.sh@32 -- # continue 00:04:06.990 21:26:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.990 21:26:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.990 21:26:27 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.990 21:26:27 -- setup/common.sh@32 -- # continue 00:04:06.990 21:26:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.990 21:26:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.990 21:26:27 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.990 21:26:27 -- setup/common.sh@32 -- # continue 00:04:06.990 21:26:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.990 21:26:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.990 21:26:27 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.990 21:26:27 -- setup/common.sh@32 -- # continue 00:04:06.990 21:26:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.990 21:26:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.990 21:26:27 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.990 21:26:27 -- setup/common.sh@32 -- # continue 00:04:06.990 21:26:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.990 21:26:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.990 21:26:27 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.990 21:26:27 -- setup/common.sh@32 -- # continue 00:04:06.990 21:26:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.990 21:26:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.990 21:26:27 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.990 21:26:27 -- setup/common.sh@32 -- # continue 00:04:06.990 21:26:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.990 21:26:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.990 21:26:27 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
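
Each get_meminfo pass above starts with mapfile -t mem followed by mem=("${mem[@]#Node +([0-9]) }"), which loads the whole meminfo file into an array and strips any leading "Node <n> " prefix so the per-node sysfs files parse exactly like the global /proc/meminfo. A standalone sketch of that normalization (it assumes extglob, which the +([0-9]) pattern requires):

    shopt -s extglob                  # enables the +([0-9]) pattern below
    mapfile -t mem < /proc/meminfo    # or a /sys/devices/system/node/node*/meminfo file
    # Per-node lines read "Node 0 MemTotal: ...", so dropping the "Node <n> "
    # prefix leaves the same "Key: value" shape as the global file.
    mem=("${mem[@]#Node +([0-9]) }")
    printf '%s\n' "${mem[@]:0:3}"     # first few normalized lines
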
00:04:06.990 21:26:27 -- setup/common.sh@32 -- # continue 00:04:06.990 21:26:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.990 21:26:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.990 21:26:27 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.990 21:26:27 -- setup/common.sh@32 -- # continue 00:04:06.990 21:26:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.990 21:26:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.990 21:26:27 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.990 21:26:27 -- setup/common.sh@32 -- # continue 00:04:06.990 21:26:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.990 21:26:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.990 21:26:27 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.990 21:26:27 -- setup/common.sh@32 -- # continue 00:04:06.990 21:26:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.990 21:26:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.990 21:26:27 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.990 21:26:27 -- setup/common.sh@32 -- # continue 00:04:06.990 21:26:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.990 21:26:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.990 21:26:27 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.990 21:26:27 -- setup/common.sh@32 -- # continue 00:04:06.990 21:26:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.990 21:26:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.990 21:26:27 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.990 21:26:27 -- setup/common.sh@32 -- # continue 00:04:06.990 21:26:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.990 21:26:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.991 21:26:27 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.991 21:26:27 -- setup/common.sh@32 -- # continue 00:04:06.991 21:26:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.991 21:26:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.991 21:26:27 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.991 21:26:27 -- setup/common.sh@32 -- # continue 00:04:06.991 21:26:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.991 21:26:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.991 21:26:27 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.991 21:26:27 -- setup/common.sh@32 -- # continue 00:04:06.991 21:26:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.991 21:26:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.991 21:26:27 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.991 21:26:27 -- setup/common.sh@32 -- # continue 00:04:06.991 21:26:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.991 21:26:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.991 21:26:27 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.991 21:26:27 -- setup/common.sh@32 -- # continue 00:04:06.991 21:26:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.991 21:26:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.991 21:26:27 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.991 21:26:27 -- setup/common.sh@32 -- # continue 00:04:06.991 21:26:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.991 21:26:27 -- setup/common.sh@31 -- # 
read -r var val _ 00:04:06.991 21:26:27 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.991 21:26:27 -- setup/common.sh@32 -- # continue 00:04:06.991 21:26:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.991 21:26:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.991 21:26:27 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.991 21:26:27 -- setup/common.sh@32 -- # continue 00:04:06.991 21:26:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.991 21:26:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.991 21:26:27 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.991 21:26:27 -- setup/common.sh@32 -- # continue 00:04:06.991 21:26:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.991 21:26:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.991 21:26:27 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.991 21:26:27 -- setup/common.sh@32 -- # continue 00:04:06.991 21:26:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.991 21:26:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.991 21:26:27 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.991 21:26:27 -- setup/common.sh@32 -- # continue 00:04:06.991 21:26:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.991 21:26:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.991 21:26:27 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.991 21:26:27 -- setup/common.sh@32 -- # continue 00:04:06.991 21:26:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.991 21:26:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.991 21:26:27 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.991 21:26:27 -- setup/common.sh@32 -- # continue 00:04:06.991 21:26:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.991 21:26:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.991 21:26:27 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.991 21:26:27 -- setup/common.sh@32 -- # continue 00:04:06.991 21:26:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.991 21:26:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.991 21:26:27 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.991 21:26:27 -- setup/common.sh@32 -- # continue 00:04:06.991 21:26:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.991 21:26:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.991 21:26:27 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.991 21:26:27 -- setup/common.sh@32 -- # continue 00:04:06.991 21:26:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.991 21:26:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.991 21:26:27 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.991 21:26:27 -- setup/common.sh@32 -- # continue 00:04:06.991 21:26:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.991 21:26:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.991 21:26:27 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.991 21:26:27 -- setup/common.sh@32 -- # continue 00:04:06.991 21:26:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.991 21:26:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.991 21:26:27 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.991 21:26:27 -- setup/common.sh@32 -- # 
continue 00:04:06.991 21:26:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.991 21:26:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.991 21:26:27 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.991 21:26:27 -- setup/common.sh@32 -- # continue 00:04:06.991 21:26:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.991 21:26:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.991 21:26:27 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.991 21:26:27 -- setup/common.sh@32 -- # continue 00:04:06.991 21:26:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.991 21:26:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.991 21:26:27 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.991 21:26:27 -- setup/common.sh@32 -- # continue 00:04:06.991 21:26:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.991 21:26:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.991 21:26:27 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.991 21:26:27 -- setup/common.sh@32 -- # continue 00:04:06.991 21:26:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.991 21:26:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.991 21:26:27 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.991 21:26:27 -- setup/common.sh@32 -- # continue 00:04:06.991 21:26:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.991 21:26:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.991 21:26:27 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.992 21:26:27 -- setup/common.sh@32 -- # continue 00:04:06.992 21:26:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.992 21:26:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.992 21:26:27 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.992 21:26:27 -- setup/common.sh@32 -- # continue 00:04:06.992 21:26:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.992 21:26:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.992 21:26:27 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.992 21:26:27 -- setup/common.sh@32 -- # continue 00:04:06.992 21:26:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.992 21:26:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.992 21:26:27 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.992 21:26:27 -- setup/common.sh@32 -- # continue 00:04:06.992 21:26:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.992 21:26:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.992 21:26:27 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.992 21:26:27 -- setup/common.sh@32 -- # continue 00:04:06.992 21:26:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.992 21:26:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.992 21:26:27 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.992 21:26:27 -- setup/common.sh@32 -- # continue 00:04:06.992 21:26:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.992 21:26:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.992 21:26:27 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.992 21:26:27 -- setup/common.sh@32 -- # continue 00:04:06.992 21:26:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.992 21:26:27 -- setup/common.sh@31 -- # read -r var 
val _ 00:04:06.992 21:26:27 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.992 21:26:27 -- setup/common.sh@32 -- # continue 00:04:06.992 21:26:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.992 21:26:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.992 21:26:27 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.992 21:26:27 -- setup/common.sh@33 -- # echo 1024 00:04:06.992 21:26:27 -- setup/common.sh@33 -- # return 0 00:04:06.992 21:26:27 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:06.992 21:26:27 -- setup/hugepages.sh@112 -- # get_nodes 00:04:06.992 21:26:27 -- setup/hugepages.sh@27 -- # local node 00:04:06.992 21:26:27 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:06.992 21:26:27 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:06.992 21:26:27 -- setup/hugepages.sh@32 -- # no_nodes=1 00:04:06.992 21:26:27 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:06.992 21:26:27 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:06.992 21:26:27 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:06.992 21:26:27 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:06.992 21:26:27 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:06.992 21:26:27 -- setup/common.sh@18 -- # local node=0 00:04:06.992 21:26:27 -- setup/common.sh@19 -- # local var val 00:04:06.992 21:26:27 -- setup/common.sh@20 -- # local mem_f mem 00:04:06.992 21:26:27 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:06.992 21:26:27 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:06.992 21:26:27 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:06.992 21:26:27 -- setup/common.sh@28 -- # mapfile -t mem 00:04:06.992 21:26:27 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:06.992 21:26:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.992 21:26:27 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12246324 kB' 'MemFree: 5035904 kB' 'MemUsed: 7210420 kB' 'SwapCached: 0 kB' 'Active: 410460 kB' 'Inactive: 4235616 kB' 'Active(anon): 123516 kB' 'Inactive(anon): 0 kB' 'Active(file): 286944 kB' 'Inactive(file): 4235616 kB' 'Unevictable: 28816 kB' 'Mlocked: 27280 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 4533900 kB' 'Mapped: 57420 kB' 'AnonPages: 141284 kB' 'Shmem: 2592 kB' 'KernelStack: 4928 kB' 'PageTables: 3988 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 180764 kB' 'Slab: 260884 kB' 'SReclaimable: 180764 kB' 'SUnreclaim: 80120 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:06.992 21:26:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.992 21:26:27 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.992 21:26:27 -- setup/common.sh@32 -- # continue 00:04:06.992 21:26:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.992 21:26:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.992 21:26:27 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.992 21:26:27 -- setup/common.sh@32 -- # continue 00:04:06.992 21:26:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.992 21:26:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.992 21:26:27 -- 
setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.992 21:26:27 -- setup/common.sh@32 -- # continue 00:04:06.992 21:26:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.992 21:26:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.992 21:26:27 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.992 21:26:27 -- setup/common.sh@32 -- # continue 00:04:06.992 21:26:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.992 21:26:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.992 21:26:27 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.992 21:26:27 -- setup/common.sh@32 -- # continue 00:04:06.992 21:26:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.992 21:26:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.992 21:26:27 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.992 21:26:27 -- setup/common.sh@32 -- # continue 00:04:06.992 21:26:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.992 21:26:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.992 21:26:27 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.992 21:26:27 -- setup/common.sh@32 -- # continue 00:04:06.992 21:26:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.992 21:26:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.992 21:26:27 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.992 21:26:27 -- setup/common.sh@32 -- # continue 00:04:06.992 21:26:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.992 21:26:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.992 21:26:27 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.992 21:26:27 -- setup/common.sh@32 -- # continue 00:04:06.992 21:26:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.992 21:26:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.992 21:26:27 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.992 21:26:27 -- setup/common.sh@32 -- # continue 00:04:06.992 21:26:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.992 21:26:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.992 21:26:27 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.992 21:26:27 -- setup/common.sh@32 -- # continue 00:04:06.992 21:26:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.992 21:26:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.992 21:26:27 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.992 21:26:27 -- setup/common.sh@32 -- # continue 00:04:06.992 21:26:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.992 21:26:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.992 21:26:27 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.992 21:26:27 -- setup/common.sh@32 -- # continue 00:04:06.992 21:26:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.992 21:26:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.993 21:26:27 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.993 21:26:27 -- setup/common.sh@32 -- # continue 00:04:06.993 21:26:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.993 21:26:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.993 21:26:27 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.993 21:26:27 -- setup/common.sh@32 -- # continue 00:04:06.993 21:26:27 -- setup/common.sh@31 -- # IFS=': ' 
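
This HugePages_Surp pass runs with node=0, and the trace shows why the earlier passes probed the odd-looking path /sys/devices/system/node/node/meminfo: with $node empty that test can never succeed, so the code stays on /proc/meminfo, while node=0 switches it to the per-NUMA-node file. A sketch of that source selection, under the same assumptions:

    node=${1-}                        # empty means "whole system"
    mem_f=/proc/meminfo
    if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
        # With $node empty this probes ".../node/meminfo", which never
        # exists; node=0 selects the per-node NUMA counters instead.
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    echo "reading $mem_f"
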
00:04:06.993 21:26:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.993 21:26:27 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.993 21:26:27 -- setup/common.sh@32 -- # continue 00:04:06.993 21:26:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.993 21:26:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.993 21:26:27 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.993 21:26:27 -- setup/common.sh@32 -- # continue 00:04:06.993 21:26:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.993 21:26:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.993 21:26:27 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.993 21:26:27 -- setup/common.sh@32 -- # continue 00:04:06.993 21:26:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.993 21:26:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.993 21:26:27 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.993 21:26:27 -- setup/common.sh@32 -- # continue 00:04:06.993 21:26:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.993 21:26:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.993 21:26:27 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.993 21:26:27 -- setup/common.sh@32 -- # continue 00:04:06.993 21:26:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.993 21:26:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.993 21:26:27 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.993 21:26:27 -- setup/common.sh@32 -- # continue 00:04:06.993 21:26:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.993 21:26:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.993 21:26:27 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.993 21:26:27 -- setup/common.sh@32 -- # continue 00:04:06.993 21:26:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.993 21:26:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.993 21:26:27 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.993 21:26:27 -- setup/common.sh@32 -- # continue 00:04:06.993 21:26:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.993 21:26:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.993 21:26:27 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.993 21:26:27 -- setup/common.sh@32 -- # continue 00:04:06.993 21:26:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.993 21:26:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.993 21:26:27 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.993 21:26:27 -- setup/common.sh@32 -- # continue 00:04:06.993 21:26:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.993 21:26:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.993 21:26:27 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.993 21:26:27 -- setup/common.sh@32 -- # continue 00:04:06.993 21:26:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.993 21:26:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.993 21:26:27 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.993 21:26:27 -- setup/common.sh@32 -- # continue 00:04:06.993 21:26:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.993 21:26:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.993 21:26:27 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.993 21:26:27 -- 
setup/common.sh@32 -- # continue 00:04:06.993 21:26:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.993 21:26:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.993 21:26:27 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.993 21:26:27 -- setup/common.sh@32 -- # continue 00:04:06.993 21:26:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.993 21:26:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.993 21:26:27 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.993 21:26:27 -- setup/common.sh@32 -- # continue 00:04:06.993 21:26:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.993 21:26:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.993 21:26:27 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.993 21:26:27 -- setup/common.sh@32 -- # continue 00:04:06.993 21:26:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.993 21:26:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.993 21:26:27 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.993 21:26:27 -- setup/common.sh@32 -- # continue 00:04:06.993 21:26:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.993 21:26:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.993 21:26:27 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.993 21:26:27 -- setup/common.sh@32 -- # continue 00:04:06.993 21:26:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.993 21:26:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.993 21:26:27 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.993 21:26:27 -- setup/common.sh@32 -- # continue 00:04:06.993 21:26:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.993 21:26:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.993 21:26:27 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.993 21:26:27 -- setup/common.sh@32 -- # continue 00:04:06.993 21:26:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.993 21:26:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.993 21:26:27 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.993 21:26:27 -- setup/common.sh@32 -- # continue 00:04:06.993 21:26:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.993 21:26:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.993 21:26:27 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.993 21:26:27 -- setup/common.sh@33 -- # echo 0 00:04:06.993 21:26:27 -- setup/common.sh@33 -- # return 0 00:04:06.993 21:26:27 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:06.993 21:26:27 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:06.993 21:26:27 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:06.993 21:26:27 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:06.993 21:26:27 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:04:06.993 node0=1024 expecting 1024 00:04:06.993 21:26:27 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:06.993 00:04:06.993 real 0m1.342s 00:04:06.993 user 0m0.509s 00:04:06.993 sys 0m0.864s 00:04:06.993 21:26:27 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:06.993 21:26:27 -- common/autotest_common.sh@10 -- # set +x 00:04:06.993 ************************************ 00:04:06.993 END TEST no_shrink_alloc 00:04:06.993 ************************************ 
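
no_shrink_alloc passes because the hugepage pool balances: anon, surplus, and reserved pages are all 0, HugePages_Total is 1024, and the single NUMA node holds all of them ("node0=1024 expecting 1024"). The balance the script asserts as (( 1024 == nr_hugepages + surp + resv )) can be re-checked by hand with a short sketch like this (not the test's code):

    nr=$(</proc/sys/vm/nr_hugepages)
    surp=$(awk '/^HugePages_Surp:/ {print $2}' /proc/meminfo)
    resv=$(awk '/^HugePages_Rsvd:/ {print $2}' /proc/meminfo)
    total=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)
    # Mirrors the traced condition (( 1024 == nr_hugepages + surp + resv ))
    (( total == nr + surp + resv )) && echo "pool consistent: $total pages"
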
00:04:06.993 21:26:27 -- setup/hugepages.sh@217 -- # clear_hp 00:04:06.993 21:26:27 -- setup/hugepages.sh@37 -- # local node hp 00:04:06.993 21:26:27 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:04:06.993 21:26:27 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:06.993 21:26:27 -- setup/hugepages.sh@41 -- # echo 0 00:04:06.993 21:26:27 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:06.993 21:26:27 -- setup/hugepages.sh@41 -- # echo 0 00:04:06.993 21:26:27 -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:04:06.993 21:26:27 -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:04:06.993 ************************************ 00:04:06.993 END TEST hugepages 00:04:06.993 ************************************ 00:04:06.993 00:04:06.993 real 0m5.723s 00:04:06.993 user 0m2.031s 00:04:06.993 sys 0m3.876s 00:04:06.994 21:26:27 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:06.994 21:26:27 -- common/autotest_common.sh@10 -- # set +x 00:04:07.253 21:26:27 -- setup/test-setup.sh@14 -- # run_test driver /home/vagrant/spdk_repo/spdk/test/setup/driver.sh 00:04:07.253 21:26:27 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:07.253 21:26:27 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:07.253 21:26:27 -- common/autotest_common.sh@10 -- # set +x 00:04:07.253 ************************************ 00:04:07.253 START TEST driver 00:04:07.253 ************************************ 00:04:07.253 21:26:27 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/setup/driver.sh 00:04:07.253 * Looking for test storage... 00:04:07.253 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:04:07.253 21:26:27 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:04:07.253 21:26:27 -- common/autotest_common.sh@1690 -- # lcov --version 00:04:07.253 21:26:27 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:04:07.253 21:26:27 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:04:07.253 21:26:27 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:04:07.253 21:26:27 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:04:07.253 21:26:27 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:04:07.253 21:26:27 -- scripts/common.sh@335 -- # IFS=.-: 00:04:07.253 21:26:27 -- scripts/common.sh@335 -- # read -ra ver1 00:04:07.253 21:26:27 -- scripts/common.sh@336 -- # IFS=.-: 00:04:07.253 21:26:27 -- scripts/common.sh@336 -- # read -ra ver2 00:04:07.253 21:26:27 -- scripts/common.sh@337 -- # local 'op=<' 00:04:07.253 21:26:27 -- scripts/common.sh@339 -- # ver1_l=2 00:04:07.253 21:26:27 -- scripts/common.sh@340 -- # ver2_l=1 00:04:07.253 21:26:27 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:04:07.253 21:26:27 -- scripts/common.sh@343 -- # case "$op" in 00:04:07.253 21:26:27 -- scripts/common.sh@344 -- # : 1 00:04:07.253 21:26:27 -- scripts/common.sh@363 -- # (( v = 0 )) 00:04:07.253 21:26:27 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:07.253 21:26:27 -- scripts/common.sh@364 -- # decimal 1 00:04:07.253 21:26:27 -- scripts/common.sh@352 -- # local d=1 00:04:07.253 21:26:27 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:07.253 21:26:27 -- scripts/common.sh@354 -- # echo 1 00:04:07.253 21:26:27 -- scripts/common.sh@364 -- # ver1[v]=1 00:04:07.253 21:26:27 -- scripts/common.sh@365 -- # decimal 2 00:04:07.253 21:26:27 -- scripts/common.sh@352 -- # local d=2 00:04:07.253 21:26:27 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:07.253 21:26:27 -- scripts/common.sh@354 -- # echo 2 00:04:07.253 21:26:27 -- scripts/common.sh@365 -- # ver2[v]=2 00:04:07.253 21:26:27 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:04:07.253 21:26:27 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:04:07.253 21:26:27 -- scripts/common.sh@367 -- # return 0 00:04:07.253 21:26:27 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:07.253 21:26:27 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:04:07.253 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:07.253 --rc genhtml_branch_coverage=1 00:04:07.253 --rc genhtml_function_coverage=1 00:04:07.253 --rc genhtml_legend=1 00:04:07.253 --rc geninfo_all_blocks=1 00:04:07.253 --rc geninfo_unexecuted_blocks=1 00:04:07.253 00:04:07.253 ' 00:04:07.253 21:26:27 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:04:07.253 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:07.253 --rc genhtml_branch_coverage=1 00:04:07.253 --rc genhtml_function_coverage=1 00:04:07.253 --rc genhtml_legend=1 00:04:07.253 --rc geninfo_all_blocks=1 00:04:07.253 --rc geninfo_unexecuted_blocks=1 00:04:07.253 00:04:07.253 ' 00:04:07.253 21:26:27 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:04:07.253 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:07.253 --rc genhtml_branch_coverage=1 00:04:07.253 --rc genhtml_function_coverage=1 00:04:07.253 --rc genhtml_legend=1 00:04:07.253 --rc geninfo_all_blocks=1 00:04:07.253 --rc geninfo_unexecuted_blocks=1 00:04:07.253 00:04:07.253 ' 00:04:07.253 21:26:27 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:04:07.254 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:07.254 --rc genhtml_branch_coverage=1 00:04:07.254 --rc genhtml_function_coverage=1 00:04:07.254 --rc genhtml_legend=1 00:04:07.254 --rc geninfo_all_blocks=1 00:04:07.254 --rc geninfo_unexecuted_blocks=1 00:04:07.254 00:04:07.254 ' 00:04:07.254 21:26:27 -- setup/driver.sh@68 -- # setup reset 00:04:07.254 21:26:27 -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:07.254 21:26:27 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:07.822 21:26:28 -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:04:07.822 21:26:28 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:07.822 21:26:28 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:07.822 21:26:28 -- common/autotest_common.sh@10 -- # set +x 00:04:07.822 ************************************ 00:04:07.822 START TEST guess_driver 00:04:07.822 ************************************ 00:04:07.822 21:26:28 -- common/autotest_common.sh@1114 -- # guess_driver 00:04:07.822 21:26:28 -- setup/driver.sh@46 -- # local driver setup_driver marker 00:04:07.822 21:26:28 -- setup/driver.sh@47 -- # local fail=0 00:04:07.822 21:26:28 -- setup/driver.sh@49 -- # pick_driver 00:04:07.822 21:26:28 -- setup/driver.sh@36 -- # vfio 
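Before the driver tests run, the harness checks the installed lcov against a floor with the cmp_versions helper traced above: both version strings are split into arrays on ".", "-", and ":", then compared field by field, with missing fields treated as 0. A self-contained sketch of that comparison, written as a rewrite for illustration rather than the exact scripts/common.sh source:

version_lt() {                                # version_lt 1.15 2 -> true (1.15 < 2)
    local -a v1 v2; local i n
    IFS=.-: read -ra v1 <<< "$1"
    IFS=.-: read -ra v2 <<< "$2"
    (( n = ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
    for (( i = 0; i < n; i++ )); do           # absent fields compare as 0
        (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
        (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0
    done
    return 1                                  # equal -> not less-than
}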
00:04:07.822 21:26:28 -- setup/driver.sh@21 -- # local iommu_groups 00:04:07.822 21:26:28 -- setup/driver.sh@22 -- # local unsafe_vfio 00:04:07.822 21:26:28 -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:04:07.822 21:26:28 -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:04:07.822 21:26:28 -- setup/driver.sh@29 -- # (( 0 > 0 )) 00:04:07.822 21:26:28 -- setup/driver.sh@29 -- # [[ '' == Y ]] 00:04:07.822 21:26:28 -- setup/driver.sh@32 -- # return 1 00:04:07.822 21:26:28 -- setup/driver.sh@38 -- # uio 00:04:07.822 21:26:28 -- setup/driver.sh@17 -- # is_driver uio_pci_generic 00:04:07.822 21:26:28 -- setup/driver.sh@14 -- # mod uio_pci_generic 00:04:07.822 21:26:28 -- setup/driver.sh@12 -- # dep uio_pci_generic 00:04:07.822 21:26:28 -- setup/driver.sh@11 -- # modprobe --show-depends uio_pci_generic 00:04:07.822 21:26:28 -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.8.0-36-generic/kernel/drivers/uio/uio.ko.zst 00:04:07.822 insmod /lib/modules/6.8.0-36-generic/kernel/drivers/uio/uio_pci_generic.ko.zst == *\.\k\o* ]] 00:04:07.822 21:26:28 -- setup/driver.sh@39 -- # echo uio_pci_generic 00:04:07.822 21:26:28 -- setup/driver.sh@49 -- # driver=uio_pci_generic 00:04:07.822 21:26:28 -- setup/driver.sh@51 -- # [[ uio_pci_generic == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:04:07.822 Looking for driver=uio_pci_generic 00:04:07.822 21:26:28 -- setup/driver.sh@56 -- # echo 'Looking for driver=uio_pci_generic' 00:04:07.822 21:26:28 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:07.822 21:26:28 -- setup/driver.sh@45 -- # setup output config 00:04:07.822 21:26:28 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:07.822 21:26:28 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:08.081 21:26:28 -- setup/driver.sh@58 -- # [[ devices: == \-\> ]] 00:04:08.081 21:26:28 -- setup/driver.sh@58 -- # continue 00:04:08.081 21:26:28 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:08.340 21:26:28 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:08.340 21:26:28 -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]] 00:04:08.340 21:26:28 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:08.909 21:26:29 -- setup/driver.sh@64 -- # (( fail == 0 )) 00:04:08.909 21:26:29 -- setup/driver.sh@65 -- # setup reset 00:04:08.909 21:26:29 -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:08.909 21:26:29 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:09.479 00:04:09.479 real 0m1.574s 00:04:09.479 user 0m0.318s 00:04:09.479 sys 0m1.293s 00:04:09.479 21:26:29 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:09.479 21:26:29 -- common/autotest_common.sh@10 -- # set +x 00:04:09.479 ************************************ 00:04:09.479 END TEST guess_driver 00:04:09.479 ************************************ 00:04:09.479 00:04:09.479 real 0m2.239s 00:04:09.479 user 0m0.580s 00:04:09.479 sys 0m1.767s 00:04:09.479 21:26:29 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:09.479 21:26:29 -- common/autotest_common.sh@10 -- # set +x 00:04:09.479 ************************************ 00:04:09.479 END TEST driver 00:04:09.479 ************************************ 00:04:09.479 21:26:29 -- setup/test-setup.sh@15 -- # run_test devices /home/vagrant/spdk_repo/spdk/test/setup/devices.sh 00:04:09.479 21:26:29 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:09.479 21:26:29 -- 
common/autotest_common.sh@1093 -- # xtrace_disable 00:04:09.479 21:26:29 -- common/autotest_common.sh@10 -- # set +x 00:04:09.479 ************************************ 00:04:09.479 START TEST devices 00:04:09.479 ************************************ 00:04:09.479 21:26:29 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/setup/devices.sh 00:04:09.479 * Looking for test storage... 00:04:09.479 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:04:09.479 21:26:29 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:04:09.479 21:26:29 -- common/autotest_common.sh@1690 -- # lcov --version 00:04:09.479 21:26:29 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:04:09.479 21:26:29 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:04:09.479 21:26:29 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:04:09.479 21:26:29 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:04:09.479 21:26:29 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:04:09.479 21:26:29 -- scripts/common.sh@335 -- # IFS=.-: 00:04:09.479 21:26:29 -- scripts/common.sh@335 -- # read -ra ver1 00:04:09.479 21:26:29 -- scripts/common.sh@336 -- # IFS=.-: 00:04:09.479 21:26:29 -- scripts/common.sh@336 -- # read -ra ver2 00:04:09.479 21:26:29 -- scripts/common.sh@337 -- # local 'op=<' 00:04:09.479 21:26:29 -- scripts/common.sh@339 -- # ver1_l=2 00:04:09.479 21:26:29 -- scripts/common.sh@340 -- # ver2_l=1 00:04:09.479 21:26:29 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:04:09.479 21:26:29 -- scripts/common.sh@343 -- # case "$op" in 00:04:09.479 21:26:29 -- scripts/common.sh@344 -- # : 1 00:04:09.479 21:26:29 -- scripts/common.sh@363 -- # (( v = 0 )) 00:04:09.479 21:26:29 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:09.479 21:26:29 -- scripts/common.sh@364 -- # decimal 1 00:04:09.479 21:26:29 -- scripts/common.sh@352 -- # local d=1 00:04:09.479 21:26:29 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:09.479 21:26:29 -- scripts/common.sh@354 -- # echo 1 00:04:09.479 21:26:29 -- scripts/common.sh@364 -- # ver1[v]=1 00:04:09.479 21:26:29 -- scripts/common.sh@365 -- # decimal 2 00:04:09.479 21:26:29 -- scripts/common.sh@352 -- # local d=2 00:04:09.479 21:26:29 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:09.479 21:26:29 -- scripts/common.sh@354 -- # echo 2 00:04:09.479 21:26:29 -- scripts/common.sh@365 -- # ver2[v]=2 00:04:09.479 21:26:29 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:04:09.479 21:26:29 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:04:09.479 21:26:29 -- scripts/common.sh@367 -- # return 0 00:04:09.479 21:26:29 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:09.479 21:26:29 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:04:09.479 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:09.479 --rc genhtml_branch_coverage=1 00:04:09.479 --rc genhtml_function_coverage=1 00:04:09.479 --rc genhtml_legend=1 00:04:09.479 --rc geninfo_all_blocks=1 00:04:09.479 --rc geninfo_unexecuted_blocks=1 00:04:09.479 00:04:09.479 ' 00:04:09.479 21:26:29 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:04:09.479 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:09.479 --rc genhtml_branch_coverage=1 00:04:09.479 --rc genhtml_function_coverage=1 00:04:09.479 --rc genhtml_legend=1 00:04:09.479 --rc geninfo_all_blocks=1 00:04:09.479 --rc geninfo_unexecuted_blocks=1 00:04:09.479 00:04:09.479 ' 
00:04:09.479 21:26:29 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:04:09.479 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:09.479 --rc genhtml_branch_coverage=1 00:04:09.479 --rc genhtml_function_coverage=1 00:04:09.479 --rc genhtml_legend=1 00:04:09.479 --rc geninfo_all_blocks=1 00:04:09.479 --rc geninfo_unexecuted_blocks=1 00:04:09.479 00:04:09.479 ' 00:04:09.479 21:26:29 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:04:09.479 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:09.479 --rc genhtml_branch_coverage=1 00:04:09.479 --rc genhtml_function_coverage=1 00:04:09.479 --rc genhtml_legend=1 00:04:09.479 --rc geninfo_all_blocks=1 00:04:09.479 --rc geninfo_unexecuted_blocks=1 00:04:09.479 00:04:09.479 ' 00:04:09.479 21:26:29 -- setup/devices.sh@190 -- # trap cleanup EXIT 00:04:09.479 21:26:29 -- setup/devices.sh@192 -- # setup reset 00:04:09.479 21:26:29 -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:09.479 21:26:29 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:10.051 21:26:30 -- setup/devices.sh@194 -- # get_zoned_devs 00:04:10.051 21:26:30 -- common/autotest_common.sh@1664 -- # zoned_devs=() 00:04:10.051 21:26:30 -- common/autotest_common.sh@1664 -- # local -gA zoned_devs 00:04:10.051 21:26:30 -- common/autotest_common.sh@1665 -- # local nvme bdf 00:04:10.051 21:26:30 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:04:10.051 21:26:30 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme0n1 00:04:10.051 21:26:30 -- common/autotest_common.sh@1657 -- # local device=nvme0n1 00:04:10.051 21:26:30 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:10.051 21:26:30 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:04:10.051 21:26:30 -- setup/devices.sh@196 -- # blocks=() 00:04:10.051 21:26:30 -- setup/devices.sh@196 -- # declare -a blocks 00:04:10.051 21:26:30 -- setup/devices.sh@197 -- # blocks_to_pci=() 00:04:10.051 21:26:30 -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:04:10.051 21:26:30 -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:04:10.051 21:26:30 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:04:10.051 21:26:30 -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:04:10.051 21:26:30 -- setup/devices.sh@201 -- # ctrl=nvme0 00:04:10.051 21:26:30 -- setup/devices.sh@202 -- # pci=0000:00:06.0 00:04:10.051 21:26:30 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\6\.\0* ]] 00:04:10.051 21:26:30 -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:04:10.051 21:26:30 -- scripts/common.sh@380 -- # local block=nvme0n1 pt 00:04:10.051 21:26:30 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:04:10.051 No valid GPT data, bailing 00:04:10.051 21:26:30 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:04:10.051 21:26:30 -- scripts/common.sh@393 -- # pt= 00:04:10.051 21:26:30 -- scripts/common.sh@394 -- # return 1 00:04:10.051 21:26:30 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:04:10.051 21:26:30 -- setup/common.sh@76 -- # local dev=nvme0n1 00:04:10.051 21:26:30 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:04:10.051 21:26:30 -- setup/common.sh@80 -- # echo 5368709120 00:04:10.051 21:26:30 -- setup/devices.sh@204 -- # (( 5368709120 >= min_disk_size )) 00:04:10.051 21:26:30 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:04:10.051 21:26:30 -- setup/devices.sh@206 -- # 
blocks_to_pci["${block##*/}"]=0000:00:06.0 00:04:10.051 21:26:30 -- setup/devices.sh@209 -- # (( 1 > 0 )) 00:04:10.051 21:26:30 -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:04:10.051 21:26:30 -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:04:10.051 21:26:30 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:10.051 21:26:30 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:10.051 21:26:30 -- common/autotest_common.sh@10 -- # set +x 00:04:10.051 ************************************ 00:04:10.051 START TEST nvme_mount 00:04:10.051 ************************************ 00:04:10.051 21:26:30 -- common/autotest_common.sh@1114 -- # nvme_mount 00:04:10.051 21:26:30 -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:04:10.051 21:26:30 -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:04:10.051 21:26:30 -- setup/devices.sh@97 -- # nvme_mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:10.051 21:26:30 -- setup/devices.sh@98 -- # nvme_dummy_test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:10.051 21:26:30 -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:04:10.051 21:26:30 -- setup/common.sh@39 -- # local disk=nvme0n1 00:04:10.051 21:26:30 -- setup/common.sh@40 -- # local part_no=1 00:04:10.051 21:26:30 -- setup/common.sh@41 -- # local size=1073741824 00:04:10.051 21:26:30 -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:04:10.051 21:26:30 -- setup/common.sh@44 -- # parts=() 00:04:10.051 21:26:30 -- setup/common.sh@44 -- # local parts 00:04:10.051 21:26:30 -- setup/common.sh@46 -- # (( part = 1 )) 00:04:10.051 21:26:30 -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:10.051 21:26:30 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:10.051 21:26:30 -- setup/common.sh@46 -- # (( part++ )) 00:04:10.051 21:26:30 -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:10.051 21:26:30 -- setup/common.sh@51 -- # (( size /= 4096 )) 00:04:10.051 21:26:30 -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:04:10.051 21:26:30 -- setup/common.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:04:11.430 Creating new GPT entries in memory. 00:04:11.430 GPT data structures destroyed! You may now partition the disk using fdisk or 00:04:11.430 other utilities. 00:04:11.430 21:26:31 -- setup/common.sh@57 -- # (( part = 1 )) 00:04:11.430 21:26:31 -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:11.430 21:26:31 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:11.430 21:26:31 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:11.430 21:26:31 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:264191 00:04:12.365 Creating new GPT entries in memory. 00:04:12.365 The operation has completed successfully. 
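The nvme_mount test begins by repartitioning the target disk, which is what the sgdisk lines above show: wipe the label, then create partition 1 spanning sectors 2048 through 264191 while holding an exclusive lock on the disk node so concurrent jobs cannot race on it. Roughly, with a plain partprobe standing in for the harness's sync_dev_uevents.sh udev wait:

disk=/dev/nvme0n1
sgdisk "$disk" --zap-all                          # destroy any existing GPT/MBR structures
flock "$disk" sgdisk "$disk" --new=1:2048:264191  # partition 1: 262144 sectors of 512 B, ~128 MiB
partprobe "$disk"                                 # generic stand-in for the udev-uevent sync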
00:04:12.365 21:26:32 -- setup/common.sh@57 -- # (( part++ )) 00:04:12.365 21:26:32 -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:12.365 21:26:32 -- setup/common.sh@62 -- # wait 55300 00:04:12.365 21:26:32 -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:12.365 21:26:32 -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount size= 00:04:12.365 21:26:32 -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:12.365 21:26:32 -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:04:12.365 21:26:32 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:04:12.365 21:26:32 -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:12.365 21:26:32 -- setup/devices.sh@105 -- # verify 0000:00:06.0 nvme0n1:nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:12.365 21:26:32 -- setup/devices.sh@48 -- # local dev=0000:00:06.0 00:04:12.365 21:26:32 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:04:12.365 21:26:32 -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:12.365 21:26:32 -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:12.365 21:26:32 -- setup/devices.sh@53 -- # local found=0 00:04:12.365 21:26:32 -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:12.365 21:26:32 -- setup/devices.sh@56 -- # : 00:04:12.365 21:26:32 -- setup/devices.sh@59 -- # local pci status 00:04:12.365 21:26:32 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:06.0 00:04:12.365 21:26:32 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:12.365 21:26:32 -- setup/devices.sh@47 -- # setup output config 00:04:12.365 21:26:32 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:12.365 21:26:32 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:12.365 21:26:32 -- setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:04:12.365 21:26:32 -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:04:12.365 21:26:32 -- setup/devices.sh@63 -- # found=1 00:04:12.365 21:26:32 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:12.365 21:26:32 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:04:12.365 21:26:32 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:12.624 21:26:32 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:04:12.624 21:26:32 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:13.192 21:26:33 -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:13.192 21:26:33 -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount ]] 00:04:13.192 21:26:33 -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:13.192 21:26:33 -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:13.192 21:26:33 -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:13.192 21:26:33 -- setup/devices.sh@110 -- # cleanup_nvme 00:04:13.192 21:26:33 -- setup/devices.sh@20 -- # mountpoint -q 
/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:13.192 21:26:33 -- setup/devices.sh@21 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:13.192 21:26:33 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:13.192 21:26:33 -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:04:13.192 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:13.192 21:26:33 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:13.192 21:26:33 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:13.451 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:04:13.451 /dev/nvme0n1: 8 bytes were erased at offset 0x13ffff000 (gpt): 45 46 49 20 50 41 52 54 00:04:13.451 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:04:13.451 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:04:13.451 21:26:33 -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 1024M 00:04:13.451 21:26:33 -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount size=1024M 00:04:13.451 21:26:33 -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:13.452 21:26:33 -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:04:13.452 21:26:33 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:04:13.452 21:26:33 -- setup/common.sh@72 -- # mount /dev/nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:13.452 21:26:33 -- setup/devices.sh@116 -- # verify 0000:00:06.0 nvme0n1:nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:13.452 21:26:33 -- setup/devices.sh@48 -- # local dev=0000:00:06.0 00:04:13.452 21:26:33 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:04:13.452 21:26:33 -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:13.452 21:26:33 -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:13.452 21:26:33 -- setup/devices.sh@53 -- # local found=0 00:04:13.452 21:26:33 -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:13.452 21:26:33 -- setup/devices.sh@56 -- # : 00:04:13.452 21:26:33 -- setup/devices.sh@59 -- # local pci status 00:04:13.452 21:26:33 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:13.452 21:26:33 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:06.0 00:04:13.452 21:26:33 -- setup/devices.sh@47 -- # setup output config 00:04:13.452 21:26:33 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:13.452 21:26:33 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:13.711 21:26:34 -- setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:04:13.711 21:26:34 -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:04:13.711 21:26:34 -- setup/devices.sh@63 -- # found=1 00:04:13.711 21:26:34 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:13.711 21:26:34 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:04:13.711 21:26:34 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:13.711 21:26:34 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:04:13.711 21:26:34 -- 
setup/devices.sh@60 -- # read -r pci _ _ status 00:04:14.278 21:26:34 -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:14.278 21:26:34 -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount ]] 00:04:14.278 21:26:34 -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:14.278 21:26:34 -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:14.278 21:26:34 -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:14.278 21:26:34 -- setup/devices.sh@123 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:14.278 21:26:34 -- setup/devices.sh@125 -- # verify 0000:00:06.0 data@nvme0n1 '' '' 00:04:14.278 21:26:34 -- setup/devices.sh@48 -- # local dev=0000:00:06.0 00:04:14.278 21:26:34 -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:04:14.278 21:26:34 -- setup/devices.sh@50 -- # local mount_point= 00:04:14.278 21:26:34 -- setup/devices.sh@51 -- # local test_file= 00:04:14.278 21:26:34 -- setup/devices.sh@53 -- # local found=0 00:04:14.278 21:26:34 -- setup/devices.sh@55 -- # [[ -n '' ]] 00:04:14.278 21:26:34 -- setup/devices.sh@59 -- # local pci status 00:04:14.278 21:26:34 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:14.278 21:26:34 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:06.0 00:04:14.278 21:26:34 -- setup/devices.sh@47 -- # setup output config 00:04:14.278 21:26:34 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:14.278 21:26:34 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:14.536 21:26:35 -- setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:04:14.536 21:26:35 -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:04:14.536 21:26:35 -- setup/devices.sh@63 -- # found=1 00:04:14.536 21:26:35 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:14.536 21:26:35 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:04:14.536 21:26:35 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:14.795 21:26:35 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:04:14.795 21:26:35 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:15.361 21:26:35 -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:15.361 21:26:35 -- setup/devices.sh@68 -- # [[ -n '' ]] 00:04:15.361 21:26:35 -- setup/devices.sh@68 -- # return 0 00:04:15.361 21:26:35 -- setup/devices.sh@128 -- # cleanup_nvme 00:04:15.361 21:26:35 -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:15.361 21:26:35 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:15.361 21:26:35 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:15.361 21:26:35 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:15.361 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:15.361 00:04:15.361 real 0m5.233s 00:04:15.361 user 0m0.527s 00:04:15.361 sys 0m2.476s 00:04:15.361 21:26:35 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:15.361 21:26:35 -- common/autotest_common.sh@10 -- # set +x 00:04:15.361 ************************************ 00:04:15.361 END TEST nvme_mount 00:04:15.361 ************************************ 00:04:15.361 21:26:35 -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:04:15.361 21:26:35 -- common/autotest_common.sh@1087 
-- # '[' 2 -le 1 ']' 00:04:15.361 21:26:35 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:15.361 21:26:35 -- common/autotest_common.sh@10 -- # set +x 00:04:15.361 ************************************ 00:04:15.361 START TEST dm_mount 00:04:15.361 ************************************ 00:04:15.361 21:26:35 -- common/autotest_common.sh@1114 -- # dm_mount 00:04:15.361 21:26:35 -- setup/devices.sh@144 -- # pv=nvme0n1 00:04:15.361 21:26:35 -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:04:15.361 21:26:35 -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:04:15.361 21:26:35 -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:04:15.361 21:26:35 -- setup/common.sh@39 -- # local disk=nvme0n1 00:04:15.361 21:26:35 -- setup/common.sh@40 -- # local part_no=2 00:04:15.361 21:26:35 -- setup/common.sh@41 -- # local size=1073741824 00:04:15.361 21:26:35 -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:04:15.361 21:26:35 -- setup/common.sh@44 -- # parts=() 00:04:15.361 21:26:35 -- setup/common.sh@44 -- # local parts 00:04:15.361 21:26:35 -- setup/common.sh@46 -- # (( part = 1 )) 00:04:15.361 21:26:35 -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:15.361 21:26:35 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:15.361 21:26:35 -- setup/common.sh@46 -- # (( part++ )) 00:04:15.361 21:26:35 -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:15.361 21:26:35 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:15.361 21:26:35 -- setup/common.sh@46 -- # (( part++ )) 00:04:15.361 21:26:35 -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:15.361 21:26:35 -- setup/common.sh@51 -- # (( size /= 4096 )) 00:04:15.362 21:26:35 -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:04:15.362 21:26:35 -- setup/common.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:04:16.738 Creating new GPT entries in memory. 00:04:16.738 GPT data structures destroyed! You may now partition the disk using fdisk or 00:04:16.738 other utilities. 00:04:16.738 21:26:36 -- setup/common.sh@57 -- # (( part = 1 )) 00:04:16.738 21:26:36 -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:16.738 21:26:36 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:16.738 21:26:36 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:16.738 21:26:36 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:264191 00:04:17.675 Creating new GPT entries in memory. 00:04:17.675 The operation has completed successfully. 00:04:17.675 21:26:37 -- setup/common.sh@57 -- # (( part++ )) 00:04:17.675 21:26:37 -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:17.675 21:26:37 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:17.675 21:26:37 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:17.675 21:26:37 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:264192:526335 00:04:18.614 The operation has completed successfully. 
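With the two partitions created, the dm_mount test layers a device-mapper node named nvme_dm_test over nvme0n1p1 and nvme0n1p2 and then resolves /dev/mapper/nvme_dm_test down to dm-0, as the trace below shows. A sketch of that construction, assuming a plain linear concatenation table (the trace does not show the exact table the harness feeds dmsetup):

p1=/dev/nvme0n1p1 p2=/dev/nvme0n1p2
s1=$(blockdev --getsz "$p1")            # partition sizes in 512-byte sectors
s2=$(blockdev --getsz "$p2")
dmsetup create nvme_dm_test <<EOF       # table read from stdin: two stacked linear targets
0 $s1 linear $p1 0
$s1 $s2 linear $p2 0
EOF
readlink -f /dev/mapper/nvme_dm_test    # -> /dev/dm-0, matching the trace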
00:04:18.614 21:26:38 -- setup/common.sh@57 -- # (( part++ )) 00:04:18.614 21:26:38 -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:18.614 21:26:38 -- setup/common.sh@62 -- # wait 55733 00:04:18.614 21:26:38 -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:04:18.614 21:26:38 -- setup/devices.sh@151 -- # dm_mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:18.614 21:26:38 -- setup/devices.sh@152 -- # dm_dummy_test_file=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:04:18.614 21:26:38 -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:04:18.614 21:26:38 -- setup/devices.sh@160 -- # for t in {1..5} 00:04:18.614 21:26:38 -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:18.614 21:26:38 -- setup/devices.sh@161 -- # break 00:04:18.614 21:26:38 -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:18.614 21:26:38 -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:04:18.614 21:26:38 -- setup/devices.sh@165 -- # dm=/dev/dm-0 00:04:18.614 21:26:38 -- setup/devices.sh@166 -- # dm=dm-0 00:04:18.614 21:26:38 -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]] 00:04:18.614 21:26:38 -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-0 ]] 00:04:18.614 21:26:38 -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:18.614 21:26:38 -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount size= 00:04:18.614 21:26:38 -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:18.614 21:26:38 -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:18.614 21:26:38 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:04:18.614 21:26:38 -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:18.614 21:26:38 -- setup/devices.sh@174 -- # verify 0000:00:06.0 nvme0n1:nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:04:18.614 21:26:38 -- setup/devices.sh@48 -- # local dev=0000:00:06.0 00:04:18.614 21:26:38 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:04:18.614 21:26:38 -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:18.614 21:26:38 -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:04:18.614 21:26:38 -- setup/devices.sh@53 -- # local found=0 00:04:18.614 21:26:38 -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm ]] 00:04:18.614 21:26:38 -- setup/devices.sh@56 -- # : 00:04:18.614 21:26:38 -- setup/devices.sh@59 -- # local pci status 00:04:18.614 21:26:38 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:18.614 21:26:38 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:06.0 00:04:18.614 21:26:38 -- setup/devices.sh@47 -- # setup output config 00:04:18.614 21:26:38 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:18.614 21:26:38 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:18.883 21:26:39 -- setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:04:18.883 21:26:39 -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ 
*\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:04:18.883 21:26:39 -- setup/devices.sh@63 -- # found=1 00:04:18.883 21:26:39 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:18.883 21:26:39 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:04:18.883 21:26:39 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:18.883 21:26:39 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:04:18.883 21:26:39 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:19.451 21:26:39 -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:19.451 21:26:39 -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/dm_mount ]] 00:04:19.451 21:26:39 -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:19.451 21:26:39 -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm ]] 00:04:19.451 21:26:39 -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:04:19.451 21:26:39 -- setup/devices.sh@182 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:19.451 21:26:39 -- setup/devices.sh@184 -- # verify 0000:00:06.0 holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 '' '' 00:04:19.451 21:26:39 -- setup/devices.sh@48 -- # local dev=0000:00:06.0 00:04:19.451 21:26:39 -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 00:04:19.451 21:26:39 -- setup/devices.sh@50 -- # local mount_point= 00:04:19.451 21:26:39 -- setup/devices.sh@51 -- # local test_file= 00:04:19.451 21:26:39 -- setup/devices.sh@53 -- # local found=0 00:04:19.451 21:26:39 -- setup/devices.sh@55 -- # [[ -n '' ]] 00:04:19.451 21:26:39 -- setup/devices.sh@59 -- # local pci status 00:04:19.451 21:26:39 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:19.451 21:26:39 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:06.0 00:04:19.451 21:26:39 -- setup/devices.sh@47 -- # setup output config 00:04:19.451 21:26:39 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:19.451 21:26:39 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:19.708 21:26:40 -- setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:04:19.708 21:26:40 -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\0* ]] 00:04:19.708 21:26:40 -- setup/devices.sh@63 -- # found=1 00:04:19.708 21:26:40 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:19.708 21:26:40 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:04:19.708 21:26:40 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:19.967 21:26:40 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:04:19.967 21:26:40 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:20.536 21:26:40 -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:20.536 21:26:40 -- setup/devices.sh@68 -- # [[ -n '' ]] 00:04:20.536 21:26:40 -- setup/devices.sh@68 -- # return 0 00:04:20.536 21:26:40 -- setup/devices.sh@187 -- # cleanup_dm 00:04:20.536 21:26:40 -- setup/devices.sh@33 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:20.536 21:26:40 -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:04:20.536 21:26:40 -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:04:20.536 21:26:40 -- 
setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:20.536 21:26:40 -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:04:20.536 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:20.536 21:26:40 -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:04:20.536 21:26:40 -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:04:20.536 00:04:20.536 real 0m5.055s 00:04:20.536 user 0m0.312s 00:04:20.536 sys 0m1.685s 00:04:20.536 21:26:40 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:20.536 21:26:40 -- common/autotest_common.sh@10 -- # set +x 00:04:20.536 ************************************ 00:04:20.536 END TEST dm_mount 00:04:20.536 ************************************ 00:04:20.536 21:26:40 -- setup/devices.sh@1 -- # cleanup 00:04:20.536 21:26:40 -- setup/devices.sh@11 -- # cleanup_nvme 00:04:20.536 21:26:40 -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:20.536 21:26:40 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:20.536 21:26:40 -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:04:20.536 21:26:40 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:20.536 21:26:40 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:20.794 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:04:20.794 /dev/nvme0n1: 8 bytes were erased at offset 0x13ffff000 (gpt): 45 46 49 20 50 41 52 54 00:04:20.794 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:04:20.794 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:04:20.794 21:26:41 -- setup/devices.sh@12 -- # cleanup_dm 00:04:20.794 21:26:41 -- setup/devices.sh@33 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:20.794 21:26:41 -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:04:20.794 21:26:41 -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:20.794 21:26:41 -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:04:20.794 21:26:41 -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:04:20.794 21:26:41 -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:04:20.794 00:04:20.794 real 0m11.373s 00:04:20.794 user 0m1.216s 00:04:20.794 sys 0m4.640s 00:04:20.794 21:26:41 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:20.794 21:26:41 -- common/autotest_common.sh@10 -- # set +x 00:04:20.794 ************************************ 00:04:20.794 END TEST devices 00:04:20.794 ************************************ 00:04:20.794 ************************************ 00:04:20.794 END TEST setup.sh 00:04:20.794 ************************************ 00:04:20.794 00:04:20.794 real 0m23.737s 00:04:20.794 user 0m5.156s 00:04:20.794 sys 0m13.523s 00:04:20.794 21:26:41 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:20.794 21:26:41 -- common/autotest_common.sh@10 -- # set +x 00:04:20.794 21:26:41 -- spdk/autotest.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:04:21.053 Hugepages 00:04:21.053 node hugesize free / total 00:04:21.053 node0 1048576kB 0 / 0 00:04:21.053 node0 2048kB 2048 / 2048 00:04:21.053 00:04:21.053 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:21.053 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:04:21.313 NVMe 0000:00:06.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:04:21.313 21:26:41 -- spdk/autotest.sh@128 -- # uname -s 00:04:21.313 21:26:41 -- spdk/autotest.sh@128 -- # [[ Linux == Linux ]] 00:04:21.313 21:26:41 -- spdk/autotest.sh@130 -- # 
nvme_namespace_revert 00:04:21.313 21:26:41 -- common/autotest_common.sh@1526 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:21.572 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15,mount@vda:vda16, so not binding PCI dev 00:04:21.831 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:04:22.398 21:26:42 -- common/autotest_common.sh@1527 -- # sleep 1 00:04:23.333 21:26:43 -- common/autotest_common.sh@1528 -- # bdfs=() 00:04:23.333 21:26:43 -- common/autotest_common.sh@1528 -- # local bdfs 00:04:23.333 21:26:43 -- common/autotest_common.sh@1529 -- # bdfs=($(get_nvme_bdfs)) 00:04:23.333 21:26:43 -- common/autotest_common.sh@1529 -- # get_nvme_bdfs 00:04:23.333 21:26:43 -- common/autotest_common.sh@1508 -- # bdfs=() 00:04:23.333 21:26:43 -- common/autotest_common.sh@1508 -- # local bdfs 00:04:23.333 21:26:43 -- common/autotest_common.sh@1509 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:23.333 21:26:43 -- common/autotest_common.sh@1509 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:04:23.333 21:26:43 -- common/autotest_common.sh@1509 -- # jq -r '.config[].params.traddr' 00:04:23.333 21:26:43 -- common/autotest_common.sh@1510 -- # (( 1 == 0 )) 00:04:23.333 21:26:43 -- common/autotest_common.sh@1514 -- # printf '%s\n' 0000:00:06.0 00:04:23.333 21:26:43 -- common/autotest_common.sh@1531 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:23.592 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15,mount@vda:vda16, so not binding PCI dev 00:04:23.592 Waiting for block devices as requested 00:04:23.852 0000:00:06.0 (1b36 0010): uio_pci_generic -> nvme 00:04:23.852 21:26:44 -- common/autotest_common.sh@1533 -- # for bdf in "${bdfs[@]}" 00:04:23.852 21:26:44 -- common/autotest_common.sh@1534 -- # get_nvme_ctrlr_from_bdf 0000:00:06.0 00:04:23.852 21:26:44 -- common/autotest_common.sh@1497 -- # readlink -f /sys/class/nvme/nvme0 00:04:23.852 21:26:44 -- common/autotest_common.sh@1497 -- # grep 0000:00:06.0/nvme/nvme 00:04:23.852 21:26:44 -- common/autotest_common.sh@1497 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:06.0/nvme/nvme0 00:04:23.852 21:26:44 -- common/autotest_common.sh@1498 -- # [[ -z /sys/devices/pci0000:00/0000:00:06.0/nvme/nvme0 ]] 00:04:23.852 21:26:44 -- common/autotest_common.sh@1502 -- # basename /sys/devices/pci0000:00/0000:00:06.0/nvme/nvme0 00:04:23.852 21:26:44 -- common/autotest_common.sh@1502 -- # printf '%s\n' nvme0 00:04:23.852 21:26:44 -- common/autotest_common.sh@1534 -- # nvme_ctrlr=/dev/nvme0 00:04:23.852 21:26:44 -- common/autotest_common.sh@1535 -- # [[ -z /dev/nvme0 ]] 00:04:23.852 21:26:44 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0 00:04:23.852 21:26:44 -- common/autotest_common.sh@1540 -- # grep oacs 00:04:23.852 21:26:44 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:04:23.852 21:26:44 -- common/autotest_common.sh@1540 -- # oacs=' 0x12a' 00:04:23.852 21:26:44 -- common/autotest_common.sh@1541 -- # oacs_ns_manage=8 00:04:23.852 21:26:44 -- common/autotest_common.sh@1543 -- # [[ 8 -ne 0 ]] 00:04:23.852 21:26:44 -- common/autotest_common.sh@1549 -- # nvme id-ctrl /dev/nvme0 00:04:23.852 21:26:44 -- common/autotest_common.sh@1549 -- # grep unvmcap 00:04:23.852 21:26:44 -- common/autotest_common.sh@1549 -- # cut -d: -f2 00:04:23.852 21:26:44 -- common/autotest_common.sh@1549 -- # unvmcap=' 0' 00:04:23.852 21:26:44 -- common/autotest_common.sh@1550 -- # [[ 0 -eq 0 ]] 00:04:23.852 21:26:44 -- common/autotest_common.sh@1552 -- # 
continue 00:04:23.852 21:26:44 -- spdk/autotest.sh@133 -- # timing_exit pre_cleanup 00:04:23.852 21:26:44 -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:23.852 21:26:44 -- common/autotest_common.sh@10 -- # set +x 00:04:23.852 21:26:44 -- spdk/autotest.sh@136 -- # timing_enter afterboot 00:04:23.852 21:26:44 -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:23.852 21:26:44 -- common/autotest_common.sh@10 -- # set +x 00:04:23.852 21:26:44 -- spdk/autotest.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:24.419 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15,mount@vda:vda16, so not binding PCI dev 00:04:24.419 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:04:25.002 21:26:45 -- spdk/autotest.sh@138 -- # timing_exit afterboot 00:04:25.002 21:26:45 -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:25.002 21:26:45 -- common/autotest_common.sh@10 -- # set +x 00:04:25.002 21:26:45 -- spdk/autotest.sh@142 -- # opal_revert_cleanup 00:04:25.002 21:26:45 -- common/autotest_common.sh@1586 -- # mapfile -t bdfs 00:04:25.002 21:26:45 -- common/autotest_common.sh@1586 -- # get_nvme_bdfs_by_id 0x0a54 00:04:25.002 21:26:45 -- common/autotest_common.sh@1572 -- # bdfs=() 00:04:25.002 21:26:45 -- common/autotest_common.sh@1572 -- # local bdfs 00:04:25.002 21:26:45 -- common/autotest_common.sh@1574 -- # get_nvme_bdfs 00:04:25.002 21:26:45 -- common/autotest_common.sh@1508 -- # bdfs=() 00:04:25.002 21:26:45 -- common/autotest_common.sh@1508 -- # local bdfs 00:04:25.002 21:26:45 -- common/autotest_common.sh@1509 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:25.002 21:26:45 -- common/autotest_common.sh@1509 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:04:25.002 21:26:45 -- common/autotest_common.sh@1509 -- # jq -r '.config[].params.traddr' 00:04:25.002 21:26:45 -- common/autotest_common.sh@1510 -- # (( 1 == 0 )) 00:04:25.002 21:26:45 -- common/autotest_common.sh@1514 -- # printf '%s\n' 0000:00:06.0 00:04:25.002 21:26:45 -- common/autotest_common.sh@1574 -- # for bdf in $(get_nvme_bdfs) 00:04:25.002 21:26:45 -- common/autotest_common.sh@1575 -- # cat /sys/bus/pci/devices/0000:00:06.0/device 00:04:25.002 21:26:45 -- common/autotest_common.sh@1575 -- # device=0x0010 00:04:25.002 21:26:45 -- common/autotest_common.sh@1576 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:04:25.002 21:26:45 -- common/autotest_common.sh@1581 -- # printf '%s\n' 00:04:25.002 21:26:45 -- common/autotest_common.sh@1587 -- # [[ -z '' ]] 00:04:25.002 21:26:45 -- common/autotest_common.sh@1588 -- # return 0 00:04:25.002 21:26:45 -- spdk/autotest.sh@148 -- # '[' 1 -eq 1 ']' 00:04:25.002 21:26:45 -- spdk/autotest.sh@149 -- # run_test unittest /home/vagrant/spdk_repo/spdk/test/unit/unittest.sh 00:04:25.002 21:26:45 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:25.002 21:26:45 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:25.002 21:26:45 -- common/autotest_common.sh@10 -- # set +x 00:04:25.002 ************************************ 00:04:25.002 START TEST unittest 00:04:25.002 ************************************ 00:04:25.002 21:26:45 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/unit/unittest.sh 00:04:25.002 +++ dirname /home/vagrant/spdk_repo/spdk/test/unit/unittest.sh 00:04:25.002 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/unit 00:04:25.002 + testdir=/home/vagrant/spdk_repo/spdk/test/unit 00:04:25.002 +++ dirname /home/vagrant/spdk_repo/spdk/test/unit/unittest.sh 
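get_nvme_bdfs, traced above, discovers controllers by asking scripts/gen_nvme.sh for an SPDK JSON config and extracting every transport address with jq; opal_revert_cleanup then reads each PCI device ID from sysfs and only reverts 0x0a54 parts, so the 0x0010 QEMU device here is skipped. The same discovery and filter in isolation, using the paths the trace shows:

rootdir=/home/vagrant/spdk_repo/spdk
mapfile -t bdfs < <("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')
printf '%s\n' "${bdfs[@]}"                             # e.g. 0000:00:06.0
for bdf in "${bdfs[@]}"; do
    device=$(cat "/sys/bus/pci/devices/$bdf/device")   # PCI device ID, 0x0010 in this run
    [[ $device == 0x0a54 ]] && echo "$bdf"             # only 0x0a54 devices get the OPAL revert
done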
00:04:25.002 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/unit/../.. 00:04:25.002 + rootdir=/home/vagrant/spdk_repo/spdk 00:04:25.002 + source /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh 00:04:25.002 ++ rpc_py=rpc_cmd 00:04:25.002 ++ set -e 00:04:25.002 ++ shopt -s nullglob 00:04:25.002 ++ shopt -s extglob 00:04:25.002 ++ [[ -e /home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]] 00:04:25.002 ++ source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh 00:04:25.002 +++ CONFIG_WPDK_DIR= 00:04:25.002 +++ CONFIG_ASAN=y 00:04:25.002 +++ CONFIG_VBDEV_COMPRESS=n 00:04:25.002 +++ CONFIG_HAVE_EXECINFO_H=y 00:04:25.002 +++ CONFIG_USDT=n 00:04:25.002 +++ CONFIG_CUSTOMOCF=n 00:04:25.002 +++ CONFIG_PREFIX=/usr/local 00:04:25.002 +++ CONFIG_RBD=n 00:04:25.002 +++ CONFIG_LIBDIR= 00:04:25.002 +++ CONFIG_IDXD=y 00:04:25.002 +++ CONFIG_NVME_CUSE=y 00:04:25.002 +++ CONFIG_SMA=n 00:04:25.002 +++ CONFIG_VTUNE=n 00:04:25.002 +++ CONFIG_TSAN=n 00:04:25.002 +++ CONFIG_RDMA_SEND_WITH_INVAL=y 00:04:25.002 +++ CONFIG_VFIO_USER_DIR= 00:04:25.002 +++ CONFIG_PGO_CAPTURE=n 00:04:25.002 +++ CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:04:25.002 +++ CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:04:25.002 +++ CONFIG_LTO=n 00:04:25.002 +++ CONFIG_ISCSI_INITIATOR=y 00:04:25.002 +++ CONFIG_CET=n 00:04:25.002 +++ CONFIG_VBDEV_COMPRESS_MLX5=n 00:04:25.002 +++ CONFIG_OCF_PATH= 00:04:25.002 +++ CONFIG_RDMA_SET_TOS=y 00:04:25.002 +++ CONFIG_HAVE_ARC4RANDOM=y 00:04:25.002 +++ CONFIG_HAVE_LIBARCHIVE=n 00:04:25.002 +++ CONFIG_UBLK=y 00:04:25.002 +++ CONFIG_ISAL_CRYPTO=y 00:04:25.002 +++ CONFIG_OPENSSL_PATH= 00:04:25.002 +++ CONFIG_OCF=n 00:04:25.002 +++ CONFIG_FUSE=n 00:04:25.002 +++ CONFIG_VTUNE_DIR= 00:04:25.002 +++ CONFIG_FUZZER_LIB= 00:04:25.002 +++ CONFIG_FUZZER=n 00:04:25.002 +++ CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build 00:04:25.002 +++ CONFIG_CRYPTO=n 00:04:25.002 +++ CONFIG_PGO_USE=n 00:04:25.002 +++ CONFIG_VHOST=y 00:04:25.002 +++ CONFIG_DAOS=n 00:04:25.002 +++ CONFIG_DPDK_INC_DIR= 00:04:25.002 +++ CONFIG_DAOS_DIR= 00:04:25.002 +++ CONFIG_UNIT_TESTS=y 00:04:25.002 +++ CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:04:25.002 +++ CONFIG_VIRTIO=y 00:04:25.002 +++ CONFIG_COVERAGE=y 00:04:25.002 +++ CONFIG_RDMA=y 00:04:25.002 +++ CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:04:25.002 +++ CONFIG_URING_PATH= 00:04:25.002 +++ CONFIG_XNVME=n 00:04:25.002 +++ CONFIG_VFIO_USER=n 00:04:25.002 +++ CONFIG_ARCH=native 00:04:25.002 +++ CONFIG_URING_ZNS=n 00:04:25.002 +++ CONFIG_WERROR=y 00:04:25.002 +++ CONFIG_HAVE_LIBBSD=n 00:04:25.002 +++ CONFIG_UBSAN=y 00:04:25.002 +++ CONFIG_IPSEC_MB_DIR= 00:04:25.002 +++ CONFIG_GOLANG=n 00:04:25.002 +++ CONFIG_ISAL=y 00:04:25.002 +++ CONFIG_IDXD_KERNEL=y 00:04:25.002 +++ CONFIG_DPDK_LIB_DIR= 00:04:25.002 +++ CONFIG_RDMA_PROV=verbs 00:04:25.002 +++ CONFIG_APPS=y 00:04:25.002 +++ CONFIG_SHARED=n 00:04:25.002 +++ CONFIG_FC_PATH= 00:04:25.002 +++ CONFIG_DPDK_PKG_CONFIG=n 00:04:25.002 +++ CONFIG_FC=n 00:04:25.002 +++ CONFIG_AVAHI=n 00:04:25.002 +++ CONFIG_FIO_PLUGIN=y 00:04:25.002 +++ CONFIG_RAID5F=y 00:04:25.002 +++ CONFIG_EXAMPLES=y 00:04:25.002 +++ CONFIG_TESTS=y 00:04:25.002 +++ CONFIG_CRYPTO_MLX5=n 00:04:25.002 +++ CONFIG_MAX_LCORES= 00:04:25.002 +++ CONFIG_IPSEC_MB=n 00:04:25.002 +++ CONFIG_DEBUG=y 00:04:25.002 +++ CONFIG_DPDK_COMPRESSDEV=n 00:04:25.002 +++ CONFIG_CROSS_PREFIX= 00:04:25.002 +++ CONFIG_URING=n 00:04:25.002 ++ source /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:04:25.002 +++++ dirname /home/vagrant/spdk_repo/spdk/test/common/applications.sh 
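unittest.sh opens with the stock self-location preamble visible in the xtrace above: take dirname of the script, canonicalize it with readlink -f to get testdir, walk up ../.. for rootdir, then source autotest_common.sh; applications.sh repeats the same idiom as it is sourced. Reduced to its generic form:

testdir=$(readlink -f "$(dirname "$0")")   # absolute directory of the running script
rootdir=$(readlink -f "$testdir/../..")    # repository root, two levels up
source "$rootdir/test/common/autotest_common.sh"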
00:04:25.002 ++++ readlink -f /home/vagrant/spdk_repo/spdk/test/common 00:04:25.002 +++ _root=/home/vagrant/spdk_repo/spdk/test/common 00:04:25.003 +++ _root=/home/vagrant/spdk_repo/spdk 00:04:25.003 +++ _app_dir=/home/vagrant/spdk_repo/spdk/build/bin 00:04:25.003 +++ _test_app_dir=/home/vagrant/spdk_repo/spdk/test/app 00:04:25.003 +++ _examples_dir=/home/vagrant/spdk_repo/spdk/build/examples 00:04:25.003 +++ VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:04:25.003 +++ ISCSI_APP=("$_app_dir/iscsi_tgt") 00:04:25.003 +++ NVMF_APP=("$_app_dir/nvmf_tgt") 00:04:25.003 +++ VHOST_APP=("$_app_dir/vhost") 00:04:25.003 +++ DD_APP=("$_app_dir/spdk_dd") 00:04:25.003 +++ SPDK_APP=("$_app_dir/spdk_tgt") 00:04:25.003 +++ [[ -e /home/vagrant/spdk_repo/spdk/include/spdk/config.h ]] 00:04:25.003 +++ [[ #ifndef SPDK_CONFIG_H 00:04:25.003 #define SPDK_CONFIG_H 00:04:25.003 #define SPDK_CONFIG_APPS 1 00:04:25.003 #define SPDK_CONFIG_ARCH native 00:04:25.003 #define SPDK_CONFIG_ASAN 1 00:04:25.003 #undef SPDK_CONFIG_AVAHI 00:04:25.003 #undef SPDK_CONFIG_CET 00:04:25.003 #define SPDK_CONFIG_COVERAGE 1 00:04:25.003 #define SPDK_CONFIG_CROSS_PREFIX 00:04:25.003 #undef SPDK_CONFIG_CRYPTO 00:04:25.003 #undef SPDK_CONFIG_CRYPTO_MLX5 00:04:25.003 #undef SPDK_CONFIG_CUSTOMOCF 00:04:25.003 #undef SPDK_CONFIG_DAOS 00:04:25.003 #define SPDK_CONFIG_DAOS_DIR 00:04:25.003 #define SPDK_CONFIG_DEBUG 1 00:04:25.003 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:04:25.003 #define SPDK_CONFIG_DPDK_DIR /home/vagrant/spdk_repo/spdk/dpdk/build 00:04:25.003 #define SPDK_CONFIG_DPDK_INC_DIR 00:04:25.003 #define SPDK_CONFIG_DPDK_LIB_DIR 00:04:25.003 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:04:25.003 #define SPDK_CONFIG_ENV /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:04:25.003 #define SPDK_CONFIG_EXAMPLES 1 00:04:25.003 #undef SPDK_CONFIG_FC 00:04:25.003 #define SPDK_CONFIG_FC_PATH 00:04:25.003 #define SPDK_CONFIG_FIO_PLUGIN 1 00:04:25.003 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:04:25.003 #undef SPDK_CONFIG_FUSE 00:04:25.003 #undef SPDK_CONFIG_FUZZER 00:04:25.003 #define SPDK_CONFIG_FUZZER_LIB 00:04:25.003 #undef SPDK_CONFIG_GOLANG 00:04:25.003 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:04:25.003 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:04:25.003 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:04:25.003 #undef SPDK_CONFIG_HAVE_LIBBSD 00:04:25.003 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:04:25.003 #define SPDK_CONFIG_IDXD 1 00:04:25.003 #define SPDK_CONFIG_IDXD_KERNEL 1 00:04:25.003 #undef SPDK_CONFIG_IPSEC_MB 00:04:25.003 #define SPDK_CONFIG_IPSEC_MB_DIR 00:04:25.003 #define SPDK_CONFIG_ISAL 1 00:04:25.003 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:04:25.003 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:04:25.003 #define SPDK_CONFIG_LIBDIR 00:04:25.003 #undef SPDK_CONFIG_LTO 00:04:25.003 #define SPDK_CONFIG_MAX_LCORES 00:04:25.003 #define SPDK_CONFIG_NVME_CUSE 1 00:04:25.003 #undef SPDK_CONFIG_OCF 00:04:25.003 #define SPDK_CONFIG_OCF_PATH 00:04:25.003 #define SPDK_CONFIG_OPENSSL_PATH 00:04:25.003 #undef SPDK_CONFIG_PGO_CAPTURE 00:04:25.003 #undef SPDK_CONFIG_PGO_USE 00:04:25.003 #define SPDK_CONFIG_PREFIX /usr/local 00:04:25.003 #define SPDK_CONFIG_RAID5F 1 00:04:25.003 #undef SPDK_CONFIG_RBD 00:04:25.003 #define SPDK_CONFIG_RDMA 1 00:04:25.003 #define SPDK_CONFIG_RDMA_PROV verbs 00:04:25.003 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:04:25.003 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:04:25.003 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:04:25.003 #undef SPDK_CONFIG_SHARED 00:04:25.003 #undef SPDK_CONFIG_SMA 00:04:25.003 #define 
SPDK_CONFIG_TESTS 1 00:04:25.003 #undef SPDK_CONFIG_TSAN 00:04:25.003 #define SPDK_CONFIG_UBLK 1 00:04:25.003 #define SPDK_CONFIG_UBSAN 1 00:04:25.003 #define SPDK_CONFIG_UNIT_TESTS 1 00:04:25.003 #undef SPDK_CONFIG_URING 00:04:25.003 #define SPDK_CONFIG_URING_PATH 00:04:25.003 #undef SPDK_CONFIG_URING_ZNS 00:04:25.003 #undef SPDK_CONFIG_USDT 00:04:25.003 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:04:25.003 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:04:25.003 #undef SPDK_CONFIG_VFIO_USER 00:04:25.003 #define SPDK_CONFIG_VFIO_USER_DIR 00:04:25.003 #define SPDK_CONFIG_VHOST 1 00:04:25.003 #define SPDK_CONFIG_VIRTIO 1 00:04:25.003 #undef SPDK_CONFIG_VTUNE 00:04:25.003 #define SPDK_CONFIG_VTUNE_DIR 00:04:25.003 #define SPDK_CONFIG_WERROR 1 00:04:25.003 #define SPDK_CONFIG_WPDK_DIR 00:04:25.003 #undef SPDK_CONFIG_XNVME 00:04:25.003 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:04:25.003 +++ (( SPDK_AUTOTEST_DEBUG_APPS )) 00:04:25.003 ++ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:04:25.003 +++ [[ -e /bin/wpdk_common.sh ]] 00:04:25.003 +++ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:25.003 +++ source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:25.003 ++++ PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:04:25.003 ++++ PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:04:25.003 ++++ PATH=/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:04:25.003 ++++ PATH=/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:04:25.003 ++++ export PATH 00:04:25.003 ++++ echo /opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:04:25.003 ++ source /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:04:25.003 +++++ dirname /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:04:25.003 ++++ readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:04:25.003 +++ 
_pmdir=/home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:04:25.003 ++++ readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm/../../../ 00:04:25.003 +++ _pmrootdir=/home/vagrant/spdk_repo/spdk 00:04:25.003 +++ TEST_TAG=N/A 00:04:25.003 +++ TEST_TAG_FILE=/home/vagrant/spdk_repo/spdk/.run_test_name 00:04:25.003 ++ : 1 00:04:25.003 ++ export RUN_NIGHTLY 00:04:25.003 ++ : 0 00:04:25.003 ++ export SPDK_AUTOTEST_DEBUG_APPS 00:04:25.003 ++ : 0 00:04:25.003 ++ export SPDK_RUN_VALGRIND 00:04:25.003 ++ : 1 00:04:25.003 ++ export SPDK_RUN_FUNCTIONAL_TEST 00:04:25.003 ++ : 1 00:04:25.003 ++ export SPDK_TEST_UNITTEST 00:04:25.003 ++ : 00:04:25.003 ++ export SPDK_TEST_AUTOBUILD 00:04:25.003 ++ : 0 00:04:25.003 ++ export SPDK_TEST_RELEASE_BUILD 00:04:25.003 ++ : 0 00:04:25.003 ++ export SPDK_TEST_ISAL 00:04:25.003 ++ : 0 00:04:25.003 ++ export SPDK_TEST_ISCSI 00:04:25.003 ++ : 0 00:04:25.003 ++ export SPDK_TEST_ISCSI_INITIATOR 00:04:25.003 ++ : 1 00:04:25.003 ++ export SPDK_TEST_NVME 00:04:25.003 ++ : 0 00:04:25.003 ++ export SPDK_TEST_NVME_PMR 00:04:25.003 ++ : 0 00:04:25.003 ++ export SPDK_TEST_NVME_BP 00:04:25.003 ++ : 0 00:04:25.003 ++ export SPDK_TEST_NVME_CLI 00:04:25.003 ++ : 0 00:04:25.003 ++ export SPDK_TEST_NVME_CUSE 00:04:25.003 ++ : 0 00:04:25.003 ++ export SPDK_TEST_NVME_FDP 00:04:25.003 ++ : 0 00:04:25.003 ++ export SPDK_TEST_NVMF 00:04:25.003 ++ : 0 00:04:25.003 ++ export SPDK_TEST_VFIOUSER 00:04:25.003 ++ : 0 00:04:25.003 ++ export SPDK_TEST_VFIOUSER_QEMU 00:04:25.003 ++ : 0 00:04:25.003 ++ export SPDK_TEST_FUZZER 00:04:25.003 ++ : 0 00:04:25.003 ++ export SPDK_TEST_FUZZER_SHORT 00:04:25.003 ++ : rdma 00:04:25.003 ++ export SPDK_TEST_NVMF_TRANSPORT 00:04:25.003 ++ : 0 00:04:25.003 ++ export SPDK_TEST_RBD 00:04:25.003 ++ : 0 00:04:25.003 ++ export SPDK_TEST_VHOST 00:04:25.003 ++ : 1 00:04:25.003 ++ export SPDK_TEST_BLOCKDEV 00:04:25.003 ++ : 0 00:04:25.003 ++ export SPDK_TEST_IOAT 00:04:25.003 ++ : 0 00:04:25.003 ++ export SPDK_TEST_BLOBFS 00:04:25.003 ++ : 0 00:04:25.003 ++ export SPDK_TEST_VHOST_INIT 00:04:25.003 ++ : 0 00:04:25.003 ++ export SPDK_TEST_LVOL 00:04:25.003 ++ : 0 00:04:25.003 ++ export SPDK_TEST_VBDEV_COMPRESS 00:04:25.003 ++ : 1 00:04:25.003 ++ export SPDK_RUN_ASAN 00:04:25.003 ++ : 1 00:04:25.003 ++ export SPDK_RUN_UBSAN 00:04:25.003 ++ : 00:04:25.003 ++ export SPDK_RUN_EXTERNAL_DPDK 00:04:25.003 ++ : 0 00:04:25.003 ++ export SPDK_RUN_NON_ROOT 00:04:25.003 ++ : 0 00:04:25.003 ++ export SPDK_TEST_CRYPTO 00:04:25.003 ++ : 0 00:04:25.003 ++ export SPDK_TEST_FTL 00:04:25.003 ++ : 0 00:04:25.003 ++ export SPDK_TEST_OCF 00:04:25.003 ++ : 0 00:04:25.003 ++ export SPDK_TEST_VMD 00:04:25.003 ++ : 0 00:04:25.003 ++ export SPDK_TEST_OPAL 00:04:25.003 ++ : 00:04:25.003 ++ export SPDK_TEST_NATIVE_DPDK 00:04:25.003 ++ : true 00:04:25.003 ++ export SPDK_AUTOTEST_X 00:04:25.003 ++ : 1 00:04:25.003 ++ export SPDK_TEST_RAID5 00:04:25.003 ++ : 0 00:04:25.003 ++ export SPDK_TEST_URING 00:04:25.003 ++ : 0 00:04:25.003 ++ export SPDK_TEST_USDT 00:04:25.003 ++ : 0 00:04:25.003 ++ export SPDK_TEST_USE_IGB_UIO 00:04:25.003 ++ : 0 00:04:25.003 ++ export SPDK_TEST_SCHEDULER 00:04:25.003 ++ : 0 00:04:25.003 ++ export SPDK_TEST_SCANBUILD 00:04:25.003 ++ : 00:04:25.003 ++ export SPDK_TEST_NVMF_NICS 00:04:25.003 ++ : 0 00:04:25.003 ++ export SPDK_TEST_SMA 00:04:25.003 ++ : 0 00:04:25.003 ++ export SPDK_TEST_DAOS 00:04:25.004 ++ : 0 00:04:25.004 ++ export SPDK_TEST_XNVME 00:04:25.004 ++ : 0 00:04:25.004 ++ export SPDK_TEST_ACCEL_DSA 00:04:25.004 ++ : 0 00:04:25.004 ++ export SPDK_TEST_ACCEL_IAA 
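The long runs of "++ : 0" / "++ export SPDK_TEST_..." pairs in this stretch of the trace are the xtrace signature of a default-then-export idiom: assign a default only if the variable is unset, then export it. A minimal sketch of that pattern (the flag values here are illustrative, not taken from this run):

: "${RUN_NIGHTLY:=0}";        export RUN_NIGHTLY
: "${SPDK_TEST_UNITTEST:=0}"; export SPDK_TEST_UNITTEST
: "${SPDK_TEST_NVME:=0}";     export SPDK_TEST_NVME
# ':' is a no-op, so the only effect of expanding "${VAR:=default}" is the
# conditional assignment; under 'set -x' each line prints as '++ : <value>',
# which is exactly the shape of the trace entries above.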
00:04:25.004 ++ : 0 00:04:25.004 ++ export SPDK_TEST_ACCEL_IOAT 00:04:25.004 ++ : 00:04:25.004 ++ export SPDK_TEST_FUZZER_TARGET 00:04:25.004 ++ : 0 00:04:25.004 ++ export SPDK_TEST_NVMF_MDNS 00:04:25.004 ++ : 0 00:04:25.004 ++ export SPDK_JSONRPC_GO_CLIENT 00:04:25.004 ++ export SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:04:25.004 ++ SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:04:25.004 ++ export DPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build/lib 00:04:25.004 ++ DPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build/lib 00:04:25.004 ++ export VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:04:25.004 ++ VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:04:25.004 ++ export LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:04:25.004 ++ LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:04:25.004 ++ export PCI_BLOCK_SYNC_ON_RESET=yes 00:04:25.004 ++ PCI_BLOCK_SYNC_ON_RESET=yes 00:04:25.004 ++ export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:04:25.004 ++ PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:04:25.004 ++ export PYTHONDONTWRITEBYTECODE=1 00:04:25.004 ++ PYTHONDONTWRITEBYTECODE=1 00:04:25.004 ++ export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:04:25.004 ++ ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:04:25.004 ++ export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:04:25.004 ++ UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:04:25.004 ++ asan_suppression_file=/var/tmp/asan_suppression_file 00:04:25.004 ++ rm -rf /var/tmp/asan_suppression_file 00:04:25.004 ++ cat 00:04:25.004 ++ echo leak:libfuse3.so 00:04:25.004 ++ export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:04:25.004 ++ LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:04:25.004 ++ export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:04:25.004 ++ DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:04:25.004 ++ '[' -z /var/spdk/dependencies ']' 00:04:25.004 ++ export DEPENDENCY_DIR 00:04:25.004 ++ export SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:04:25.004 ++ SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:04:25.004 ++ export SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:04:25.004 ++ SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:04:25.004 ++ export QEMU_BIN= 00:04:25.004 ++ QEMU_BIN= 00:04:25.004 ++ export 'VFIO_QEMU_BIN=/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64' 00:04:25.004 ++ VFIO_QEMU_BIN='/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64' 00:04:25.004 ++ export AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:04:25.004 ++ 
AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:04:25.004 ++ export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:04:25.004 ++ UNBIND_ENTIRE_IOMMU_GROUP=yes 00:04:25.004 ++ _LCOV_MAIN=0 00:04:25.004 ++ _LCOV_LLVM=1 00:04:25.004 ++ _LCOV= 00:04:25.004 ++ [[ '' == *clang* ]] 00:04:25.004 ++ [[ 0 -eq 1 ]] 00:04:25.004 ++ _lcov_opt[_LCOV_LLVM]='--gcov-tool /home/vagrant/spdk_repo/spdk/test/fuzz/llvm/llvm-gcov.sh' 00:04:25.004 ++ _lcov_opt[_LCOV_MAIN]= 00:04:25.004 ++ lcov_opt= 00:04:25.004 ++ '[' 0 -eq 0 ']' 00:04:25.004 ++ export valgrind= 00:04:25.004 ++ valgrind= 00:04:25.004 +++ uname -s 00:04:25.004 ++ '[' Linux = Linux ']' 00:04:25.004 ++ HUGEMEM=4096 00:04:25.004 ++ export CLEAR_HUGE=yes 00:04:25.004 ++ CLEAR_HUGE=yes 00:04:25.004 ++ [[ 0 -eq 1 ]] 00:04:25.004 ++ [[ 0 -eq 1 ]] 00:04:25.004 ++ MAKE=make 00:04:25.004 +++ nproc 00:04:25.004 ++ MAKEFLAGS=-j10 00:04:25.004 ++ export HUGEMEM=4096 00:04:25.004 ++ HUGEMEM=4096 00:04:25.004 ++ '[' -z /home/vagrant/spdk_repo/spdk/../output ']' 00:04:25.004 ++ NO_HUGE=() 00:04:25.004 ++ TEST_MODE= 00:04:25.004 ++ [[ -z '' ]] 00:04:25.004 ++ PYTHONPATH+=:/home/vagrant/spdk_repo/spdk/test/rpc_plugins 00:04:25.004 ++ exec 00:04:25.004 ++ PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins 00:04:25.004 ++ /home/vagrant/spdk_repo/spdk/scripts/rpc.py --server 00:04:25.004 ++ set_test_storage 2147483648 00:04:25.004 ++ [[ -v testdir ]] 00:04:25.004 ++ local requested_size=2147483648 00:04:25.004 ++ local mount target_dir 00:04:25.004 ++ local -A mounts fss sizes avails uses 00:04:25.004 ++ local source fs size avail mount use 00:04:25.004 ++ local storage_fallback storage_candidates 00:04:25.004 +++ mktemp -udt spdk.XXXXXX 00:04:25.004 ++ storage_fallback=/tmp/spdk.br6DoM 00:04:25.004 ++ storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:04:25.004 ++ [[ -n '' ]] 00:04:25.004 ++ [[ -n '' ]] 00:04:25.004 ++ mkdir -p /home/vagrant/spdk_repo/spdk/test/unit /tmp/spdk.br6DoM/tests/unit /tmp/spdk.br6DoM 00:04:25.004 ++ requested_size=2214592512 00:04:25.004 ++ read -r source fs size use avail _ mount 00:04:25.004 +++ df -T 00:04:25.004 +++ grep -v Filesystem 00:04:25.004 ++ mounts["$mount"]=tmpfs 00:04:25.004 ++ fss["$mount"]=tmpfs 00:04:25.004 ++ avails["$mount"]=1252958208 00:04:25.004 ++ sizes["$mount"]=1254027264 00:04:25.004 ++ uses["$mount"]=1069056 00:04:25.004 ++ read -r source fs size use avail _ mount 00:04:25.004 ++ mounts["$mount"]=/dev/vda1 00:04:25.004 ++ fss["$mount"]=ext4 00:04:25.004 ++ avails["$mount"]=10283966464 00:04:25.004 ++ sizes["$mount"]=19681529856 00:04:25.004 ++ uses["$mount"]=9380786176 00:04:25.004 ++ read -r source fs size use avail _ mount 00:04:25.004 ++ mounts["$mount"]=tmpfs 00:04:25.004 ++ fss["$mount"]=tmpfs 00:04:25.004 ++ avails["$mount"]=6270115840 00:04:25.004 ++ sizes["$mount"]=6270115840 00:04:25.004 ++ uses["$mount"]=0 00:04:25.004 ++ read -r source fs size use avail _ mount 00:04:25.004 ++ mounts["$mount"]=tmpfs 00:04:25.004 ++ fss["$mount"]=tmpfs 00:04:25.004 ++ avails["$mount"]=5242880 00:04:25.004 ++ sizes["$mount"]=5242880 00:04:25.004 ++ uses["$mount"]=0 00:04:25.004 ++ read -r source fs size use avail _ mount 00:04:25.004 ++ mounts["$mount"]=/dev/vda16 00:04:25.004 ++ fss["$mount"]=ext4 00:04:25.004 ++ avails["$mount"]=777306112 00:04:25.004 ++ sizes["$mount"]=923156480 00:04:25.004 ++ uses["$mount"]=81207296 00:04:25.004 ++ read -r source fs 
size use avail _ mount 00:04:25.004 ++ mounts["$mount"]=/dev/vda15 00:04:25.004 ++ fss["$mount"]=vfat 00:04:25.004 ++ avails["$mount"]=103000064 00:04:25.004 ++ sizes["$mount"]=109395968 00:04:25.004 ++ uses["$mount"]=6395904 00:04:25.004 ++ read -r source fs size use avail _ mount 00:04:25.004 ++ mounts["$mount"]=tmpfs 00:04:25.004 ++ fss["$mount"]=tmpfs 00:04:25.004 ++ avails["$mount"]=1254010880 00:04:25.004 ++ sizes["$mount"]=1254023168 00:04:25.004 ++ uses["$mount"]=12288 00:04:25.004 ++ read -r source fs size use avail _ mount 00:04:25.004 ++ mounts["$mount"]=:/mnt/jenkins_nvme/jenkins/workspace/ubuntu24-vg-autotest/ubuntu2404-libvirt/output 00:04:25.004 ++ fss["$mount"]=fuse.sshfs 00:04:25.004 ++ avails["$mount"]=98015240192 00:04:25.004 ++ sizes["$mount"]=105088212992 00:04:25.004 ++ uses["$mount"]=1687539712 00:04:25.004 ++ read -r source fs size use avail _ mount 00:04:25.004 ++ printf '* Looking for test storage...\n' 00:04:25.004 * Looking for test storage... 00:04:25.004 ++ local target_space new_size 00:04:25.004 ++ for target_dir in "${storage_candidates[@]}" 00:04:25.004 +++ df /home/vagrant/spdk_repo/spdk/test/unit 00:04:25.004 +++ awk '$1 !~ /Filesystem/{print $6}' 00:04:25.004 ++ mount=/ 00:04:25.004 ++ target_space=10283966464 00:04:25.004 ++ (( target_space == 0 || target_space < requested_size )) 00:04:25.004 ++ (( target_space >= requested_size )) 00:04:25.004 ++ [[ ext4 == tmpfs ]] 00:04:25.004 ++ [[ ext4 == ramfs ]] 00:04:25.004 ++ [[ / == / ]] 00:04:25.004 ++ new_size=11595378688 00:04:25.004 ++ (( new_size * 100 / sizes[/] > 95 )) 00:04:25.004 ++ export SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/unit 00:04:25.004 ++ SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/unit 00:04:25.004 ++ printf '* Found test storage at %s\n' /home/vagrant/spdk_repo/spdk/test/unit 00:04:25.004 * Found test storage at /home/vagrant/spdk_repo/spdk/test/unit 00:04:25.004 ++ return 0 00:04:25.004 ++ set -o errtrace 00:04:25.004 ++ shopt -s extdebug 00:04:25.004 ++ trap 'trap - ERR; print_backtrace >&2' ERR 00:04:25.004 ++ PS4=' \t -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:04:25.004 21:26:45 -- common/autotest_common.sh@1682 -- # true 00:04:25.004 21:26:45 -- common/autotest_common.sh@1684 -- # xtrace_fd 00:04:25.004 21:26:45 -- common/autotest_common.sh@25 -- # [[ -n '' ]] 00:04:25.004 21:26:45 -- common/autotest_common.sh@29 -- # exec 00:04:25.004 21:26:45 -- common/autotest_common.sh@31 -- # xtrace_restore 00:04:25.004 21:26:45 -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 
0 : 0 - 1]' 00:04:25.004 21:26:45 -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:04:25.004 21:26:45 -- common/autotest_common.sh@18 -- # set -x 00:04:25.004 21:26:45 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:04:25.262 21:26:45 -- common/autotest_common.sh@1690 -- # lcov --version 00:04:25.262 21:26:45 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:04:25.262 21:26:45 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:04:25.262 21:26:45 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:04:25.262 21:26:45 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:04:25.262 21:26:45 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:04:25.262 21:26:45 -- scripts/common.sh@335 -- # IFS=.-: 00:04:25.262 21:26:45 -- scripts/common.sh@335 -- # read -ra ver1 00:04:25.262 21:26:45 -- scripts/common.sh@336 -- # IFS=.-: 00:04:25.262 21:26:45 -- scripts/common.sh@336 -- # read -ra ver2 00:04:25.262 21:26:45 -- scripts/common.sh@337 -- # local 'op=<' 00:04:25.262 21:26:45 -- scripts/common.sh@339 -- # ver1_l=2 00:04:25.262 21:26:45 -- scripts/common.sh@340 -- # ver2_l=1 00:04:25.262 21:26:45 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:04:25.262 21:26:45 -- scripts/common.sh@343 -- # case "$op" in 00:04:25.262 21:26:45 -- scripts/common.sh@344 -- # : 1 00:04:25.263 21:26:45 -- scripts/common.sh@363 -- # (( v = 0 )) 00:04:25.263 21:26:45 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:25.263 21:26:45 -- scripts/common.sh@364 -- # decimal 1 00:04:25.263 21:26:45 -- scripts/common.sh@352 -- # local d=1 00:04:25.263 21:26:45 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:25.263 21:26:45 -- scripts/common.sh@354 -- # echo 1 00:04:25.263 21:26:45 -- scripts/common.sh@364 -- # ver1[v]=1 00:04:25.263 21:26:45 -- scripts/common.sh@365 -- # decimal 2 00:04:25.263 21:26:45 -- scripts/common.sh@352 -- # local d=2 00:04:25.263 21:26:45 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:25.263 21:26:45 -- scripts/common.sh@354 -- # echo 2 00:04:25.263 21:26:45 -- scripts/common.sh@365 -- # ver2[v]=2 00:04:25.263 21:26:45 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:04:25.263 21:26:45 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:04:25.263 21:26:45 -- scripts/common.sh@367 -- # return 0 00:04:25.263 21:26:45 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:25.263 21:26:45 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:04:25.263 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:25.263 --rc genhtml_branch_coverage=1 00:04:25.263 --rc genhtml_function_coverage=1 00:04:25.263 --rc genhtml_legend=1 00:04:25.263 --rc geninfo_all_blocks=1 00:04:25.263 --rc geninfo_unexecuted_blocks=1 00:04:25.263 00:04:25.263 ' 00:04:25.263 21:26:45 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:04:25.263 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:25.263 --rc genhtml_branch_coverage=1 00:04:25.263 --rc genhtml_function_coverage=1 00:04:25.263 --rc genhtml_legend=1 00:04:25.263 --rc geninfo_all_blocks=1 00:04:25.263 --rc geninfo_unexecuted_blocks=1 00:04:25.263 00:04:25.263 ' 00:04:25.263 21:26:45 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:04:25.263 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:25.263 --rc genhtml_branch_coverage=1 00:04:25.263 --rc genhtml_function_coverage=1 00:04:25.263 --rc genhtml_legend=1 00:04:25.263 --rc geninfo_all_blocks=1 00:04:25.263 --rc 
geninfo_unexecuted_blocks=1 00:04:25.263 00:04:25.263 ' 00:04:25.263 21:26:45 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:04:25.263 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:25.263 --rc genhtml_branch_coverage=1 00:04:25.263 --rc genhtml_function_coverage=1 00:04:25.263 --rc genhtml_legend=1 00:04:25.263 --rc geninfo_all_blocks=1 00:04:25.263 --rc geninfo_unexecuted_blocks=1 00:04:25.263 00:04:25.263 ' 00:04:25.263 21:26:45 -- unit/unittest.sh@17 -- # cd /home/vagrant/spdk_repo/spdk 00:04:25.263 21:26:45 -- unit/unittest.sh@151 -- # '[' 0 -eq 1 ']' 00:04:25.263 21:26:45 -- unit/unittest.sh@158 -- # '[' -z x ']' 00:04:25.263 21:26:45 -- unit/unittest.sh@165 -- # '[' 0 -eq 1 ']' 00:04:25.263 21:26:45 -- unit/unittest.sh@174 -- # [[ y == y ]] 00:04:25.263 21:26:45 -- unit/unittest.sh@175 -- # UT_COVERAGE=/home/vagrant/spdk_repo/spdk/../output/ut_coverage 00:04:25.263 21:26:45 -- unit/unittest.sh@176 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/ut_coverage 00:04:25.263 21:26:45 -- unit/unittest.sh@178 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -d . -t Baseline -o /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_base.info 00:04:40.138 /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno:no functions found 00:04:40.138 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno 00:04:40.138 /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno:no functions found 00:04:40.138 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno 00:04:40.138 /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno:no functions found 00:04:40.138 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno 00:05:18.896 21:27:38 -- unit/unittest.sh@182 -- # uname -m 00:05:18.896 21:27:38 -- unit/unittest.sh@182 -- # '[' x86_64 = aarch64 ']' 00:05:18.896 21:27:38 -- unit/unittest.sh@186 -- # run_test unittest_pci_event /home/vagrant/spdk_repo/spdk/test/unit/lib/env_dpdk/pci_event.c/pci_event_ut 00:05:18.896 21:27:38 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:18.896 21:27:38 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:18.896 21:27:38 -- common/autotest_common.sh@10 -- # set +x 00:05:18.896 ************************************ 00:05:18.896 START TEST unittest_pci_event 00:05:18.896 ************************************ 00:05:18.896 21:27:38 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/env_dpdk/pci_event.c/pci_event_ut 00:05:18.896 00:05:18.896 00:05:18.896 CUnit - A unit testing framework for C - Version 2.1-3 00:05:18.896 http://cunit.sourceforge.net/ 00:05:18.896 00:05:18.896 00:05:18.896 Suite: pci_event 00:05:18.896 Test: test_pci_parse_event ...passed 00:05:18.896 00:05:18.896 Run Summary: Type Total Ran Passed Failed Inactive 00:05:18.896 suites 1 1 n/a 0 0 00:05:18.896 tests 1 1 1 0 0 00:05:18.896 asserts 15 15 15 0 n/a 00:05:18.896 00:05:18.896 Elapsed time = 0.001 seconds 00:05:18.896 [2024-12-06 21:27:38.653952] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci_event.c: 162:parse_subsystem_event: *ERROR*: Invalid format for PCI device BDF: 0000 00:05:18.896 [2024-12-06 21:27:38.654371] 
/home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci_event.c: 185:parse_subsystem_event: *ERROR*: Invalid format for PCI device BDF: 000000 00:05:18.896 00:05:18.896 real 0m0.041s 00:05:18.896 user 0m0.017s 00:05:18.896 sys 0m0.019s 00:05:18.896 21:27:38 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:18.896 21:27:38 -- common/autotest_common.sh@10 -- # set +x 00:05:18.896 ************************************ 00:05:18.896 END TEST unittest_pci_event 00:05:18.896 ************************************ 00:05:18.896 21:27:38 -- unit/unittest.sh@187 -- # run_test unittest_include /home/vagrant/spdk_repo/spdk/test/unit/include/spdk/histogram_data.h/histogram_ut 00:05:18.896 21:27:38 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:18.896 21:27:38 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:18.896 21:27:38 -- common/autotest_common.sh@10 -- # set +x 00:05:18.896 ************************************ 00:05:18.896 START TEST unittest_include 00:05:18.896 ************************************ 00:05:18.896 21:27:38 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/unit/include/spdk/histogram_data.h/histogram_ut 00:05:18.896 00:05:18.896 00:05:18.896 CUnit - A unit testing framework for C - Version 2.1-3 00:05:18.896 http://cunit.sourceforge.net/ 00:05:18.896 00:05:18.896 00:05:18.896 Suite: histogram 00:05:18.896 Test: histogram_test ...passed 00:05:18.896 Test: histogram_merge ...passed 00:05:18.896 00:05:18.896 Run Summary: Type Total Ran Passed Failed Inactive 00:05:18.896 suites 1 1 n/a 0 0 00:05:18.896 tests 2 2 2 0 0 00:05:18.896 asserts 50 50 50 0 n/a 00:05:18.896 00:05:18.896 Elapsed time = 0.006 seconds 00:05:18.896 00:05:18.896 real 0m0.036s 00:05:18.896 user 0m0.020s 00:05:18.896 sys 0m0.017s 00:05:18.896 21:27:38 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:18.896 21:27:38 -- common/autotest_common.sh@10 -- # set +x 00:05:18.896 ************************************ 00:05:18.896 END TEST unittest_include 00:05:18.896 ************************************ 00:05:18.896 21:27:38 -- unit/unittest.sh@188 -- # run_test unittest_bdev unittest_bdev 00:05:18.896 21:27:38 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:18.896 21:27:38 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:18.896 21:27:38 -- common/autotest_common.sh@10 -- # set +x 00:05:18.896 ************************************ 00:05:18.896 START TEST unittest_bdev 00:05:18.897 ************************************ 00:05:18.897 21:27:38 -- common/autotest_common.sh@1114 -- # unittest_bdev 00:05:18.897 21:27:38 -- unit/unittest.sh@20 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/bdev.c/bdev_ut 00:05:18.897 00:05:18.897 00:05:18.897 CUnit - A unit testing framework for C - Version 2.1-3 00:05:18.897 http://cunit.sourceforge.net/ 00:05:18.897 00:05:18.897 00:05:18.897 Suite: bdev 00:05:18.897 Test: bytes_to_blocks_test ...passed 00:05:18.897 Test: num_blocks_test ...passed 00:05:18.897 Test: io_valid_test ...passed 00:05:18.897 Test: open_write_test ...[2024-12-06 21:27:38.876182] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:7940:bdev_open: *ERROR*: bdev bdev1 already claimed: type exclusive_write by module bdev_ut 00:05:18.897 [2024-12-06 21:27:38.876489] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:7940:bdev_open: *ERROR*: bdev bdev4 already claimed: type exclusive_write by module bdev_ut 00:05:18.897 [2024-12-06 21:27:38.876635] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:7940:bdev_open: *ERROR*: bdev bdev5 already claimed: type 
exclusive_write by module bdev_ut 00:05:18.897 passed 00:05:18.897 Test: claim_test ...passed 00:05:18.897 Test: alias_add_del_test ...[2024-12-06 21:27:38.936145] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:4553:bdev_name_add: *ERROR*: Bdev name bdev0 already exists 00:05:18.897 [2024-12-06 21:27:38.936242] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:4583:spdk_bdev_alias_add: *ERROR*: Empty alias passed 00:05:18.897 [2024-12-06 21:27:38.936298] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:4553:bdev_name_add: *ERROR*: Bdev name proper alias 0 already exists 00:05:18.897 passed 00:05:18.897 Test: get_device_stat_test ...passed 00:05:18.897 Test: bdev_io_types_test ...passed 00:05:18.897 Test: bdev_io_wait_test ...passed 00:05:18.897 Test: bdev_io_spans_split_test ...passed 00:05:18.897 Test: bdev_io_boundary_split_test ...passed 00:05:18.897 Test: bdev_io_max_size_and_segment_split_test ...[2024-12-06 21:27:39.053054] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:3185:_bdev_rw_split: *ERROR*: The first child io was less than a block size 00:05:18.897 passed 00:05:18.897 Test: bdev_io_mix_split_test ...passed 00:05:18.897 Test: bdev_io_split_with_io_wait ...passed 00:05:18.897 Test: bdev_io_write_unit_split_test ...[2024-12-06 21:27:39.124925] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:2742:bdev_io_do_submit: *ERROR*: IO num_blocks 31 does not match the write_unit_size 32 00:05:18.897 [2024-12-06 21:27:39.125023] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:2742:bdev_io_do_submit: *ERROR*: IO num_blocks 31 does not match the write_unit_size 32 00:05:18.897 [2024-12-06 21:27:39.125051] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:2742:bdev_io_do_submit: *ERROR*: IO num_blocks 1 does not match the write_unit_size 32 00:05:18.897 [2024-12-06 21:27:39.125104] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:2742:bdev_io_do_submit: *ERROR*: IO num_blocks 32 does not match the write_unit_size 64 00:05:18.897 passed 00:05:18.897 Test: bdev_io_alignment_with_boundary ...passed 00:05:18.897 Test: bdev_io_alignment ...passed 00:05:18.897 Test: bdev_histograms ...passed 00:05:18.897 Test: bdev_write_zeroes ...passed 00:05:18.897 Test: bdev_compare_and_write ...passed 00:05:18.897 Test: bdev_compare ...passed 00:05:18.897 Test: bdev_compare_emulated ...passed 00:05:18.897 Test: bdev_zcopy_write ...passed 00:05:18.897 Test: bdev_zcopy_read ...passed 00:05:18.897 Test: bdev_open_while_hotremove ...passed 00:05:18.897 Test: bdev_close_while_hotremove ...passed 00:05:18.897 Test: bdev_open_ext_test ...passed 00:05:18.897 Test: bdev_open_ext_unregister ...[2024-12-06 21:27:39.385338] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8046:spdk_bdev_open_ext: *ERROR*: Missing event callback function 00:05:18.897 passed 00:05:18.897 Test: bdev_set_io_timeout ...[2024-12-06 21:27:39.385512] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8046:spdk_bdev_open_ext: *ERROR*: Missing event callback function 00:05:19.156 passed 00:05:19.156 Test: bdev_set_qd_sampling ...passed 00:05:19.156 Test: lba_range_overlap ...passed 00:05:19.156 Test: lock_lba_range_check_ranges ...passed 00:05:19.156 Test: lock_lba_range_with_io_outstanding ...passed 00:05:19.156 Test: lock_lba_range_overlapped ...passed 00:05:19.156 Test: bdev_quiesce ...[2024-12-06 21:27:39.497763] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:9969:_spdk_bdev_quiesce: *ERROR*: The range to unquiesce was not found. 
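The "START TEST" / "END TEST" banners that frame each unittest binary in this log come from a run_test-style wrapper. A minimal re-creation of that shape (an editor's sketch, not the actual autotest_common.sh implementation):

run_test() {
    local name=$1; shift
    printf '%s\n' '************************************' "START TEST $name" '************************************'
    local rc=0
    "$@" || rc=$?    # run the test command, remember its exit status
    printf '%s\n' '************************************' "END TEST $name" '************************************'
    return "$rc"     # propagate failure so the caller can abort the stage
}

# e.g.: run_test unittest_include /home/vagrant/spdk_repo/spdk/test/unit/include/spdk/histogram_data.h/histogram_ut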
00:05:19.156 passed 00:05:19.156 Test: bdev_io_abort ...passed 00:05:19.156 Test: bdev_unmap ...passed 00:05:19.156 Test: bdev_write_zeroes_split_test ...passed 00:05:19.156 Test: bdev_set_options_test ...passed 00:05:19.156 Test: bdev_get_memory_domains ...passed 00:05:19.156 Test: bdev_io_ext ...[2024-12-06 21:27:39.579557] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c: 485:spdk_bdev_set_opts: *ERROR*: opts_size inside opts cannot be zero value 00:05:19.156 passed 00:05:19.156 Test: bdev_io_ext_no_opts ...passed 00:05:19.156 Test: bdev_io_ext_invalid_opts ...passed 00:05:19.417 Test: bdev_io_ext_split ...passed 00:05:19.417 Test: bdev_io_ext_bounce_buffer ...passed 00:05:19.417 Test: bdev_register_uuid_alias ...[2024-12-06 21:27:39.697912] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:4553:bdev_name_add: *ERROR*: Bdev name 3700e20f-065e-415d-a24a-3b37b833eb83 already exists 00:05:19.417 [2024-12-06 21:27:39.698024] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:7603:bdev_register: *ERROR*: Unable to add uuid:3700e20f-065e-415d-a24a-3b37b833eb83 alias for bdev bdev0 00:05:19.417 passed 00:05:19.417 Test: bdev_unregister_by_name ...[2024-12-06 21:27:39.718494] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:7836:spdk_bdev_unregister_by_name: *ERROR*: Failed to open bdev with name: bdev1 00:05:19.417 passed 00:05:19.417 Test: for_each_bdev_test ...[2024-12-06 21:27:39.718554] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:7844:spdk_bdev_unregister_by_name: *ERROR*: Bdev bdev was not registered by the specified module. 00:05:19.417 passed 00:05:19.417 Test: bdev_seek_test ...passed 00:05:19.417 Test: bdev_copy ...passed 00:05:19.417 Test: bdev_copy_split_test ...passed 00:05:19.417 Test: examine_locks ...passed 00:05:19.417 Test: claim_v2_rwo ...[2024-12-06 21:27:39.784082] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:7940:bdev_open: *ERROR*: bdev bdev0 already claimed: type read_many_write_one by module bdev_ut 00:05:19.417 [2024-12-06 21:27:39.784200] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8570:claim_verify_rwo: *ERROR*: bdev bdev0 already claimed: type read_many_write_one by module bdev_ut 00:05:19.417 [2024-12-06 21:27:39.784228] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8735:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type read_many_write_one by module bdev_ut 00:05:19.417 [2024-12-06 21:27:39.784245] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8735:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type read_many_write_one by module bdev_ut 00:05:19.417 passed 00:05:19.418 Test: claim_v2_rom ...[2024-12-06 21:27:39.784281] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8407:spdk_bdev_module_claim_bdev: *ERROR*: bdev bdev0 already claimed: type read_many_write_one by module bdev_ut 00:05:19.418 [2024-12-06 21:27:39.784327] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8565:claim_verify_rwo: *ERROR*: bdev0: key option not supported with read-write-once claims 00:05:19.418 [2024-12-06 21:27:39.784522] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:7940:bdev_open: *ERROR*: bdev bdev0 already claimed: type read_many_write_none by module bdev_ut 00:05:19.418 [2024-12-06 21:27:39.784563] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8735:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type read_many_write_none by module bdev_ut 00:05:19.418 passed 00:05:19.418 Test: claim_v2_rwm ...[2024-12-06 21:27:39.784580] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8735:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: 
type read_many_write_none by module bdev_ut 00:05:19.418 [2024-12-06 21:27:39.784593] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8407:spdk_bdev_module_claim_bdev: *ERROR*: bdev bdev0 already claimed: type read_many_write_none by module bdev_ut 00:05:19.418 [2024-12-06 21:27:39.784626] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8608:claim_verify_rom: *ERROR*: bdev0: key option not supported with read-only-may claims 00:05:19.418 [2024-12-06 21:27:39.784648] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8603:claim_verify_rom: *ERROR*: bdev0: Cannot obtain read-only-many claim with writable descriptor 00:05:19.418 [2024-12-06 21:27:39.784741] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8638:claim_verify_rwm: *ERROR*: bdev0: shared_claim_key option required with read-write-may claims 00:05:19.418 [2024-12-06 21:27:39.784771] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:7940:bdev_open: *ERROR*: bdev bdev0 already claimed: type read_many_write_many by module bdev_ut 00:05:19.418 passed 00:05:19.418 Test: claim_v2_existing_writer ...[2024-12-06 21:27:39.784801] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8735:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type read_many_write_many by module bdev_ut 00:05:19.418 [2024-12-06 21:27:39.784815] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8735:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type read_many_write_many by module bdev_ut 00:05:19.418 [2024-12-06 21:27:39.784830] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8407:spdk_bdev_module_claim_bdev: *ERROR*: bdev bdev0 already claimed: type read_many_write_many by module bdev_ut 00:05:19.418 [2024-12-06 21:27:39.784842] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8658:claim_verify_rwm: *ERROR*: bdev bdev0 already claimed with another key: type read_many_write_many by module bdev_ut 00:05:19.418 [2024-12-06 21:27:39.784877] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8638:claim_verify_rwm: *ERROR*: bdev0: shared_claim_key option required with read-write-may claims 00:05:19.418 passed 00:05:19.418 Test: claim_v2_existing_v1 ...passed 00:05:19.418 Test: claim_v1_existing_v2 ...[2024-12-06 21:27:39.784993] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8603:claim_verify_rom: *ERROR*: bdev0: Cannot obtain read-only-many claim with writable descriptor 00:05:19.418 [2024-12-06 21:27:39.785015] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8603:claim_verify_rom: *ERROR*: bdev0: Cannot obtain read-only-many claim with writable descriptor 00:05:19.418 [2024-12-06 21:27:39.785115] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8735:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type exclusive_write by module bdev_ut 00:05:19.418 [2024-12-06 21:27:39.785142] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8735:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type exclusive_write by module bdev_ut 00:05:19.418 [2024-12-06 21:27:39.785155] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8735:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type exclusive_write by module bdev_ut 00:05:19.418 [2024-12-06 21:27:39.785250] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8407:spdk_bdev_module_claim_bdev: *ERROR*: bdev bdev0 already claimed: type read_many_write_one by module bdev_ut 00:05:19.418 [2024-12-06 21:27:39.785285] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8407:spdk_bdev_module_claim_bdev: *ERROR*: bdev bdev0 already claimed: type read_many_write_many by module bdev_ut 00:05:19.418 passed
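Each CUnit suite in this log closes with a fixed-format "Run Summary" table (Type, Total, Ran, Passed, Failed, Inactive). Because the Jenkins timestamp prefixes every line, the counts sit in fields 3 through 7; a small awk sketch that flags non-zero failure counts (the captured log file name is hypothetical):

awk '$2 == "tests" || $2 == "asserts" {
         if ($6 != "0" && $6 != "n/a")      # field 6 is the Failed column
             printf "line %d: %s failed=%s\n", NR, $2, $6
     }' console.log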
00:05:19.418 Test: examine_claimed ...[2024-12-06 21:27:39.785313] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8407:spdk_bdev_module_claim_bdev: *ERROR*: bdev bdev0 already claimed: type read_many_write_none by module bdev_ut 00:05:19.418 passed 00:05:19.418 00:05:19.418 Run Summary: Type Total Ran Passed Failed Inactive 00:05:19.418 suites 1 1 n/a 0 0 00:05:19.418 tests 59 59 59 0 0 00:05:19.418 asserts 4599 4599 4599 0 n/a 00:05:19.418 00:05:19.418 Elapsed time = 0.948 seconds 00:05:19.418 [2024-12-06 21:27:39.785575] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8735:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type read_many_write_one by module vbdev_ut_examine1 00:05:19.418 21:27:39 -- unit/unittest.sh@21 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/nvme/bdev_nvme.c/bdev_nvme_ut 00:05:19.418 00:05:19.418 00:05:19.418 CUnit - A unit testing framework for C - Version 2.1-3 00:05:19.418 http://cunit.sourceforge.net/ 00:05:19.418 00:05:19.418 00:05:19.418 Suite: nvme 00:05:19.418 Test: test_create_ctrlr ...passed 00:05:19.418 Test: test_reset_ctrlr ...[2024-12-06 21:27:39.828958] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:05:19.418 passed 00:05:19.418 Test: test_race_between_reset_and_destruct_ctrlr ...passed 00:05:19.418 Test: test_failover_ctrlr ...passed 00:05:19.418 Test: test_race_between_failover_and_add_secondary_trid ...[2024-12-06 21:27:39.831165] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:05:19.418 [2024-12-06 21:27:39.831398] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:05:19.418 [2024-12-06 21:27:39.831589] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:05:19.418 passed 00:05:19.418 Test: test_pending_reset ...[2024-12-06 21:27:39.832881] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:05:19.418 [2024-12-06 21:27:39.833111] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:05:19.418 passed 00:05:19.418 Test: test_attach_ctrlr ...[2024-12-06 21:27:39.834097] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:4236:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:05:19.418 passed 00:05:19.418 Test: test_aer_cb ...passed 00:05:19.418 Test: test_submit_nvme_cmd ...passed 00:05:19.418 Test: test_add_remove_trid ...passed 00:05:19.418 Test: test_abort ...[2024-12-06 21:27:39.836815] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:7227:bdev_nvme_comparev_and_writev_done: *ERROR*: Unexpected write success after compare failure. 00:05:19.418 passed 00:05:19.418 Test: test_get_io_qpair ...passed 00:05:19.418 Test: test_bdev_unregister ...passed 00:05:19.418 Test: test_compare_ns ...passed 00:05:19.418 Test: test_init_ana_log_page ...passed 00:05:19.418 Test: test_get_memory_domains ...passed 00:05:19.418 Test: test_reconnect_qpair ...[2024-12-06 21:27:39.839123] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
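Stepping back to the set_test_storage walk earlier in this trace (the df -T loop and the "* Looking for test storage..." banner): it indexes each mount point into associative arrays, then settles on the first candidate directory whose filesystem still has the requested free space. A condensed sketch of that logic, with the candidate list shortened:

requested=2147483648                        # 2 GiB, as requested in the trace
declare -A avails
while read -r _src _fs _size _used avail _pct mount; do
    avails["$mount"]=$avail                 # bytes free per mount point
done < <(df -T -B1 | tail -n +2)            # -B1 reports bytes; skip the header row
for dir in /home/vagrant/spdk_repo/spdk/test/unit /tmp; do
    mount=$(df --output=target "$dir" 2>/dev/null | tail -n 1) || continue
    if (( ${avails[$mount]:-0} >= requested )); then
        printf '* Found test storage at %s\n' "$dir"
        break
    fi
done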
00:05:19.418 passed 00:05:19.418 Test: test_create_bdev_ctrlr ...[2024-12-06 21:27:39.839575] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:5279:bdev_nvme_check_multipath: *ERROR*: cntlid 18 are duplicated. 00:05:19.418 passed 00:05:19.418 Test: test_add_multi_ns_to_bdev ...[2024-12-06 21:27:39.840632] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:4492:nvme_bdev_add_ns: *ERROR*: Namespaces are not identical. 00:05:19.418 passed 00:05:19.418 Test: test_add_multi_io_paths_to_nbdev_ch ...passed 00:05:19.418 Test: test_admin_path ...passed 00:05:19.418 Test: test_reset_bdev_ctrlr ...passed 00:05:19.418 Test: test_find_io_path ...passed 00:05:19.418 Test: test_retry_io_if_ana_state_is_updating ...passed 00:05:19.418 Test: test_retry_io_for_io_path_error ...passed 00:05:19.418 Test: test_retry_io_count ...passed 00:05:19.418 Test: test_concurrent_read_ana_log_page ...passed 00:05:19.418 Test: test_retry_io_for_ana_error ...passed 00:05:19.418 Test: test_check_io_error_resiliency_params ...[2024-12-06 21:27:39.846367] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:5932:bdev_nvme_check_io_error_resiliency_params: *ERROR*: ctrlr_loss_timeout_sec can't be less than -1. 00:05:19.419 [2024-12-06 21:27:39.846442] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:5936:bdev_nvme_check_io_error_resiliency_params: *ERROR*: reconnect_delay_sec can't be 0 if ctrlr_loss_timeout_sec is not 0. 00:05:19.419 [2024-12-06 21:27:39.846480] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:5945:bdev_nvme_check_io_error_resiliency_params: *ERROR*: reconnect_delay_sec can't be 0 if ctrlr_loss_timeout_sec is not 0. 00:05:19.419 [2024-12-06 21:27:39.846495] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:5948:bdev_nvme_check_io_error_resiliency_params: *ERROR*: reconnect_delay_sec can't be more than ctrlr_loss_timeout_sec. 00:05:19.419 [2024-12-06 21:27:39.846505] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:5960:bdev_nvme_check_io_error_resiliency_params: *ERROR*: Both reconnect_delay_sec and fast_io_fail_timeout_sec must be 0 if ctrlr_loss_timeout_sec is 0. 00:05:19.419 [2024-12-06 21:27:39.846523] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:5960:bdev_nvme_check_io_error_resiliency_params: *ERROR*: Both reconnect_delay_sec and fast_io_fail_timeout_sec must be 0 if ctrlr_loss_timeout_sec is 0. 00:05:19.419 [2024-12-06 21:27:39.846536] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:5940:bdev_nvme_check_io_error_resiliency_params: *ERROR*: reconnect_delay_sec can't be more than fast_io-fail_timeout_sec. 00:05:19.419 [2024-12-06 21:27:39.846571] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:5955:bdev_nvme_check_io_error_resiliency_params: *ERROR*: fast_io_fail_timeout_sec can't be more than ctrlr_loss_timeout_sec. 00:05:19.419 passed 00:05:19.419 Test: test_retry_io_if_ctrlr_is_resetting ...[2024-12-06 21:27:39.846594] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:5952:bdev_nvme_check_io_error_resiliency_params: *ERROR*: reconnect_delay_sec can't be more than fast_io_fail_timeout_sec. 00:05:19.419 passed 00:05:19.419 Test: test_reconnect_ctrlr ...[2024-12-06 21:27:39.847253] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:05:19.419 [2024-12-06 21:27:39.847394] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
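The bdev_nvme_check_io_error_resiliency_params errors a few lines up spell out the constraints between ctrlr_loss_timeout_sec, reconnect_delay_sec and fast_io_fail_timeout_sec. Distilled into a pre-flight check one could run before calling rpc.py bdev_nvme_set_options (a sketch derived only from those messages, not from the SPDK source):

check_resiliency_params() {
    local loss=$1 delay=$2 fast=$3
    (( loss < -1 )) && return 1                      # ctrlr_loss_timeout_sec can't be less than -1
    if (( loss == 0 )); then
        (( delay == 0 && fast == 0 )) || return 1    # both must be 0 when loss timeout is 0
    else
        (( delay == 0 )) && return 1                 # reconnect_delay_sec can't be 0 when loss != 0
        (( loss > 0 && delay > loss )) && return 1   # delay can't exceed a positive loss timeout
        (( loss > 0 && fast > loss )) && return 1    # fast_io_fail can't exceed a positive loss timeout
    fi
    (( fast != 0 && delay > fast )) && return 1      # delay can't exceed fast_io_fail when the latter is set
    return 0
}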
00:05:19.419 [2024-12-06 21:27:39.847653] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:05:19.419 [2024-12-06 21:27:39.847758] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:05:19.419 [2024-12-06 21:27:39.847856] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:05:19.419 passed 00:05:19.419 Test: test_retry_failover_ctrlr ...[2024-12-06 21:27:39.848245] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:05:19.419 passed 00:05:19.419 Test: test_fail_path ...[2024-12-06 21:27:39.848790] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:05:19.419 [2024-12-06 21:27:39.848964] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:05:19.419 [2024-12-06 21:27:39.849091] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:05:19.419 [2024-12-06 21:27:39.849177] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:05:19.419 [2024-12-06 21:27:39.849279] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:05:19.419 passed 00:05:19.419 Test: test_nvme_ns_cmp ...passed 00:05:19.419 Test: test_ana_transition ...passed 00:05:19.419 Test: test_set_preferred_path ...passed 00:05:19.419 Test: test_find_next_io_path ...passed 00:05:19.419 Test: test_find_io_path_min_qd ...passed 00:05:19.419 Test: test_disable_auto_failback ...[2024-12-06 21:27:39.850752] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:05:19.419 passed 00:05:19.419 Test: test_set_multipath_policy ...passed 00:05:19.419 Test: test_uuid_generation ...passed 00:05:19.419 Test: test_retry_io_to_same_path ...passed 00:05:19.419 Test: test_race_between_reset_and_disconnected ...passed 00:05:19.419 Test: test_ctrlr_op_rpc ...passed 00:05:19.419 Test: test_bdev_ctrlr_op_rpc ...passed 00:05:19.419 Test: test_disable_enable_ctrlr ...[2024-12-06 21:27:39.854194] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:05:19.419 [2024-12-06 21:27:39.854360] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
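Returning briefly to the sanitizer setup near the top of this excerpt: leak reports from third-party libraries are silenced by generating an LSAN suppression file on the fly and pointing LSAN_OPTIONS at it, alongside the ASAN/UBSAN options exported there. A minimal sketch of that step, using the values visible in the trace:

supp=/var/tmp/asan_suppression_file
rm -f "$supp"
echo 'leak:libfuse3.so' >> "$supp"          # the one suppression added in this run
export LSAN_OPTIONS="suppressions=$supp"
export ASAN_OPTIONS='new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0'
export UBSAN_OPTIONS='halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134'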
00:05:19.419 passed 00:05:19.419 Test: test_delete_ctrlr_done ...passed 00:05:19.419 Test: test_ns_remove_during_reset ...passed 00:05:19.419 00:05:19.419 Run Summary: Type Total Ran Passed Failed Inactive 00:05:19.419 suites 1 1 n/a 0 0 00:05:19.419 tests 48 48 48 0 0 00:05:19.419 asserts 3553 3553 3553 0 n/a 00:05:19.419 00:05:19.419 Elapsed time = 0.027 seconds 00:05:19.419 21:27:39 -- unit/unittest.sh@22 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/raid/bdev_raid.c/bdev_raid_ut 00:05:19.419 Test Options 00:05:19.419 blocklen = 4096, strip_size = 64, max_io_size = 1024, g_max_base_drives = 32, g_max_raids = 2 00:05:19.419 00:05:19.419 00:05:19.419 CUnit - A unit testing framework for C - Version 2.1-3 00:05:19.419 http://cunit.sourceforge.net/ 00:05:19.419 00:05:19.419 00:05:19.419 Suite: raid 00:05:19.419 Test: test_create_raid ...passed 00:05:19.419 Test: test_create_raid_superblock ...passed 00:05:19.419 Test: test_delete_raid ...passed 00:05:19.419 Test: test_create_raid_invalid_args ...[2024-12-06 21:27:39.903533] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid.c:1357:_raid_bdev_create: *ERROR*: Unsupported raid level '-1' 00:05:19.419 [2024-12-06 21:27:39.903890] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid.c:1351:_raid_bdev_create: *ERROR*: Invalid strip size 1231 00:05:19.419 [2024-12-06 21:27:39.904547] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid.c:1341:_raid_bdev_create: *ERROR*: Duplicate raid bdev name found: raid1 00:05:19.419 [2024-12-06 21:27:39.904782] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid.c:2934:raid_bdev_configure_base_bdev: *ERROR*: Unable to claim this bdev as it is already claimed 00:05:19.419 [2024-12-06 21:27:39.905613] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid.c:2934:raid_bdev_configure_base_bdev: *ERROR*: Unable to claim this bdev as it is already claimed 00:05:19.419 passed 00:05:19.419 Test: test_delete_raid_invalid_args ...passed 00:05:19.419 Test: test_io_channel ...passed 00:05:19.419 Test: test_reset_io ...passed 00:05:19.419 Test: test_write_io ...passed 00:05:19.419 Test: test_read_io ...passed 00:05:19.985 Test: test_unmap_io ...passed 00:05:19.985 Test: test_io_failure ...[2024-12-06 21:27:40.436546] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid.c: 832:raid_bdev_submit_request: *ERROR*: submit request, invalid io type 0 00:05:19.985 passed 00:05:19.985 Test: test_multi_raid_no_io ...passed 00:05:19.985 Test: test_multi_raid_with_io ...passed 00:05:19.985 Test: test_io_type_supported ...passed 00:05:19.985 Test: test_raid_json_dump_info ...passed 00:05:19.985 Test: test_context_size ...passed 00:05:19.985 Test: test_raid_level_conversions ...passed 00:05:19.985 Test: test_raid_process ...passed 00:05:19.985 Test: test_raid_io_split ...passed 00:05:19.985 00:05:19.985 Run Summary: Type Total Ran Passed Failed Inactive 00:05:19.985 suites 1 1 n/a 0 0 00:05:19.985 tests 19 19 19 0 0 00:05:19.985 asserts 177879 177879 177879 0 n/a 00:05:19.985 00:05:19.985 Elapsed time = 0.544 seconds 00:05:19.985 21:27:40 -- unit/unittest.sh@23 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/raid/bdev_raid_sb.c/bdev_raid_sb_ut 00:05:20.245 00:05:20.245 00:05:20.245 CUnit - A unit testing framework for C - Version 2.1-3 00:05:20.245 http://cunit.sourceforge.net/ 00:05:20.245 00:05:20.245 00:05:20.245 Suite: raid_sb 00:05:20.245 Test: test_raid_bdev_write_superblock ...passed 00:05:20.245 Test: test_raid_bdev_load_base_bdev_superblock ...passed 00:05:20.245 Test: 
test_raid_bdev_parse_superblock ...[2024-12-06 21:27:40.486342] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid_sb.c: 120:raid_bdev_parse_superblock: *ERROR*: Not supported superblock major version 9999 on bdev test_bdev 00:05:20.245 passed 00:05:20.245 00:05:20.245 Run Summary: Type Total Ran Passed Failed Inactive 00:05:20.245 suites 1 1 n/a 0 0 00:05:20.245 tests 3 3 3 0 0 00:05:20.245 asserts 32 32 32 0 n/a 00:05:20.245 00:05:20.245 Elapsed time = 0.001 seconds 00:05:20.245 21:27:40 -- unit/unittest.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/raid/concat.c/concat_ut 00:05:20.245 00:05:20.245 00:05:20.245 CUnit - A unit testing framework for C - Version 2.1-3 00:05:20.245 http://cunit.sourceforge.net/ 00:05:20.245 00:05:20.245 00:05:20.245 Suite: concat 00:05:20.245 Test: test_concat_start ...passed 00:05:20.245 Test: test_concat_rw ...passed 00:05:20.245 Test: test_concat_null_payload ...passed 00:05:20.245 00:05:20.245 Run Summary: Type Total Ran Passed Failed Inactive 00:05:20.245 suites 1 1 n/a 0 0 00:05:20.245 tests 3 3 3 0 0 00:05:20.245 asserts 8097 8097 8097 0 n/a 00:05:20.245 00:05:20.245 Elapsed time = 0.007 seconds 00:05:20.245 21:27:40 -- unit/unittest.sh@25 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/raid/raid1.c/raid1_ut 00:05:20.245 00:05:20.245 00:05:20.245 CUnit - A unit testing framework for C - Version 2.1-3 00:05:20.245 http://cunit.sourceforge.net/ 00:05:20.245 00:05:20.245 00:05:20.245 Suite: raid1 00:05:20.245 Test: test_raid1_start ...passed 00:05:20.245 Test: test_raid1_read_balancing ...passed 00:05:20.245 00:05:20.245 Run Summary: Type Total Ran Passed Failed Inactive 00:05:20.245 suites 1 1 n/a 0 0 00:05:20.245 tests 2 2 2 0 0 00:05:20.245 asserts 2856 2856 2856 0 n/a 00:05:20.245 00:05:20.245 Elapsed time = 0.004 seconds 00:05:20.245 21:27:40 -- unit/unittest.sh@26 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/bdev_zone.c/bdev_zone_ut 00:05:20.245 00:05:20.245 00:05:20.245 CUnit - A unit testing framework for C - Version 2.1-3 00:05:20.245 http://cunit.sourceforge.net/ 00:05:20.245 00:05:20.245 00:05:20.245 Suite: zone 00:05:20.245 Test: test_zone_get_operation ...passed 00:05:20.245 Test: test_bdev_zone_get_info ...passed 00:05:20.245 Test: test_bdev_zone_management ...passed 00:05:20.245 Test: test_bdev_zone_append ...passed 00:05:20.245 Test: test_bdev_zone_append_with_md ...passed 00:05:20.245 Test: test_bdev_zone_appendv ...passed 00:05:20.245 Test: test_bdev_zone_appendv_with_md ...passed 00:05:20.245 Test: test_bdev_io_get_append_location ...passed 00:05:20.245 00:05:20.245 Run Summary: Type Total Ran Passed Failed Inactive 00:05:20.245 suites 1 1 n/a 0 0 00:05:20.245 tests 8 8 8 0 0 00:05:20.245 asserts 94 94 94 0 n/a 00:05:20.245 00:05:20.245 Elapsed time = 0.001 seconds 00:05:20.245 21:27:40 -- unit/unittest.sh@27 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/gpt/gpt.c/gpt_ut 00:05:20.245 00:05:20.245 00:05:20.245 CUnit - A unit testing framework for C - Version 2.1-3 00:05:20.245 http://cunit.sourceforge.net/ 00:05:20.245 00:05:20.245 00:05:20.245 Suite: gpt_parse 00:05:20.245 Test: test_parse_mbr_and_primary ...[2024-12-06 21:27:40.623604] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 259:gpt_parse_mbr: *ERROR*: Gpt and the related buffer should not be NULL 00:05:20.245 [2024-12-06 21:27:40.623870] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 259:gpt_parse_mbr: *ERROR*: Gpt and the related buffer should not be NULL 00:05:20.245 [2024-12-06 21:27:40.624004] 
/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 165:gpt_read_header: *ERROR*: head_size=1633771873 00:05:20.246 [2024-12-06 21:27:40.624056] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 279:gpt_parse_partition_table: *ERROR*: Failed to read gpt header 00:05:20.246 [2024-12-06 21:27:40.624113] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 88:gpt_read_partitions: *ERROR*: Num_partition_entries=1633771873 which exceeds max=128 00:05:20.246 [2024-12-06 21:27:40.624172] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 285:gpt_parse_partition_table: *ERROR*: Failed to read gpt partitions 00:05:20.246 passed 00:05:20.246 Test: test_parse_secondary ...[2024-12-06 21:27:40.625036] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 165:gpt_read_header: *ERROR*: head_size=1633771873 00:05:20.246 [2024-12-06 21:27:40.625079] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 279:gpt_parse_partition_table: *ERROR*: Failed to read gpt header 00:05:20.246 [2024-12-06 21:27:40.625124] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 88:gpt_read_partitions: *ERROR*: Num_partition_entries=1633771873 which exceeds max=128 00:05:20.246 [2024-12-06 21:27:40.625153] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 285:gpt_parse_partition_table: *ERROR*: Failed to read gpt partitions 00:05:20.246 passed 00:05:20.246 Test: test_check_mbr ...[2024-12-06 21:27:40.625985] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 259:gpt_parse_mbr: *ERROR*: Gpt and the related buffer should not be NULL 00:05:20.246 [2024-12-06 21:27:40.626033] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 259:gpt_parse_mbr: *ERROR*: Gpt and the related buffer should not be NULL 00:05:20.246 passed 00:05:20.246 Test: test_read_header ...[2024-12-06 21:27:40.626159] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 165:gpt_read_header: *ERROR*: head_size=600 00:05:20.246 [2024-12-06 21:27:40.626214] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 177:gpt_read_header: *ERROR*: head crc32 does not match, provided=584158336, calculated=3316781438 00:05:20.246 [2024-12-06 21:27:40.626272] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 184:gpt_read_header: *ERROR*: signature did not match 00:05:20.246 [2024-12-06 21:27:40.626318] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 191:gpt_read_header: *ERROR*: head my_lba(7016996765293437281) != expected(1) 00:05:20.246 [2024-12-06 21:27:40.626378] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 135:gpt_lba_range_check: *ERROR*: Head's usable_lba_end(7016996765293437281) > lba_end(0) 00:05:20.246 passed 00:05:20.246 Test: test_read_partitions ...[2024-12-06 21:27:40.626420] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 197:gpt_read_header: *ERROR*: lba range check error 00:05:20.246 [2024-12-06 21:27:40.626563] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 88:gpt_read_partitions: *ERROR*: Num_partition_entries=256 which exceeds max=128 00:05:20.246 [2024-12-06 21:27:40.626615] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 95:gpt_read_partitions: *ERROR*: Partition_entry_size(0) != expected(80) 00:05:20.246 [2024-12-06 21:27:40.626660] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 59:gpt_get_partitions_buf: *ERROR*: Buffer size is not enough 00:05:20.246 [2024-12-06 21:27:40.626684] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 105:gpt_read_partitions: *ERROR*: Failed to get gpt partitions buf 00:05:20.246 [2024-12-06 21:27:40.627101] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 113:gpt_read_partitions: *ERROR*:
GPT partition entry array crc32 did not match 00:05:20.246 passed 00:05:20.246 00:05:20.246 Run Summary: Type Total Ran Passed Failed Inactive 00:05:20.246 suites 1 1 n/a 0 0 00:05:20.246 tests 5 5 5 0 0 00:05:20.246 asserts 33 33 33 0 n/a 00:05:20.246 00:05:20.246 Elapsed time = 0.004 seconds 00:05:20.246 21:27:40 -- unit/unittest.sh@28 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/part.c/part_ut 00:05:20.246 00:05:20.246 00:05:20.246 CUnit - A unit testing framework for C - Version 2.1-3 00:05:20.246 http://cunit.sourceforge.net/ 00:05:20.246 00:05:20.246 00:05:20.246 Suite: bdev_part 00:05:20.246 Test: part_test ...[2024-12-06 21:27:40.661580] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:4553:bdev_name_add: *ERROR*: Bdev name test1 already exists 00:05:20.246 passed 00:05:20.246 Test: part_free_test ...passed 00:05:20.246 Test: part_get_io_channel_test ...passed 00:05:20.246 Test: part_construct_ext ...passed 00:05:20.246 00:05:20.246 Run Summary: Type Total Ran Passed Failed Inactive 00:05:20.246 suites 1 1 n/a 0 0 00:05:20.246 tests 4 4 4 0 0 00:05:20.246 asserts 48 48 48 0 n/a 00:05:20.246 00:05:20.246 Elapsed time = 0.040 seconds 00:05:20.246 21:27:40 -- unit/unittest.sh@29 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/scsi_nvme.c/scsi_nvme_ut 00:05:20.246 00:05:20.246 00:05:20.246 CUnit - A unit testing framework for C - Version 2.1-3 00:05:20.246 http://cunit.sourceforge.net/ 00:05:20.246 00:05:20.246 00:05:20.246 Suite: scsi_nvme_suite 00:05:20.246 Test: scsi_nvme_translate_test ...passed 00:05:20.246 00:05:20.246 Run Summary: Type Total Ran Passed Failed Inactive 00:05:20.246 suites 1 1 n/a 0 0 00:05:20.246 tests 1 1 1 0 0 00:05:20.246 asserts 104 104 104 0 n/a 00:05:20.246 00:05:20.246 Elapsed time = 0.000 seconds 00:05:20.508 21:27:40 -- unit/unittest.sh@30 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/vbdev_lvol.c/vbdev_lvol_ut 00:05:20.509 00:05:20.509 00:05:20.509 CUnit - A unit testing framework for C - Version 2.1-3 00:05:20.509 http://cunit.sourceforge.net/ 00:05:20.509 00:05:20.509 00:05:20.509 Suite: lvol 00:05:20.509 Test: ut_lvs_init ...[2024-12-06 21:27:40.769349] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c: 180:_vbdev_lvs_create_cb: *ERROR*: Cannot create lvol store bdev 00:05:20.509 [2024-12-06 21:27:40.769687] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c: 264:vbdev_lvs_create: *ERROR*: Cannot create blobstore device 00:05:20.509 passed 00:05:20.509 Test: ut_lvol_init ...passed 00:05:20.509 Test: ut_lvol_snapshot ...passed 00:05:20.509 Test: ut_lvol_clone ...passed 00:05:20.509 Test: ut_lvs_destroy ...passed 00:05:20.509 Test: ut_lvs_unload ...passed 00:05:20.509 Test: ut_lvol_resize ...[2024-12-06 21:27:40.770984] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1391:vbdev_lvol_resize: *ERROR*: lvol does not exist 00:05:20.509 passed 00:05:20.509 Test: ut_lvol_set_read_only ...passed 00:05:20.509 Test: ut_lvol_hotremove ...passed 00:05:20.509 Test: ut_vbdev_lvol_get_io_channel ...passed 00:05:20.509 Test: ut_vbdev_lvol_io_type_supported ...passed 00:05:20.509 Test: ut_lvol_read_write ...passed 00:05:20.509 Test: ut_vbdev_lvol_submit_request ...passed 00:05:20.509 Test: ut_lvol_examine_config ...passed 00:05:20.509 Test: ut_lvol_examine_disk ...[2024-12-06 21:27:40.771454] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1533:_vbdev_lvs_examine_finish: *ERROR*: Error opening lvol UNIT_TEST_UUID 00:05:20.509 passed 00:05:20.509 Test: ut_lvol_rename ...[2024-12-06 21:27:40.772298] 
/home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c: 105:_vbdev_lvol_change_bdev_alias: *ERROR*: cannot add alias 'lvs/new_lvol_name' 00:05:20.509 passed 00:05:20.509 Test: ut_bdev_finish ...[2024-12-06 21:27:40.772346] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1341:vbdev_lvol_rename: *ERROR*: renaming lvol to 'new_lvol_name' does not succeed 00:05:20.509 passed 00:05:20.509 Test: ut_lvs_rename ...passed 00:05:20.509 Test: ut_lvol_seek ...passed 00:05:20.509 Test: ut_esnap_dev_create ...[2024-12-06 21:27:40.772884] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1868:vbdev_lvol_esnap_dev_create: *ERROR*: lvol : NULL esnap ID 00:05:20.509 [2024-12-06 21:27:40.772941] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1874:vbdev_lvol_esnap_dev_create: *ERROR*: lvol : Invalid esnap ID length (36) 00:05:20.509 [2024-12-06 21:27:40.772969] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1879:vbdev_lvol_esnap_dev_create: *ERROR*: lvol : Invalid esnap ID: not a UUID 00:05:20.509 [2024-12-06 21:27:40.773017] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1900:vbdev_lvol_esnap_dev_create: *ERROR*: lvol : unable to claim esnap bdev 'a27fd8fe-d4b9-431e-a044-271016228ce4': -1 00:05:20.509 passed 00:05:20.509 Test: ut_lvol_esnap_clone_bad_args ...[2024-12-06 21:27:40.773111] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1277:vbdev_lvol_create_bdev_clone: *ERROR*: lvol store not specified 00:05:20.509 [2024-12-06 21:27:40.773133] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1284:vbdev_lvol_create_bdev_clone: *ERROR*: bdev '255f4236-9427-42d0-a9d1-aa17f37dd8db' could not be opened: error -19 00:05:20.509 passed 00:05:20.509 00:05:20.509 Run Summary: Type Total Ran Passed Failed Inactive 00:05:20.509 suites 1 1 n/a 0 0 00:05:20.509 tests 21 21 21 0 0 00:05:20.509 asserts 712 712 712 0 n/a 00:05:20.509 00:05:20.509 Elapsed time = 0.004 seconds 00:05:20.509 21:27:40 -- unit/unittest.sh@31 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/vbdev_zone_block.c/vbdev_zone_block_ut 00:05:20.509 00:05:20.509 00:05:20.509 CUnit - A unit testing framework for C - Version 2.1-3 00:05:20.509 http://cunit.sourceforge.net/ 00:05:20.509 00:05:20.509 00:05:20.509 Suite: zone_block 00:05:20.509 Test: test_zone_block_create ...passed 00:05:20.509 Test: test_zone_block_create_invalid ...[2024-12-06 21:27:40.827897] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 624:zone_block_insert_name: *ERROR*: base bdev Nvme0n1 already claimed 00:05:20.509 [2024-12-06 21:27:40.828125] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block_rpc.c: 58:rpc_zone_block_create: *ERROR*: Failed to create block zoned vbdev: File exists[2024-12-06 21:27:40.828280] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 721:zone_block_register: *ERROR*: Base bdev zone_dev1 is already a zoned bdev 00:05:20.509 [2024-12-06 21:27:40.828315] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block_rpc.c: 58:rpc_zone_block_create: *ERROR*: Failed to create block zoned vbdev: File exists[2024-12-06 21:27:40.828486] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 860:vbdev_zone_block_create: *ERROR*: Zone capacity can't be 0 00:05:20.509 [2024-12-06 21:27:40.828530] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block_rpc.c: 58:rpc_zone_block_create: *ERROR*: Failed to create block zoned vbdev: Invalid argument[2024-12-06 21:27:40.828621] 
/home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 865:vbdev_zone_block_create: *ERROR*: Optimal open zones can't be 0 00:05:20.509 [2024-12-06 21:27:40.828647] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block_rpc.c: 58:rpc_zone_block_create: *ERROR*: Failed to create block zoned vbdev: Invalid argument passed 00:05:20.509 Test: test_get_zone_info ...[2024-12-06 21:27:40.829265] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:05:20.509 [2024-12-06 21:27:40.829355] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:05:20.509 passed 00:05:20.509 Test: test_supported_io_types ...[2024-12-06 21:27:40.829416] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:05:20.509 passed 00:05:20.509 Test: test_reset_zone ...[2024-12-06 21:27:40.830250] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:05:20.509 [2024-12-06 21:27:40.830327] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:05:20.509 passed 00:05:20.509 Test: test_open_zone ...[2024-12-06 21:27:40.830759] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:05:20.509 [2024-12-06 21:27:40.831602] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:05:20.509 [2024-12-06 21:27:40.831684] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:05:20.509 passed 00:05:20.509 Test: test_zone_write ...[2024-12-06 21:27:40.832091] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 391:zone_block_write: *ERROR*: Trying to write to zone in invalid state 2 00:05:20.509 [2024-12-06 21:27:40.832136] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:05:20.509 [2024-12-06 21:27:40.832228] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 378:zone_block_write: *ERROR*: Trying to write to invalid zone (lba 0x5000) 00:05:20.509 [2024-12-06 21:27:40.832257] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:05:20.509 [2024-12-06 21:27:40.837867] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 401:zone_block_write: *ERROR*: Trying to write to zone with invalid address (lba 0x407, wp 0x405) 00:05:20.509 [2024-12-06 21:27:40.837929] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission!
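The zone_block write failures above all reduce to a handful of write-pointer checks: the zone must be in a writable state, the target LBA must fall inside the zone, the LBA must equal the current write pointer, and the request must not run past the zone's capacity. A minimal standalone sketch of that style of validation (types and names are illustrative, not SPDK's internal structures):

    #include <stdint.h>
    #include <stdio.h>

    struct zone {
        uint64_t start_lba;   /* first LBA of the zone */
        uint64_t capacity;    /* writable LBAs in the zone */
        uint64_t write_ptr;   /* next LBA that may be written */
        int      writable;    /* zone is in an open/writable state */
    };

    /* Returns 0 if the write is acceptable, a negative code otherwise. */
    static int zone_write_check(const struct zone *z, uint64_t lba, uint64_t len)
    {
        if (!z->writable)
            return -1;    /* "write to zone in invalid state" */
        if (lba < z->start_lba || lba >= z->start_lba + z->capacity)
            return -2;    /* "write to invalid zone (lba 0x5000)" */
        if (lba != z->write_ptr)
            return -3;    /* "invalid address (lba 0x407, wp 0x405)" */
        if (lba + len > z->start_lba + z->capacity)
            return -4;    /* "write exceeds zone capacity" */
        return 0;
    }

    int main(void)
    {
        struct zone z = { .start_lba = 0x400, .capacity = 0x400,
                          .write_ptr = 0x405, .writable = 1 };
        printf("%d\n", zone_write_check(&z, 0x407, 0x10));  /* prints -3 */
        return 0;
    }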
00:05:20.509 [2024-12-06 21:27:40.838017] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 401:zone_block_write: *ERROR*: Trying to write to zone with invalid address (lba 0x400, wp 0x405) 00:05:20.509 [2024-12-06 21:27:40.838039] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:05:20.509 [2024-12-06 21:27:40.843518] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 410:zone_block_write: *ERROR*: Write exceeds zone capacity (lba 0x3f0, len 0x20, wp 0x3f0) 00:05:20.509 [2024-12-06 21:27:40.843569] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:05:20.509 passed 00:05:20.509 Test: test_zone_read ...[2024-12-06 21:27:40.843986] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 465:zone_block_read: *ERROR*: Read exceeds zone capacity (lba 0x4ff8, len 0x10) 00:05:20.509 [2024-12-06 21:27:40.844029] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:05:20.510 [2024-12-06 21:27:40.844093] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 460:zone_block_read: *ERROR*: Trying to read from invalid zone (lba 0x5000) 00:05:20.510 [2024-12-06 21:27:40.844126] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:05:20.510 [2024-12-06 21:27:40.844611] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 465:zone_block_read: *ERROR*: Read exceeds zone capacity (lba 0x3f8, len 0x10) 00:05:20.510 [2024-12-06 21:27:40.844673] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:05:20.510 passed 00:05:20.510 Test: test_close_zone ...[2024-12-06 21:27:40.845039] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:05:20.510 [2024-12-06 21:27:40.845128] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:05:20.510 [2024-12-06 21:27:40.845346] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:05:20.510 [2024-12-06 21:27:40.845410] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:05:20.510 passed 00:05:20.510 Test: test_finish_zone ...[2024-12-06 21:27:40.846011] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:05:20.510 [2024-12-06 21:27:40.846074] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 
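test_close_zone, test_finish_zone, and the test_append_zone case that follows exercise zone state transitions: only an open zone can be closed, finishing a zone forces the write pointer to the end, and an append allocates at the write pointer rather than at a caller-chosen LBA. A compilable sketch of that state machine, again with illustrative names following the general ZNS model rather than SPDK's internals:

    #include <stdint.h>
    #include <stdio.h>

    enum zstate { Z_EMPTY, Z_OPEN, Z_CLOSED, Z_FULL };

    struct zns_zone { enum zstate state; uint64_t start, cap, wp; };

    static int zone_close(struct zns_zone *z)
    {
        if (z->state != Z_OPEN) return -1;   /* only open zones close */
        z->state = Z_CLOSED;
        return 0;
    }

    static int zone_finish(struct zns_zone *z)
    {
        z->wp = z->start + z->cap;           /* wp jumps to end of zone */
        z->state = Z_FULL;
        return 0;
    }

    /* Append ignores the caller's LBA and writes at the wp. */
    static int zone_append(struct zns_zone *z, uint64_t len, uint64_t *lba_out)
    {
        if (z->state == Z_FULL || z->wp + len > z->start + z->cap)
            return -1;                       /* append exceeds capacity */
        *lba_out = z->wp;
        z->wp += len;
        z->state = (z->wp == z->start + z->cap) ? Z_FULL : Z_OPEN;
        return 0;
    }

    int main(void)
    {
        struct zns_zone z = { Z_OPEN, 0x400, 0x400, 0x400 };
        uint64_t lba;
        printf("append: %d (lba 0x%llx)\n", zone_append(&z, 0x20, &lba),
               (unsigned long long)lba);
        zone_finish(&z);
        printf("close after finish: %d\n", zone_close(&z));  /* -1 */
        return 0;
    }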
00:05:20.510 passed 00:05:20.510 Test: test_append_zone ...[2024-12-06 21:27:40.846502] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 391:zone_block_write: *ERROR*: Trying to write to zone in invalid state 2 00:05:20.510 [2024-12-06 21:27:40.846549] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:05:20.510 [2024-12-06 21:27:40.846579] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 378:zone_block_write: *ERROR*: Trying to write to invalid zone (lba 0x5000) 00:05:20.510 [2024-12-06 21:27:40.846603] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:05:20.510 [2024-12-06 21:27:40.859705] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 410:zone_block_write: *ERROR*: Write exceeds zone capacity (lba 0x3f0, len 0x20, wp 0x3f0) 00:05:20.510 [2024-12-06 21:27:40.859773] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:05:20.510 passed 00:05:20.510 00:05:20.510 Run Summary: Type Total Ran Passed Failed Inactive 00:05:20.510 suites 1 1 n/a 0 0 00:05:20.510 tests 11 11 11 0 0 00:05:20.510 asserts 3437 3437 3437 0 n/a 00:05:20.510 00:05:20.510 Elapsed time = 0.033 seconds 00:05:20.510 21:27:40 -- unit/unittest.sh@32 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/mt/bdev.c/bdev_ut 00:05:20.510 00:05:20.510 00:05:20.510 CUnit - A unit testing framework for C - Version 2.1-3 00:05:20.510 http://cunit.sourceforge.net/ 00:05:20.510 00:05:20.510 00:05:20.510 Suite: bdev 00:05:20.510 Test: basic ...[2024-12-06 21:27:40.946771] thread.c:2361:spdk_get_io_channel: *ERROR*: could not create io_channel for io_device bdev_ut_bdev (0x62b4f11d6ec1): Operation not permitted (rc=-1) 00:05:20.510 [2024-12-06 21:27:40.947082] thread.c:2361:spdk_get_io_channel: *ERROR*: could not create io_channel for io_device 0x5130000003c0 (0x62b4f11d6e80): Operation not permitted (rc=-1) 00:05:20.510 [2024-12-06 21:27:40.947136] thread.c:2361:spdk_get_io_channel: *ERROR*: could not create io_channel for io_device bdev_ut_bdev (0x62b4f11d6ec1): Operation not permitted (rc=-1) 00:05:20.510 passed 00:05:20.773 Test: unregister_and_close ...passed 00:05:20.773 Test: unregister_and_close_different_threads ...passed 00:05:20.773 Test: basic_qos ...passed 00:05:20.773 Test: put_channel_during_reset ...passed 00:05:20.773 Test: aborted_reset ...passed 00:05:20.773 Test: aborted_reset_no_outstanding_io ...passed 00:05:20.773 Test: io_during_reset ...passed 00:05:20.773 Test: reset_completions ...passed 00:05:21.031 Test: io_during_qos_queue ...passed 00:05:21.031 Test: io_during_qos_reset ...passed 00:05:21.031 Test: enomem ...passed 00:05:21.031 Test: enomem_multi_bdev ...passed 00:05:21.031 Test: enomem_multi_bdev_unregister ...passed 00:05:21.031 Test: enomem_multi_io_target ...passed 00:05:21.031 Test: qos_dynamic_enable ...passed 00:05:21.031 Test: bdev_histograms_mt ...passed 00:05:21.031 Test: bdev_set_io_timeout_mt ...[2024-12-06 21:27:41.495361] thread.c: 467:spdk_thread_lib_fini: *ERROR*: io_device 0x5130000003c0 not unregistered 00:05:21.031 passed 00:05:21.031 Test: lock_lba_range_then_submit_io ...[2024-12-06 21:27:41.503475] thread.c:2165:spdk_io_device_register: *ERROR*: io_device 0x62b4f11d6e40 already registered (old:0x5130000003c0 new:0x513000000c80) 00:05:21.031 
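The thread.c errors in the basic test above ("could not create io_channel", "io_device ... already registered", "io_device ... not unregistered") come out of SPDK's io_device/io_channel bookkeeping: a device hands out at most one channel per thread, channels are reference counted, and a device may not be torn down while channels remain. A simplified, single-threaded sketch of that refcounting (not SPDK's implementation):

    #include <stdio.h>
    #include <stdlib.h>

    struct io_channel { int refcnt; };

    /* One slot per (device, thread); first get creates, later gets share. */
    static struct io_channel *chan_get(struct io_channel **slot)
    {
        if (*slot == NULL)
            *slot = calloc(1, sizeof(**slot));
        (*slot)->refcnt++;
        return *slot;
    }

    static void chan_put(struct io_channel **slot)
    {
        if (*slot && --(*slot)->refcnt == 0) {
            free(*slot);
            *slot = NULL;   /* channel destroyed on last reference */
        }
    }

    /* Unregister must fail while any channel is still alive. */
    static int dev_unregister(const struct io_channel *chan)
    {
        return chan != NULL ? -1 : 0;
    }

    int main(void)
    {
        struct io_channel *slot = NULL;
        chan_get(&slot);
        printf("unregister while open: %d\n", dev_unregister(slot)); /* -1 */
        chan_put(&slot);
        printf("unregister when idle: %d\n", dev_unregister(slot));  /*  0 */
        return 0;
    }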
passed 00:05:21.290 Test: unregister_during_reset ...passed 00:05:21.290 Test: event_notify_and_close ...passed 00:05:21.290 Test: unregister_and_qos_poller ...passed 00:05:21.290 Suite: bdev_wrong_thread 00:05:21.290 Test: spdk_bdev_register_wt ...[2024-12-06 21:27:41.600233] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8364:spdk_bdev_register: *ERROR*: Cannot examine bdev wt_bdev on thread 0x518000001480 (0x518000001480) 00:05:21.290 passed 00:05:21.290 Test: spdk_bdev_examine_wt ...[2024-12-06 21:27:41.600818] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c: 793:spdk_bdev_examine: *ERROR*: Cannot examine bdev ut_bdev_wt on thread 0x518000001480 (0x518000001480) 00:05:21.290 passed 00:05:21.290 00:05:21.290 Run Summary: Type Total Ran Passed Failed Inactive 00:05:21.290 suites 2 2 n/a 0 0 00:05:21.290 tests 24 24 24 0 0 00:05:21.290 asserts 621 621 621 0 n/a 00:05:21.290 00:05:21.290 Elapsed time = 0.668 seconds 00:05:21.290 00:05:21.290 real 0m2.813s 00:05:21.290 user 0m1.298s 00:05:21.290 sys 0m1.516s 00:05:21.290 21:27:41 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:21.290 21:27:41 -- common/autotest_common.sh@10 -- # set +x 00:05:21.290 ************************************ 00:05:21.290 END TEST unittest_bdev 00:05:21.290 ************************************ 00:05:21.290 21:27:41 -- unit/unittest.sh@189 -- # grep -q '#define SPDK_CONFIG_CRYPTO 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:05:21.291 21:27:41 -- unit/unittest.sh@194 -- # grep -q '#define SPDK_CONFIG_VBDEV_COMPRESS 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:05:21.291 21:27:41 -- unit/unittest.sh@199 -- # grep -q '#define SPDK_CONFIG_DPDK_COMPRESSDEV 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:05:21.291 21:27:41 -- unit/unittest.sh@203 -- # grep -q '#define SPDK_CONFIG_RAID5F 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:05:21.291 21:27:41 -- unit/unittest.sh@204 -- # run_test unittest_bdev_raid5f /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/raid/raid5f.c/raid5f_ut 00:05:21.291 21:27:41 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:21.291 21:27:41 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:21.291 21:27:41 -- common/autotest_common.sh@10 -- # set +x 00:05:21.291 ************************************ 00:05:21.291 START TEST unittest_bdev_raid5f 00:05:21.291 ************************************ 00:05:21.291 21:27:41 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/raid/raid5f.c/raid5f_ut 00:05:21.291 00:05:21.291 00:05:21.291 CUnit - A unit testing framework for C - Version 2.1-3 00:05:21.291 http://cunit.sourceforge.net/ 00:05:21.291 00:05:21.291 00:05:21.291 Suite: raid5f 00:05:21.291 Test: test_raid5f_start ...passed 00:05:21.858 Test: test_raid5f_submit_read_request ...passed 00:05:21.858 Test: test_raid5f_stripe_request_map_iovecs ...passed 00:05:25.144 Test: test_raid5f_submit_full_stripe_write_request ...passed 00:05:40.026 Test: test_raid5f_chunk_write_error ...passed 00:05:48.208 Test: test_raid5f_chunk_write_error_with_enomem ...passed 00:05:50.121 Test: test_raid5f_submit_full_stripe_write_request_degraded ...passed 00:06:16.681 Test: test_raid5f_submit_read_request_degraded ...passed 00:06:16.681 00:06:16.681 Run Summary: Type Total Ran Passed Failed Inactive 00:06:16.681 suites 1 1 n/a 0 0 00:06:16.681 tests 8 8 8 0 0 00:06:16.681 asserts 351864 351864 351864 0 n/a 00:06:16.681 00:06:16.681 Elapsed time = 52.808 seconds 00:06:16.681 00:06:16.681 real 0m52.911s 00:06:16.681 user 
0m50.509s 00:06:16.681 sys 0m2.378s 00:06:16.681 21:28:34 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:16.681 ************************************ 00:06:16.681 END TEST unittest_bdev_raid5f 00:06:16.681 ************************************ 00:06:16.681 21:28:34 -- common/autotest_common.sh@10 -- # set +x 00:06:16.681 21:28:34 -- unit/unittest.sh@207 -- # run_test unittest_blob_blobfs unittest_blob 00:06:16.681 21:28:34 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:16.681 21:28:34 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:16.681 21:28:34 -- common/autotest_common.sh@10 -- # set +x 00:06:16.681 ************************************ 00:06:16.681 START TEST unittest_blob_blobfs 00:06:16.681 ************************************ 00:06:16.681 21:28:34 -- common/autotest_common.sh@1114 -- # unittest_blob 00:06:16.681 21:28:34 -- unit/unittest.sh@38 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/unit/lib/blob/blob.c/blob_ut ]] 00:06:16.681 21:28:34 -- unit/unittest.sh@39 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/blob/blob.c/blob_ut 00:06:16.681 00:06:16.681 00:06:16.681 CUnit - A unit testing framework for C - Version 2.1-3 00:06:16.681 http://cunit.sourceforge.net/ 00:06:16.681 00:06:16.681 00:06:16.681 Suite: blob_nocopy_noextent 00:06:16.681 Test: blob_init ...[2024-12-06 21:28:34.691788] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5267:spdk_bs_init: *ERROR*: unsupported dev block length of 500 00:06:16.681 passed 00:06:16.681 Test: blob_thin_provision ...passed 00:06:16.681 Test: blob_read_only ...passed 00:06:16.681 Test: bs_load ...[2024-12-06 21:28:34.773335] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c: 896:blob_parse: *ERROR*: Blobid (0x0) doesn't match what's in metadata (0x100000000) 00:06:16.681 passed 00:06:16.681 Test: bs_load_custom_cluster_size ...passed 00:06:16.681 Test: bs_load_after_failed_grow ...passed 00:06:16.681 Test: bs_cluster_sz ...[2024-12-06 21:28:34.796050] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3603:bs_opts_verify: *ERROR*: Blobstore options cannot be set to 0 00:06:16.681 [2024-12-06 21:28:34.796542] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5398:spdk_bs_init: *ERROR*: Blobstore metadata cannot use more clusters than is available, please decrease number of pages reserved for metadata or increase cluster size. 
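The bs_cluster_sz failures here ("options cannot be set to 0", the metadata-reservation error, and the "Cluster size 4095" case that follows) are plain option validation: sizes may not be zero, a cluster must hold at least one 4 KiB metadata page, and the pages reserved for metadata must still fit on the device. A hedged sketch of such a check (field names are illustrative; the real spdk_bs_opts carries more fields):

    #include <stdint.h>
    #include <stdio.h>

    #define PAGE_SZ 4096u

    struct bs_opts {
        uint32_t cluster_sz;     /* bytes per cluster */
        uint32_t num_md_pages;   /* pages reserved for metadata */
    };

    static int bs_opts_check(const struct bs_opts *o, uint64_t dev_bytes)
    {
        if (o->cluster_sz == 0 || o->num_md_pages == 0)
            return -1;   /* "Blobstore options cannot be set to 0" */
        if (o->cluster_sz < PAGE_SZ)
            return -2;   /* "Cluster size 4095 is smaller than page size 4096" */
        uint64_t md_clusters = ((uint64_t)o->num_md_pages * PAGE_SZ
                                + o->cluster_sz - 1) / o->cluster_sz;
        if (md_clusters > dev_bytes / o->cluster_sz)
            return -3;   /* metadata would use more clusters than exist */
        return 0;
    }

    int main(void)
    {
        struct bs_opts bad = { .cluster_sz = 4095, .num_md_pages = 1 };
        printf("%d\n", bs_opts_check(&bad, 1u << 20));   /* -2, as in the log */
        return 0;
    }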
00:06:16.681 [2024-12-06 21:28:34.796646] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3662:bs_alloc: *ERROR*: Cluster size 4095 is smaller than page size 4096 00:06:16.681 passed 00:06:16.681 Test: bs_resize_md ...passed 00:06:16.681 Test: bs_destroy ...passed 00:06:16.681 Test: bs_type ...passed 00:06:16.681 Test: bs_super_block ...passed 00:06:16.681 Test: bs_test_recover_cluster_count ...passed 00:06:16.681 Test: bs_grow_live ...passed 00:06:16.681 Test: bs_grow_live_no_space ...passed 00:06:16.681 Test: bs_test_grow ...passed 00:06:16.681 Test: blob_serialize_test ...passed 00:06:16.681 Test: super_block_crc ...passed 00:06:16.681 Test: blob_thin_prov_write_count_io ...passed 00:06:16.681 Test: bs_load_iter_test ...passed 00:06:16.681 Test: blob_relations ...[2024-12-06 21:28:34.922256] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:06:16.681 [2024-12-06 21:28:34.922348] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:06:16.681 [2024-12-06 21:28:34.923336] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:06:16.681 [2024-12-06 21:28:34.923397] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:06:16.681 passed 00:06:16.681 Test: blob_relations2 ...[2024-12-06 21:28:34.934915] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:06:16.681 [2024-12-06 21:28:34.935004] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:06:16.681 [2024-12-06 21:28:34.935036] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:06:16.681 [2024-12-06 21:28:34.935050] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:06:16.681 [2024-12-06 21:28:34.936824] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:06:16.681 [2024-12-06 21:28:34.936910] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:06:16.681 [2024-12-06 21:28:34.937350] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:06:16.681 [2024-12-06 21:28:34.937389] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:06:16.681 passed 00:06:16.681 Test: blob_relations3 ...passed 00:06:16.681 Test: blobstore_clean_power_failure ...passed 00:06:16.682 Test: blob_delete_snapshot_power_failure ...[2024-12-06 21:28:35.047436] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 1 read failed for blobid 0x100000001: -5 00:06:16.682 [2024-12-06 21:28:35.057255] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5 00:06:16.682 [2024-12-06 21:28:35.057381] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7421:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:06:16.682 [2024-12-06 21:28:35.057407] 
/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:06:16.682 [2024-12-06 21:28:35.067433] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 1 read failed for blobid 0x100000001: -5 00:06:16.682 [2024-12-06 21:28:35.067544] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1397:blob_load_snapshot_cpl: *ERROR*: Snapshot fail 00:06:16.682 [2024-12-06 21:28:35.067570] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7421:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:06:16.682 [2024-12-06 21:28:35.067593] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:06:16.682 [2024-12-06 21:28:35.078053] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7351:delete_snapshot_sync_snapshot_xattr_cpl: *ERROR*: Failed to sync MD with xattr on blob 00:06:16.682 [2024-12-06 21:28:35.078164] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:06:16.682 [2024-12-06 21:28:35.088344] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7223:delete_snapshot_sync_clone_cpl: *ERROR*: Failed to sync MD on clone 00:06:16.682 [2024-12-06 21:28:35.088521] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:06:16.682 [2024-12-06 21:28:35.098596] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7167:delete_snapshot_sync_snapshot_cpl: *ERROR*: Failed to sync MD on blob 00:06:16.682 [2024-12-06 21:28:35.098702] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:06:16.682 passed 00:06:16.682 Test: blob_create_snapshot_power_failure ...[2024-12-06 21:28:35.127119] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5 00:06:16.682 [2024-12-06 21:28:35.143806] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 1 read failed for blobid 0x100000001: -5 00:06:16.682 [2024-12-06 21:28:35.151795] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6215:bs_clone_snapshot_origblob_cleanup: *ERROR*: Cleanup error -5 00:06:16.682 passed 00:06:16.682 Test: blob_io_unit ...passed 00:06:16.682 Test: blob_io_unit_compatibility ...passed 00:06:16.682 Test: blob_ext_md_pages ...passed 00:06:16.682 Test: blob_esnap_io_4096_4096 ...passed 00:06:16.682 Test: blob_esnap_io_512_512 ...passed 00:06:16.682 Test: blob_esnap_io_4096_512 ...passed 00:06:16.682 Test: blob_esnap_io_512_4096 ...passed 00:06:16.682 Suite: blob_bs_nocopy_noextent 00:06:16.682 Test: blob_open ...passed 00:06:16.682 Test: blob_create ...[2024-12-06 21:28:35.320758] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6096:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -28, size in clusters/size: 65 (clusters) 00:06:16.682 passed 00:06:16.682 Test: blob_create_loop ...passed 00:06:16.682 Test: blob_create_fail ...[2024-12-06 21:28:35.395128] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6096:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:06:16.682 passed 00:06:16.682 Test: blob_create_internal ...passed 00:06:16.682 Test: blob_create_zero_extent ...passed 00:06:16.682 Test: blob_snapshot ...passed 00:06:16.682 Test: blob_clone ...passed 00:06:16.682 Test: blob_inflate ...[2024-12-06 21:28:35.510267] 
/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6873:bs_inflate_blob_open_cpl: *ERROR*: Cannot decouple parent of blob with no parent. 00:06:16.682 passed 00:06:16.682 Test: blob_delete ...passed 00:06:16.682 Test: blob_resize_test ...[2024-12-06 21:28:35.550996] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6972:bs_resize_unfreeze_cpl: *ERROR*: Unfreeze failed, ctx->rc=-28 00:06:16.682 passed 00:06:16.682 Test: channel_ops ...passed 00:06:16.682 Test: blob_super ...passed 00:06:16.682 Test: blob_rw_verify_iov ...passed 00:06:16.682 Test: blob_unmap ...passed 00:06:16.682 Test: blob_iter ...passed 00:06:16.682 Test: blob_parse_md ...passed 00:06:16.682 Test: bs_load_pending_removal ...passed 00:06:16.682 Test: bs_unload ...[2024-12-06 21:28:35.714498] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5655:spdk_bs_unload: *ERROR*: Blobstore still has open blobs 00:06:16.682 passed 00:06:16.682 Test: bs_usable_clusters ...passed 00:06:16.682 Test: blob_crc ...[2024-12-06 21:28:35.755993] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1609:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000 00:06:16.682 [2024-12-06 21:28:35.756176] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1609:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000 00:06:16.682 passed 00:06:16.682 Test: blob_flags ...passed 00:06:16.682 Test: bs_version ...passed 00:06:16.682 Test: blob_set_xattrs_test ...[2024-12-06 21:28:35.818946] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6096:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:06:16.682 [2024-12-06 21:28:35.819043] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6096:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:06:16.682 passed 00:06:16.682 Test: blob_thin_prov_alloc ...passed 00:06:16.682 Test: blob_insert_cluster_msg_test ...passed 00:06:16.682 Test: blob_thin_prov_rw ...passed 00:06:16.682 Test: blob_thin_prov_rle ...passed 00:06:16.682 Test: blob_thin_prov_rw_iov ...passed 00:06:16.682 Test: blob_snapshot_rw ...passed 00:06:16.682 Test: blob_snapshot_rw_iov ...passed 00:06:16.682 Test: blob_inflate_rw ...passed 00:06:16.682 Test: blob_snapshot_freeze_io ...passed 00:06:16.682 Test: blob_operation_split_rw ...passed 00:06:16.682 Test: blob_operation_split_rw_iov ...passed 00:06:16.682 Test: blob_simultaneous_operations ...[2024-12-06 21:28:36.563840] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7534:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:06:16.682 [2024-12-06 21:28:36.563922] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:06:16.682 [2024-12-06 21:28:36.565034] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7534:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:06:16.682 [2024-12-06 21:28:36.565072] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:06:16.682 [2024-12-06 21:28:36.574380] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7534:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:06:16.682 [2024-12-06 21:28:36.574428] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:06:16.682 [2024-12-06 21:28:36.574555] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7534:bs_is_blob_deletable: *ERROR*: Cannot 
remove snapshot because it is open 00:06:16.682 [2024-12-06 21:28:36.574577] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:06:16.682 passed 00:06:16.682 Test: blob_persist_test ...passed 00:06:16.682 Test: blob_decouple_snapshot ...passed 00:06:16.682 Test: blob_seek_io_unit ...passed 00:06:16.682 Test: blob_nested_freezes ...passed 00:06:16.682 Suite: blob_blob_nocopy_noextent 00:06:16.682 Test: blob_write ...passed 00:06:16.682 Test: blob_read ...passed 00:06:16.682 Test: blob_rw_verify ...passed 00:06:16.682 Test: blob_rw_verify_iov_nomem ...passed 00:06:16.682 Test: blob_rw_iov_read_only ...passed 00:06:16.682 Test: blob_xattr ...passed 00:06:16.682 Test: blob_dirty_shutdown ...passed 00:06:16.682 Test: blob_is_degraded ...passed 00:06:16.682 Suite: blob_esnap_bs_nocopy_noextent 00:06:16.682 Test: blob_esnap_create ...passed 00:06:16.682 Test: blob_esnap_thread_add_remove ...passed 00:06:16.682 Test: blob_esnap_clone_snapshot ...passed 00:06:16.682 Test: blob_esnap_clone_inflate ...passed 00:06:16.682 Test: blob_esnap_clone_decouple ...passed 00:06:16.682 Test: blob_esnap_clone_reload ...passed 00:06:16.682 Test: blob_esnap_hotplug ...passed 00:06:16.682 Suite: blob_nocopy_extent 00:06:16.682 Test: blob_init ...[2024-12-06 21:28:37.022544] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5267:spdk_bs_init: *ERROR*: unsupported dev block length of 500 00:06:16.682 passed 00:06:16.682 Test: blob_thin_provision ...passed 00:06:16.682 Test: blob_read_only ...passed 00:06:16.682 Test: bs_load ...[2024-12-06 21:28:37.054333] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c: 896:blob_parse: *ERROR*: Blobid (0x0) doesn't match what's in metadata (0x100000000) 00:06:16.683 passed 00:06:16.683 Test: bs_load_custom_cluster_size ...passed 00:06:16.683 Test: bs_load_after_failed_grow ...passed 00:06:16.683 Test: bs_cluster_sz ...[2024-12-06 21:28:37.071953] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3603:bs_opts_verify: *ERROR*: Blobstore options cannot be set to 0 00:06:16.683 [2024-12-06 21:28:37.072245] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5398:spdk_bs_init: *ERROR*: Blobstore metadata cannot use more clusters than is available, please decrease number of pages reserved for metadata or increase cluster size. 
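The recurring bs_load failure "Blobid (0x0) doesn't match what's in metadata (0x100000000)" is a consistency check: every metadata page records the blob id it belongs to, and parsing rejects a page whose stored id differs from the id being loaded. A standalone sketch of that check (the page layout here is invented for illustration):

    #include <inttypes.h>
    #include <stdint.h>
    #include <stdio.h>

    struct md_page {
        uint64_t id;         /* blob id recorded in the page */
        uint32_t sequence;   /* index of this page in the blob's md chain */
        uint32_t crc;        /* checksum over the page */
    };

    static int blob_parse_check(const struct md_page *pg, uint64_t want_id)
    {
        if (pg->id != want_id) {
            fprintf(stderr,
                    "Blobid (0x%" PRIx64 ") doesn't match metadata (0x%" PRIx64 ")\n",
                    want_id, pg->id);
            return -1;   /* refuse to build a blob from foreign pages */
        }
        return 0;
    }

    int main(void)
    {
        struct md_page pg = { .id = 0x100000000ULL };
        return blob_parse_check(&pg, 0x0) == -1 ? 0 : 1;
    }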
00:06:16.683 [2024-12-06 21:28:37.072313] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3662:bs_alloc: *ERROR*: Cluster size 4095 is smaller than page size 4096 00:06:16.683 passed 00:06:16.683 Test: bs_resize_md ...passed 00:06:16.683 Test: bs_destroy ...passed 00:06:16.683 Test: bs_type ...passed 00:06:16.683 Test: bs_super_block ...passed 00:06:16.683 Test: bs_test_recover_cluster_count ...passed 00:06:16.683 Test: bs_grow_live ...passed 00:06:16.683 Test: bs_grow_live_no_space ...passed 00:06:16.683 Test: bs_test_grow ...passed 00:06:16.683 Test: blob_serialize_test ...passed 00:06:16.683 Test: super_block_crc ...passed 00:06:16.683 Test: blob_thin_prov_write_count_io ...passed 00:06:16.683 Test: bs_load_iter_test ...passed 00:06:16.683 Test: blob_relations ...[2024-12-06 21:28:37.173642] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:06:16.683 [2024-12-06 21:28:37.173770] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:06:16.683 [2024-12-06 21:28:37.175061] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:06:16.683 [2024-12-06 21:28:37.175134] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:06:16.683 passed 00:06:16.941 Test: blob_relations2 ...[2024-12-06 21:28:37.186791] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:06:16.941 [2024-12-06 21:28:37.186938] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:06:16.941 [2024-12-06 21:28:37.186968] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:06:16.941 [2024-12-06 21:28:37.186983] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:06:16.941 [2024-12-06 21:28:37.188671] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:06:16.941 [2024-12-06 21:28:37.188737] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:06:16.941 [2024-12-06 21:28:37.189230] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:06:16.941 [2024-12-06 21:28:37.189317] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:06:16.941 passed 00:06:16.941 Test: blob_relations3 ...passed 00:06:16.941 Test: blobstore_clean_power_failure ...passed 00:06:16.941 Test: blob_delete_snapshot_power_failure ...[2024-12-06 21:28:37.293773] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 2 read failed for blobid 0x100000002: -5 00:06:16.941 [2024-12-06 21:28:37.302120] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1510:blob_load_cpl_extents_cpl: *ERROR*: Extent page read failed: -5 00:06:16.941 [2024-12-06 21:28:37.310665] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5 00:06:16.941 [2024-12-06 21:28:37.310741] 
/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7421:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:06:16.941 [2024-12-06 21:28:37.310769] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:06:16.941 [2024-12-06 21:28:37.320184] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 2 read failed for blobid 0x100000002: -5 00:06:16.941 [2024-12-06 21:28:37.320252] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1397:blob_load_snapshot_cpl: *ERROR*: Snapshot fail 00:06:16.941 [2024-12-06 21:28:37.320277] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7421:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:06:16.941 [2024-12-06 21:28:37.320298] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:06:16.942 [2024-12-06 21:28:37.328879] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1510:blob_load_cpl_extents_cpl: *ERROR*: Extent page read failed: -5 00:06:16.942 [2024-12-06 21:28:37.328962] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1397:blob_load_snapshot_cpl: *ERROR*: Snapshot fail 00:06:16.942 [2024-12-06 21:28:37.328986] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7421:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:06:16.942 [2024-12-06 21:28:37.329009] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:06:16.942 [2024-12-06 21:28:37.337600] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7351:delete_snapshot_sync_snapshot_xattr_cpl: *ERROR*: Failed to sync MD with xattr on blob 00:06:16.942 [2024-12-06 21:28:37.337686] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:06:16.942 [2024-12-06 21:28:37.346237] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7223:delete_snapshot_sync_clone_cpl: *ERROR*: Failed to sync MD on clone 00:06:16.942 [2024-12-06 21:28:37.346346] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:06:16.942 [2024-12-06 21:28:37.354997] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7167:delete_snapshot_sync_snapshot_cpl: *ERROR*: Failed to sync MD on blob 00:06:16.942 [2024-12-06 21:28:37.355085] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:06:16.942 passed 00:06:16.942 Test: blob_create_snapshot_power_failure ...[2024-12-06 21:28:37.379960] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5 00:06:16.942 [2024-12-06 21:28:37.388025] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1510:blob_load_cpl_extents_cpl: *ERROR*: Extent page read failed: -5 00:06:16.942 [2024-12-06 21:28:37.403863] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 2 read failed for blobid 0x100000002: -5 00:06:16.942 [2024-12-06 21:28:37.412328] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6215:bs_clone_snapshot_origblob_cleanup: *ERROR*: Cleanup error -5 00:06:16.942 passed 00:06:17.200 Test: blob_io_unit ...passed 00:06:17.200 Test: blob_io_unit_compatibility ...passed 00:06:17.200 Test: blob_ext_md_pages ...passed 00:06:17.200 Test: blob_esnap_io_4096_4096 ...passed 00:06:17.200 Test: blob_esnap_io_512_512 ...passed 00:06:17.200 Test: blob_esnap_io_4096_512 ...passed 00:06:17.200 Test: 
blob_esnap_io_512_4096 ...passed 00:06:17.200 Suite: blob_bs_nocopy_extent 00:06:17.200 Test: blob_open ...passed 00:06:17.200 Test: blob_create ...[2024-12-06 21:28:37.576604] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6096:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -28, size in clusters/size: 65 (clusters) 00:06:17.200 passed 00:06:17.200 Test: blob_create_loop ...passed 00:06:17.200 Test: blob_create_fail ...[2024-12-06 21:28:37.655197] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6096:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:06:17.200 passed 00:06:17.200 Test: blob_create_internal ...passed 00:06:17.459 Test: blob_create_zero_extent ...passed 00:06:17.459 Test: blob_snapshot ...passed 00:06:17.459 Test: blob_clone ...passed 00:06:17.459 Test: blob_inflate ...[2024-12-06 21:28:37.778223] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6873:bs_inflate_blob_open_cpl: *ERROR*: Cannot decouple parent of blob with no parent. 00:06:17.459 passed 00:06:17.459 Test: blob_delete ...passed 00:06:17.459 Test: blob_resize_test ...[2024-12-06 21:28:37.821364] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6972:bs_resize_unfreeze_cpl: *ERROR*: Unfreeze failed, ctx->rc=-28 00:06:17.459 passed 00:06:17.459 Test: channel_ops ...passed 00:06:17.459 Test: blob_super ...passed 00:06:17.459 Test: blob_rw_verify_iov ...passed 00:06:17.459 Test: blob_unmap ...passed 00:06:17.459 Test: blob_iter ...passed 00:06:17.716 Test: blob_parse_md ...passed 00:06:17.716 Test: bs_load_pending_removal ...passed 00:06:17.716 Test: bs_unload ...[2024-12-06 21:28:37.996197] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5655:spdk_bs_unload: *ERROR*: Blobstore still has open blobs 00:06:17.716 passed 00:06:17.716 Test: bs_usable_clusters ...passed 00:06:17.716 Test: blob_crc ...[2024-12-06 21:28:38.040502] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1609:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000 00:06:17.716 [2024-12-06 21:28:38.040646] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1609:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000 00:06:17.716 passed 00:06:17.716 Test: blob_flags ...passed 00:06:17.716 Test: bs_version ...passed 00:06:17.716 Test: blob_set_xattrs_test ...[2024-12-06 21:28:38.114512] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6096:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:06:17.716 [2024-12-06 21:28:38.114612] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6096:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:06:17.716 passed 00:06:17.973 Test: blob_thin_prov_alloc ...passed 00:06:17.973 Test: blob_insert_cluster_msg_test ...passed 00:06:17.973 Test: blob_thin_prov_rw ...passed 00:06:17.973 Test: blob_thin_prov_rle ...passed 00:06:17.973 Test: blob_thin_prov_rw_iov ...passed 00:06:17.973 Test: blob_snapshot_rw ...passed 00:06:17.973 Test: blob_snapshot_rw_iov ...passed 00:06:18.230 Test: blob_inflate_rw ...passed 00:06:18.230 Test: blob_snapshot_freeze_io ...passed 00:06:18.230 Test: blob_operation_split_rw ...passed 00:06:18.488 Test: blob_operation_split_rw_iov ...passed 00:06:18.488 Test: blob_simultaneous_operations ...[2024-12-06 21:28:38.828834] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7534:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:06:18.488 [2024-12-06 
21:28:38.828932] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:06:18.488 [2024-12-06 21:28:38.829952] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7534:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:06:18.488 [2024-12-06 21:28:38.829990] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:06:18.488 [2024-12-06 21:28:38.839417] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7534:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:06:18.488 [2024-12-06 21:28:38.839503] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:06:18.488 [2024-12-06 21:28:38.839602] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7534:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:06:18.488 [2024-12-06 21:28:38.839620] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:06:18.488 passed 00:06:18.488 Test: blob_persist_test ...passed 00:06:18.488 Test: blob_decouple_snapshot ...passed 00:06:18.488 Test: blob_seek_io_unit ...passed 00:06:18.488 Test: blob_nested_freezes ...passed 00:06:18.488 Suite: blob_blob_nocopy_extent 00:06:18.488 Test: blob_write ...passed 00:06:18.746 Test: blob_read ...passed 00:06:18.746 Test: blob_rw_verify ...passed 00:06:18.746 Test: blob_rw_verify_iov_nomem ...passed 00:06:18.746 Test: blob_rw_iov_read_only ...passed 00:06:18.746 Test: blob_xattr ...passed 00:06:18.746 Test: blob_dirty_shutdown ...passed 00:06:18.746 Test: blob_is_degraded ...passed 00:06:18.746 Suite: blob_esnap_bs_nocopy_extent 00:06:18.746 Test: blob_esnap_create ...passed 00:06:18.746 Test: blob_esnap_thread_add_remove ...passed 00:06:18.746 Test: blob_esnap_clone_snapshot ...passed 00:06:18.746 Test: blob_esnap_clone_inflate ...passed 00:06:19.004 Test: blob_esnap_clone_decouple ...passed 00:06:19.004 Test: blob_esnap_clone_reload ...passed 00:06:19.004 Test: blob_esnap_hotplug ...passed 00:06:19.004 Suite: blob_copy_noextent 00:06:19.004 Test: blob_init ...[2024-12-06 21:28:39.296038] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5267:spdk_bs_init: *ERROR*: unsupported dev block length of 500 00:06:19.004 passed 00:06:19.004 Test: blob_thin_provision ...passed 00:06:19.004 Test: blob_read_only ...passed 00:06:19.005 Test: bs_load ...[2024-12-06 21:28:39.327407] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c: 896:blob_parse: *ERROR*: Blobid (0x0) doesn't match what's in metadata (0x100000000) 00:06:19.005 passed 00:06:19.005 Test: bs_load_custom_cluster_size ...passed 00:06:19.005 Test: bs_load_after_failed_grow ...passed 00:06:19.005 Test: bs_cluster_sz ...[2024-12-06 21:28:39.342363] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3603:bs_opts_verify: *ERROR*: Blobstore options cannot be set to 0 00:06:19.005 [2024-12-06 21:28:39.342557] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5398:spdk_bs_init: *ERROR*: Blobstore metadata cannot use more clusters than is available, please decrease number of pages reserved for metadata or increase cluster size. 
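The *_power_failure tests that recur in each of these blob suites (blobstore_clean_power_failure, blob_delete_snapshot_power_failure, blob_create_snapshot_power_failure) share one trick: the blobstore runs against a device that starts failing I/O after a fixed budget, and after each simulated cut the store is reloaded and checked for consistency. A minimal sketch of such a failing device (illustrative, not the unit tests' actual harness):

    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    struct failing_dev {
        uint8_t  mem[1 << 16];   /* in-memory backing store */
        unsigned writes_left;    /* writes allowed before the "power cut" */
    };

    static int dev_write(struct failing_dev *d, uint64_t off,
                         const void *buf, size_t len)
    {
        if (d->writes_left == 0)
            return -5;           /* behave like -EIO once power is "lost" */
        d->writes_left--;
        memcpy(d->mem + off, buf, len);
        return 0;
    }

    int main(void)
    {
        struct failing_dev d = { .writes_left = 2 };
        const char block[512] = { 'x' };
        for (int i = 0; i < 4; i++)
            printf("write %d -> %d\n", i,
                   dev_write(&d, 512u * (unsigned)i, block, sizeof(block)));
        return 0;
    }

Sweeping writes_left upward across iterations is what makes this style of test exhaustive: every possible interruption point is exercised exactly once.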
00:06:19.005 [2024-12-06 21:28:39.342595] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3662:bs_alloc: *ERROR*: Cluster size 4095 is smaller than page size 4096 00:06:19.005 passed 00:06:19.005 Test: bs_resize_md ...passed 00:06:19.005 Test: bs_destroy ...passed 00:06:19.005 Test: bs_type ...passed 00:06:19.005 Test: bs_super_block ...passed 00:06:19.005 Test: bs_test_recover_cluster_count ...passed 00:06:19.005 Test: bs_grow_live ...passed 00:06:19.005 Test: bs_grow_live_no_space ...passed 00:06:19.005 Test: bs_test_grow ...passed 00:06:19.005 Test: blob_serialize_test ...passed 00:06:19.005 Test: super_block_crc ...passed 00:06:19.005 Test: blob_thin_prov_write_count_io ...passed 00:06:19.005 Test: bs_load_iter_test ...passed 00:06:19.005 Test: blob_relations ...[2024-12-06 21:28:39.437682] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:06:19.005 [2024-12-06 21:28:39.437786] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:06:19.005 [2024-12-06 21:28:39.438371] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:06:19.005 [2024-12-06 21:28:39.438406] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:06:19.005 passed 00:06:19.005 Test: blob_relations2 ...[2024-12-06 21:28:39.447766] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:06:19.005 [2024-12-06 21:28:39.447869] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:06:19.005 [2024-12-06 21:28:39.447893] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:06:19.005 [2024-12-06 21:28:39.447904] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:06:19.005 [2024-12-06 21:28:39.448829] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:06:19.005 [2024-12-06 21:28:39.448897] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:06:19.005 [2024-12-06 21:28:39.449174] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:06:19.005 [2024-12-06 21:28:39.449198] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:06:19.005 passed 00:06:19.005 Test: blob_relations3 ...passed 00:06:19.263 Test: blobstore_clean_power_failure ...passed 00:06:19.263 Test: blob_delete_snapshot_power_failure ...[2024-12-06 21:28:39.551677] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 1 read failed for blobid 0x100000001: -5 00:06:19.263 [2024-12-06 21:28:39.559617] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5 00:06:19.263 [2024-12-06 21:28:39.559698] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7421:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:06:19.263 [2024-12-06 21:28:39.559720] 
/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:06:19.263 [2024-12-06 21:28:39.567593] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 1 read failed for blobid 0x100000001: -5 00:06:19.263 [2024-12-06 21:28:39.567658] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1397:blob_load_snapshot_cpl: *ERROR*: Snapshot fail 00:06:19.263 [2024-12-06 21:28:39.567675] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7421:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:06:19.263 [2024-12-06 21:28:39.567693] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:06:19.263 [2024-12-06 21:28:39.575798] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7351:delete_snapshot_sync_snapshot_xattr_cpl: *ERROR*: Failed to sync MD with xattr on blob 00:06:19.263 [2024-12-06 21:28:39.575944] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:06:19.263 [2024-12-06 21:28:39.584057] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7223:delete_snapshot_sync_clone_cpl: *ERROR*: Failed to sync MD on clone 00:06:19.263 [2024-12-06 21:28:39.584177] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:06:19.263 [2024-12-06 21:28:39.592294] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7167:delete_snapshot_sync_snapshot_cpl: *ERROR*: Failed to sync MD on blob 00:06:19.263 [2024-12-06 21:28:39.592374] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:06:19.263 passed 00:06:19.263 Test: blob_create_snapshot_power_failure ...[2024-12-06 21:28:39.615601] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5 00:06:19.263 [2024-12-06 21:28:39.630274] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 1 read failed for blobid 0x100000001: -5 00:06:19.263 [2024-12-06 21:28:39.638158] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6215:bs_clone_snapshot_origblob_cleanup: *ERROR*: Cleanup error -5 00:06:19.263 passed 00:06:19.263 Test: blob_io_unit ...passed 00:06:19.263 Test: blob_io_unit_compatibility ...passed 00:06:19.263 Test: blob_ext_md_pages ...passed 00:06:19.263 Test: blob_esnap_io_4096_4096 ...passed 00:06:19.263 Test: blob_esnap_io_512_512 ...passed 00:06:19.263 Test: blob_esnap_io_4096_512 ...passed 00:06:19.522 Test: blob_esnap_io_512_4096 ...passed 00:06:19.522 Suite: blob_bs_copy_noextent 00:06:19.522 Test: blob_open ...passed 00:06:19.522 Test: blob_create ...[2024-12-06 21:28:39.797492] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6096:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -28, size in clusters/size: 65 (clusters) 00:06:19.522 passed 00:06:19.522 Test: blob_create_loop ...passed 00:06:19.522 Test: blob_create_fail ...[2024-12-06 21:28:39.867510] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6096:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:06:19.522 passed 00:06:19.522 Test: blob_create_internal ...passed 00:06:19.522 Test: blob_create_zero_extent ...passed 00:06:19.522 Test: blob_snapshot ...passed 00:06:19.522 Test: blob_clone ...passed 00:06:19.522 Test: blob_inflate ...[2024-12-06 21:28:39.973331] 
/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6873:bs_inflate_blob_open_cpl: *ERROR*: Cannot decouple parent of blob with no parent. 00:06:19.522 passed 00:06:19.522 Test: blob_delete ...passed 00:06:19.522 Test: blob_resize_test ...[2024-12-06 21:28:40.014549] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6972:bs_resize_unfreeze_cpl: *ERROR*: Unfreeze failed, ctx->rc=-28 00:06:19.781 passed 00:06:19.781 Test: channel_ops ...passed 00:06:19.781 Test: blob_super ...passed 00:06:19.781 Test: blob_rw_verify_iov ...passed 00:06:19.781 Test: blob_unmap ...passed 00:06:19.781 Test: blob_iter ...passed 00:06:19.781 Test: blob_parse_md ...passed 00:06:19.781 Test: bs_load_pending_removal ...passed 00:06:19.781 Test: bs_unload ...[2024-12-06 21:28:40.190615] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5655:spdk_bs_unload: *ERROR*: Blobstore still has open blobs 00:06:19.781 passed 00:06:19.781 Test: bs_usable_clusters ...passed 00:06:19.781 Test: blob_crc ...[2024-12-06 21:28:40.235540] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1609:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000 00:06:19.781 [2024-12-06 21:28:40.235650] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1609:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000 00:06:19.781 passed 00:06:19.781 Test: blob_flags ...passed 00:06:20.039 Test: bs_version ...passed 00:06:20.039 Test: blob_set_xattrs_test ...[2024-12-06 21:28:40.302691] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6096:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:06:20.039 [2024-12-06 21:28:40.302790] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6096:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:06:20.039 passed 00:06:20.039 Test: blob_thin_prov_alloc ...passed 00:06:20.039 Test: blob_insert_cluster_msg_test ...passed 00:06:20.039 Test: blob_thin_prov_rw ...passed 00:06:20.039 Test: blob_thin_prov_rle ...passed 00:06:20.039 Test: blob_thin_prov_rw_iov ...passed 00:06:20.299 Test: blob_snapshot_rw ...passed 00:06:20.299 Test: blob_snapshot_rw_iov ...passed 00:06:20.299 Test: blob_inflate_rw ...passed 00:06:20.299 Test: blob_snapshot_freeze_io ...passed 00:06:20.557 Test: blob_operation_split_rw ...passed 00:06:20.557 Test: blob_operation_split_rw_iov ...passed 00:06:20.557 Test: blob_simultaneous_operations ...[2024-12-06 21:28:41.041053] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7534:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:06:20.557 [2024-12-06 21:28:41.041155] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:06:20.557 [2024-12-06 21:28:41.041580] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7534:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:06:20.557 [2024-12-06 21:28:41.041604] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:06:20.557 [2024-12-06 21:28:41.043767] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7534:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:06:20.557 [2024-12-06 21:28:41.043808] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:06:20.557 [2024-12-06 21:28:41.043889] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7534:bs_is_blob_deletable: *ERROR*: Cannot 
remove snapshot because it is open 00:06:20.558 [2024-12-06 21:28:41.043906] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:06:20.816 passed 00:06:20.816 Test: blob_persist_test ...passed 00:06:20.816 Test: blob_decouple_snapshot ...passed 00:06:20.816 Test: blob_seek_io_unit ...passed 00:06:20.816 Test: blob_nested_freezes ...passed 00:06:20.816 Suite: blob_blob_copy_noextent 00:06:20.816 Test: blob_write ...passed 00:06:20.816 Test: blob_read ...passed 00:06:20.816 Test: blob_rw_verify ...passed 00:06:20.816 Test: blob_rw_verify_iov_nomem ...passed 00:06:20.816 Test: blob_rw_iov_read_only ...passed 00:06:20.816 Test: blob_xattr ...passed 00:06:20.816 Test: blob_dirty_shutdown ...passed 00:06:21.075 Test: blob_is_degraded ...passed 00:06:21.075 Suite: blob_esnap_bs_copy_noextent 00:06:21.075 Test: blob_esnap_create ...passed 00:06:21.075 Test: blob_esnap_thread_add_remove ...passed 00:06:21.075 Test: blob_esnap_clone_snapshot ...passed 00:06:21.075 Test: blob_esnap_clone_inflate ...passed 00:06:21.075 Test: blob_esnap_clone_decouple ...passed 00:06:21.075 Test: blob_esnap_clone_reload ...passed 00:06:21.075 Test: blob_esnap_hotplug ...passed 00:06:21.075 Suite: blob_copy_extent 00:06:21.075 Test: blob_init ...[2024-12-06 21:28:41.479057] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5267:spdk_bs_init: *ERROR*: unsupported dev block length of 500 00:06:21.075 passed 00:06:21.075 Test: blob_thin_provision ...passed 00:06:21.075 Test: blob_read_only ...passed 00:06:21.075 Test: bs_load ...[2024-12-06 21:28:41.507771] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c: 896:blob_parse: *ERROR*: Blobid (0x0) doesn't match what's in metadata (0x100000000) 00:06:21.075 passed 00:06:21.075 Test: bs_load_custom_cluster_size ...passed 00:06:21.075 Test: bs_load_after_failed_grow ...passed 00:06:21.075 Test: bs_cluster_sz ...[2024-12-06 21:28:41.523841] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3603:bs_opts_verify: *ERROR*: Blobstore options cannot be set to 0 00:06:21.075 [2024-12-06 21:28:41.524019] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5398:spdk_bs_init: *ERROR*: Blobstore metadata cannot use more clusters than is available, please decrease number of pages reserved for metadata or increase cluster size. 
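The "Metadata page 0 crc mismatch" messages emitted by the blob_crc tests above reflect a stored checksum that is computed with the CRC field itself zeroed, so a load can verify by recomputing over the same layout. A self-contained sketch using a plain bitwise CRC-32 (the on-disk layout here is invented for illustration):

    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    static uint32_t crc32_sw(const void *buf, size_t len)
    {
        const uint8_t *p = buf;
        uint32_t crc = 0xffffffffu;
        while (len--) {
            crc ^= *p++;
            for (int k = 0; k < 8; k++)
                crc = (crc >> 1) ^ (0xedb88320u & (0u - (crc & 1u)));
        }
        return ~crc;
    }

    struct md_page {
        uint8_t  payload[4088];
        uint32_t reserved;
        uint32_t crc;          /* covers the page with this field zeroed */
    };

    static uint32_t md_page_crc(struct md_page *pg)
    {
        uint32_t saved = pg->crc, crc;
        pg->crc = 0;                     /* CRC field excluded from itself */
        crc = crc32_sw(pg, sizeof(*pg));
        pg->crc = saved;
        return crc;
    }

    int main(void)
    {
        struct md_page pg;
        memset(&pg, 0, sizeof(pg));
        pg.crc = md_page_crc(&pg);       /* stored at write time */
        pg.payload[0] ^= 1;              /* single-bit corruption */
        printf("crc ok: %d\n", md_page_crc(&pg) == pg.crc);   /* prints 0 */
        return 0;
    }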
00:06:21.075 [2024-12-06 21:28:41.524058] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3662:bs_alloc: *ERROR*: Cluster size 4095 is smaller than page size 4096 00:06:21.075 passed 00:06:21.075 Test: bs_resize_md ...passed 00:06:21.075 Test: bs_destroy ...passed 00:06:21.075 Test: bs_type ...passed 00:06:21.075 Test: bs_super_block ...passed 00:06:21.334 Test: bs_test_recover_cluster_count ...passed 00:06:21.334 Test: bs_grow_live ...passed 00:06:21.334 Test: bs_grow_live_no_space ...passed 00:06:21.334 Test: bs_test_grow ...passed 00:06:21.334 Test: blob_serialize_test ...passed 00:06:21.334 Test: super_block_crc ...passed 00:06:21.334 Test: blob_thin_prov_write_count_io ...passed 00:06:21.334 Test: bs_load_iter_test ...passed 00:06:21.334 Test: blob_relations ...[2024-12-06 21:28:41.627826] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:06:21.334 [2024-12-06 21:28:41.628146] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:06:21.334 [2024-12-06 21:28:41.629135] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:06:21.335 [2024-12-06 21:28:41.629299] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:06:21.335 passed 00:06:21.335 Test: blob_relations2 ...[2024-12-06 21:28:41.639095] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:06:21.335 [2024-12-06 21:28:41.639161] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:06:21.335 [2024-12-06 21:28:41.639201] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:06:21.335 [2024-12-06 21:28:41.639213] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:06:21.335 [2024-12-06 21:28:41.640635] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:06:21.335 [2024-12-06 21:28:41.640682] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:06:21.335 [2024-12-06 21:28:41.641035] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:06:21.335 [2024-12-06 21:28:41.641084] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:06:21.335 passed 00:06:21.335 Test: blob_relations3 ...passed 00:06:21.335 Test: blobstore_clean_power_failure ...passed 00:06:21.335 Test: blob_delete_snapshot_power_failure ...[2024-12-06 21:28:41.739894] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 2 read failed for blobid 0x100000002: -5 00:06:21.335 [2024-12-06 21:28:41.750824] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1510:blob_load_cpl_extents_cpl: *ERROR*: Extent page read failed: -5 00:06:21.335 [2024-12-06 21:28:41.758937] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5 00:06:21.335 [2024-12-06 21:28:41.759018] 
/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7421:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:06:21.335 [2024-12-06 21:28:41.759039] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:06:21.335 [2024-12-06 21:28:41.767035] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 2 read failed for blobid 0x100000002: -5 00:06:21.335 [2024-12-06 21:28:41.767118] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1397:blob_load_snapshot_cpl: *ERROR*: Snapshot fail 00:06:21.335 [2024-12-06 21:28:41.767137] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7421:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:06:21.335 [2024-12-06 21:28:41.767156] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:06:21.335 [2024-12-06 21:28:41.775111] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1510:blob_load_cpl_extents_cpl: *ERROR*: Extent page read failed: -5 00:06:21.335 [2024-12-06 21:28:41.775195] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1397:blob_load_snapshot_cpl: *ERROR*: Snapshot fail 00:06:21.335 [2024-12-06 21:28:41.775213] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7421:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:06:21.335 [2024-12-06 21:28:41.775232] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:06:21.335 [2024-12-06 21:28:41.783365] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7351:delete_snapshot_sync_snapshot_xattr_cpl: *ERROR*: Failed to sync MD with xattr on blob 00:06:21.335 [2024-12-06 21:28:41.783496] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:06:21.335 [2024-12-06 21:28:41.791547] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7223:delete_snapshot_sync_clone_cpl: *ERROR*: Failed to sync MD on clone 00:06:21.335 [2024-12-06 21:28:41.791838] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:06:21.335 [2024-12-06 21:28:41.799939] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7167:delete_snapshot_sync_snapshot_cpl: *ERROR*: Failed to sync MD on blob 00:06:21.335 [2024-12-06 21:28:41.800033] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:06:21.335 passed 00:06:21.335 Test: blob_create_snapshot_power_failure ...[2024-12-06 21:28:41.823573] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5 00:06:21.594 [2024-12-06 21:28:41.831919] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1510:blob_load_cpl_extents_cpl: *ERROR*: Extent page read failed: -5 00:06:21.594 [2024-12-06 21:28:41.847649] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 2 read failed for blobid 0x100000002: -5 00:06:21.594 [2024-12-06 21:28:41.855737] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6215:bs_clone_snapshot_origblob_cleanup: *ERROR*: Cleanup error -5 00:06:21.594 passed 00:06:21.594 Test: blob_io_unit ...passed 00:06:21.594 Test: blob_io_unit_compatibility ...passed 00:06:21.594 Test: blob_ext_md_pages ...passed 00:06:21.594 Test: blob_esnap_io_4096_4096 ...passed 00:06:21.594 Test: blob_esnap_io_512_512 ...passed 00:06:21.594 Test: blob_esnap_io_4096_512 ...passed 00:06:21.594 Test: 
blob_esnap_io_512_4096 ...passed 00:06:21.594 Suite: blob_bs_copy_extent 00:06:21.594 Test: blob_open ...passed 00:06:21.594 Test: blob_create ...[2024-12-06 21:28:42.017755] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6096:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -28, size in clusters/size: 65 (clusters) 00:06:21.594 passed 00:06:21.594 Test: blob_create_loop ...passed 00:06:21.853 Test: blob_create_fail ...[2024-12-06 21:28:42.091100] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6096:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:06:21.853 passed 00:06:21.853 Test: blob_create_internal ...passed 00:06:21.853 Test: blob_create_zero_extent ...passed 00:06:21.853 Test: blob_snapshot ...passed 00:06:21.853 Test: blob_clone ...passed 00:06:21.853 Test: blob_inflate ...[2024-12-06 21:28:42.205102] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6873:bs_inflate_blob_open_cpl: *ERROR*: Cannot decouple parent of blob with no parent. 00:06:21.853 passed 00:06:21.853 Test: blob_delete ...passed 00:06:21.853 Test: blob_resize_test ...[2024-12-06 21:28:42.243094] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6972:bs_resize_unfreeze_cpl: *ERROR*: Unfreeze failed, ctx->rc=-28 00:06:21.853 passed 00:06:21.853 Test: channel_ops ...passed 00:06:21.853 Test: blob_super ...passed 00:06:21.853 Test: blob_rw_verify_iov ...passed 00:06:21.853 Test: blob_unmap ...passed 00:06:22.112 Test: blob_iter ...passed 00:06:22.112 Test: blob_parse_md ...passed 00:06:22.112 Test: bs_load_pending_removal ...passed 00:06:22.112 Test: bs_unload ...[2024-12-06 21:28:42.423082] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5655:spdk_bs_unload: *ERROR*: Blobstore still has open blobs 00:06:22.112 passed 00:06:22.112 Test: bs_usable_clusters ...passed 00:06:22.112 Test: blob_crc ...[2024-12-06 21:28:42.468939] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1609:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000 00:06:22.112 [2024-12-06 21:28:42.469056] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1609:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000 00:06:22.112 passed 00:06:22.112 Test: blob_flags ...passed 00:06:22.112 Test: bs_version ...passed 00:06:22.112 Test: blob_set_xattrs_test ...[2024-12-06 21:28:42.532706] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6096:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:06:22.112 [2024-12-06 21:28:42.533049] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6096:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:06:22.112 passed 00:06:22.372 Test: blob_thin_prov_alloc ...passed 00:06:22.372 Test: blob_insert_cluster_msg_test ...passed 00:06:22.372 Test: blob_thin_prov_rw ...passed 00:06:22.372 Test: blob_thin_prov_rle ...passed 00:06:22.372 Test: blob_thin_prov_rw_iov ...passed 00:06:22.372 Test: blob_snapshot_rw ...passed 00:06:22.372 Test: blob_snapshot_rw_iov ...passed 00:06:22.630 Test: blob_inflate_rw ...passed 00:06:22.630 Test: blob_snapshot_freeze_io ...passed 00:06:22.630 Test: blob_operation_split_rw ...passed 00:06:22.910 Test: blob_operation_split_rw_iov ...passed 00:06:22.910 Test: blob_simultaneous_operations ...[2024-12-06 21:28:43.237954] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7534:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:06:22.910 [2024-12-06 
21:28:43.238266] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:06:22.910 [2024-12-06 21:28:43.238778] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7534:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:06:22.910 [2024-12-06 21:28:43.238924] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:06:22.910 [2024-12-06 21:28:43.241202] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7534:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:06:22.910 [2024-12-06 21:28:43.241399] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:06:22.910 [2024-12-06 21:28:43.241666] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7534:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:06:22.910 [2024-12-06 21:28:43.241894] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:06:22.910 passed 00:06:22.910 Test: blob_persist_test ...passed 00:06:22.910 Test: blob_decouple_snapshot ...passed 00:06:22.910 Test: blob_seek_io_unit ...passed 00:06:22.910 Test: blob_nested_freezes ...passed 00:06:22.910 Suite: blob_blob_copy_extent 00:06:22.910 Test: blob_write ...passed 00:06:22.910 Test: blob_read ...passed 00:06:23.174 Test: blob_rw_verify ...passed 00:06:23.174 Test: blob_rw_verify_iov_nomem ...passed 00:06:23.174 Test: blob_rw_iov_read_only ...passed 00:06:23.174 Test: blob_xattr ...passed 00:06:23.174 Test: blob_dirty_shutdown ...passed 00:06:23.174 Test: blob_is_degraded ...passed 00:06:23.174 Suite: blob_esnap_bs_copy_extent 00:06:23.174 Test: blob_esnap_create ...passed 00:06:23.174 Test: blob_esnap_thread_add_remove ...passed 00:06:23.174 Test: blob_esnap_clone_snapshot ...passed 00:06:23.174 Test: blob_esnap_clone_inflate ...passed 00:06:23.174 Test: blob_esnap_clone_decouple ...passed 00:06:23.433 Test: blob_esnap_clone_reload ...passed 00:06:23.433 Test: blob_esnap_hotplug ...passed 00:06:23.433 00:06:23.433 Run Summary: Type Total Ran Passed Failed Inactive 00:06:23.433 suites 16 16 n/a 0 0 00:06:23.433 tests 348 348 348 0 0 00:06:23.433 asserts 92605 92605 92605 0 n/a 00:06:23.433 00:06:23.433 Elapsed time = 9.007 seconds 00:06:23.433 21:28:43 -- unit/unittest.sh@41 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/blob/blob_bdev.c/blob_bdev_ut 00:06:23.433 00:06:23.433 00:06:23.433 CUnit - A unit testing framework for C - Version 2.1-3 00:06:23.433 http://cunit.sourceforge.net/ 00:06:23.433 00:06:23.433 00:06:23.433 Suite: blob_bdev 00:06:23.433 Test: create_bs_dev ...passed 00:06:23.433 Test: create_bs_dev_ro ...passed 00:06:23.433 Test: create_bs_dev_rw ...passed 00:06:23.433 Test: claim_bs_dev ...passed 00:06:23.433 Test: claim_bs_dev_ro ...passed 00:06:23.433 Test: deferred_destroy_refs ...passed 00:06:23.433 Test: deferred_destroy_channels ...passed 00:06:23.433 Test: deferred_destroy_threads ...passed 00:06:23.433 00:06:23.433 [2024-12-06 21:28:43.823784] /home/vagrant/spdk_repo/spdk/module/blob/bdev/blob_bdev.c: 507:spdk_bdev_create_bs_dev: *ERROR*: bdev name 'nope': unsupported options 00:06:23.433 [2024-12-06 21:28:43.824202] /home/vagrant/spdk_repo/spdk/module/blob/bdev/blob_bdev.c: 340:spdk_bs_bdev_claim: *ERROR*: could not claim bs dev 00:06:23.433 Run Summary: Type Total Ran Passed Failed Inactive 00:06:23.433 suites 1 1 n/a 0 0 00:06:23.433 tests 8 8 8 0 0 00:06:23.433 
asserts 119 119 119 0 n/a 00:06:23.433 00:06:23.433 Elapsed time = 0.001 seconds 00:06:23.433 21:28:43 -- unit/unittest.sh@42 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/blobfs/tree.c/tree_ut 00:06:23.433 00:06:23.433 00:06:23.433 CUnit - A unit testing framework for C - Version 2.1-3 00:06:23.433 http://cunit.sourceforge.net/ 00:06:23.433 00:06:23.433 00:06:23.433 Suite: tree 00:06:23.433 Test: blobfs_tree_op_test ...passed 00:06:23.433 00:06:23.433 Run Summary: Type Total Ran Passed Failed Inactive 00:06:23.433 suites 1 1 n/a 0 0 00:06:23.433 tests 1 1 1 0 0 00:06:23.433 asserts 27 27 27 0 n/a 00:06:23.433 00:06:23.433 Elapsed time = 0.000 seconds 00:06:23.433 21:28:43 -- unit/unittest.sh@43 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/blobfs/blobfs_async_ut/blobfs_async_ut 00:06:23.433 00:06:23.433 00:06:23.433 CUnit - A unit testing framework for C - Version 2.1-3 00:06:23.433 http://cunit.sourceforge.net/ 00:06:23.433 00:06:23.433 00:06:23.433 Suite: blobfs_async_ut 00:06:23.691 Test: fs_init ...passed 00:06:23.691 Test: fs_open ...passed 00:06:23.691 Test: fs_create ...passed 00:06:23.691 Test: fs_truncate ...passed 00:06:23.691 Test: fs_rename ...[2024-12-06 21:28:43.996904] /home/vagrant/spdk_repo/spdk/lib/blobfs/blobfs.c:1476:spdk_fs_delete_file_async: *ERROR*: Cannot find the file=file1 to deleted 00:06:23.691 passed 00:06:23.691 Test: fs_rw_async ...passed 00:06:23.691 Test: fs_writev_readv_async ...passed 00:06:23.691 Test: tree_find_buffer_ut ...passed 00:06:23.691 Test: channel_ops ...passed 00:06:23.691 Test: channel_ops_sync ...passed 00:06:23.691 00:06:23.691 Run Summary: Type Total Ran Passed Failed Inactive 00:06:23.691 suites 1 1 n/a 0 0 00:06:23.691 tests 10 10 10 0 0 00:06:23.692 asserts 292 292 292 0 n/a 00:06:23.692 00:06:23.692 Elapsed time = 0.143 seconds 00:06:23.692 21:28:44 -- unit/unittest.sh@45 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/blobfs/blobfs_sync_ut/blobfs_sync_ut 00:06:23.692 00:06:23.692 00:06:23.692 CUnit - A unit testing framework for C - Version 2.1-3 00:06:23.692 http://cunit.sourceforge.net/ 00:06:23.692 00:06:23.692 00:06:23.692 Suite: blobfs_sync_ut 00:06:23.692 Test: cache_read_after_write ...[2024-12-06 21:28:44.146364] /home/vagrant/spdk_repo/spdk/lib/blobfs/blobfs.c:1476:spdk_fs_delete_file_async: *ERROR*: Cannot find the file=testfile to deleted 00:06:23.692 passed 00:06:23.692 Test: file_length ...passed 00:06:23.692 Test: append_write_to_extend_blob ...passed 00:06:23.692 Test: partial_buffer ...passed 00:06:23.951 Test: cache_write_null_buffer ...passed 00:06:23.951 Test: fs_create_sync ...passed 00:06:23.951 Test: fs_rename_sync ...passed 00:06:23.951 Test: cache_append_no_cache ...passed 00:06:23.951 Test: fs_delete_file_without_close ...passed 00:06:23.951 00:06:23.951 Run Summary: Type Total Ran Passed Failed Inactive 00:06:23.951 suites 1 1 n/a 0 0 00:06:23.951 tests 9 9 9 0 0 00:06:23.951 asserts 345 345 345 0 n/a 00:06:23.951 00:06:23.951 Elapsed time = 0.272 seconds 00:06:23.951 21:28:44 -- unit/unittest.sh@46 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/blobfs/blobfs_bdev.c/blobfs_bdev_ut 00:06:23.951 00:06:23.951 00:06:23.951 CUnit - A unit testing framework for C - Version 2.1-3 00:06:23.951 http://cunit.sourceforge.net/ 00:06:23.951 00:06:23.951 00:06:23.951 Suite: blobfs_bdev_ut 00:06:23.951 Test: spdk_blobfs_bdev_detect_test ...[2024-12-06 21:28:44.289410] /home/vagrant/spdk_repo/spdk/module/blobfs/bdev/blobfs_bdev.c: 59:_blobfs_bdev_unload_cb: *ERROR*: Failed to unload blobfs on bdev ut_bdev: errno -1 
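All of these suites are plain CUnit runs: each *_ut binary registers a suite, adds test functions, and the framework prints the Suite/Test lines and the Run Summary tables seen throughout this log, while the *ERROR* records in between are the code under test logging expected negative-path failures to stderr. A minimal standalone harness of the same shape, with placeholder suite and test names (the CU_* calls are CUnit's real API; nothing here is SPDK-specific):

    /* Minimal CUnit harness of the shape used by these unit test binaries. */
    #include <CUnit/Basic.h>

    static void example_test(void)
    {
        CU_ASSERT_EQUAL(1 + 1, 2); /* placeholder assertion */
    }

    int main(void)
    {
        if (CU_initialize_registry() != CUE_SUCCESS)
            return CU_get_error();

        CU_pSuite suite = CU_add_suite("example_suite", NULL, NULL);
        if (suite == NULL || CU_add_test(suite, "example_test", example_test) == NULL) {
            CU_cleanup_registry();
            return CU_get_error();
        }

        CU_basic_set_mode(CU_BRM_VERBOSE);
        CU_basic_run_tests(); /* emits the "Run Summary: Type Total Ran ..." table */
        CU_cleanup_registry();
        return CU_get_error();
    }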
00:06:23.951 passed 00:06:23.951 Test: spdk_blobfs_bdev_create_test ...passed 00:06:23.951 Test: spdk_blobfs_bdev_mount_test ...passed 00:06:23.951 00:06:23.951 [2024-12-06 21:28:44.289761] /home/vagrant/spdk_repo/spdk/module/blobfs/bdev/blobfs_bdev.c: 59:_blobfs_bdev_unload_cb: *ERROR*: Failed to unload blobfs on bdev ut_bdev: errno -1 00:06:23.951 Run Summary: Type Total Ran Passed Failed Inactive 00:06:23.951 suites 1 1 n/a 0 0 00:06:23.951 tests 3 3 3 0 0 00:06:23.951 asserts 9 9 9 0 n/a 00:06:23.951 00:06:23.951 Elapsed time = 0.001 seconds 00:06:23.951 ************************************ 00:06:23.951 END TEST unittest_blob_blobfs 00:06:23.951 ************************************ 00:06:23.951 00:06:23.951 real 0m9.638s 00:06:23.951 user 0m9.118s 00:06:23.951 sys 0m0.647s 00:06:23.951 21:28:44 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:23.951 21:28:44 -- common/autotest_common.sh@10 -- # set +x 00:06:23.951 21:28:44 -- unit/unittest.sh@208 -- # run_test unittest_event unittest_event 00:06:23.951 21:28:44 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:23.951 21:28:44 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:23.951 21:28:44 -- common/autotest_common.sh@10 -- # set +x 00:06:23.951 ************************************ 00:06:23.951 START TEST unittest_event 00:06:23.951 ************************************ 00:06:23.951 21:28:44 -- common/autotest_common.sh@1114 -- # unittest_event 00:06:23.951 21:28:44 -- unit/unittest.sh@50 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/event/app.c/app_ut 00:06:23.951 00:06:23.951 00:06:23.951 CUnit - A unit testing framework for C - Version 2.1-3 00:06:23.951 http://cunit.sourceforge.net/ 00:06:23.951 00:06:23.951 00:06:23.951 Suite: app_suite 00:06:23.951 Test: test_spdk_app_parse_args ...app_ut [options] 00:06:23.951 options: 00:06:23.951 -c, --config JSON config file (default none) 00:06:23.951 --json JSON config file (default none) 00:06:23.951 --json-ignore-init-errors 00:06:23.951 don't exit on invalid config entry 00:06:23.951 -d, --limit-coredump do not set max coredump size to RLIM_INFINITY 00:06:23.951 -g, --single-file-segments 00:06:23.951 force creating just one hugetlbfs file 00:06:23.951 -h, --help show this usage 00:06:23.951 -i, --shm-id shared memory ID (optional) 00:06:23.951 app_ut: invalid option -- 'z' 00:06:23.951 -m, --cpumask core mask (like 0xF) or core list of '[]' embraced (like [0,1,10]) for DPDK 00:06:23.951 --lcores lcore to CPU mapping list. The list is in the format: 00:06:23.951 [<,lcores[@CPUs]>...] 00:06:23.951 lcores and cpus list are grouped by '(' and ')', e.g '--lcores "(5-7)@(10-12)"' 00:06:23.951 Within the group, '-' is used for range separator, 00:06:23.951 ',' is used for single number separator. 00:06:23.951 '( )' can be omitted for single element group, 00:06:23.951 '@' can be omitted if cpus and lcores have the same value 00:06:23.951 -n, --mem-channels channel number of memory channels used for DPDK 00:06:23.951 -p, --main-core main (primary) core for DPDK 00:06:23.951 -r, --rpc-socket RPC listen address (default /var/tmp/spdk.sock) 00:06:23.951 -s, --mem-size memory size in MB for DPDK (default: 0MB) 00:06:23.951 --disable-cpumask-locks Disable CPU core lock files. 
00:06:23.951 --silence-noticelog disable notice level logging to stderr 00:06:23.951 --msg-mempool-size global message memory pool size in count (default: 262143) 00:06:23.951 -u, --no-pci disable PCI access 00:06:23.951 --wait-for-rpc wait for RPCs to initialize subsystems 00:06:23.951 --max-delay maximum reactor delay (in microseconds) 00:06:23.951 -B, --pci-blocked pci addr to block (can be used more than once) 00:06:23.951 -A, --pci-allowed pci addr to allow (-B and -A cannot be used at the same time) 00:06:23.951 -R, --huge-unlink unlink huge files after initialization 00:06:23.951 -v, --version print SPDK version 00:06:23.951 --huge-dir use a specific hugetlbfs mount to reserve memory from 00:06:23.951 --iova-mode set IOVA mode ('pa' for IOVA_PA and 'va' for IOVA_VA) 00:06:23.951 --base-virtaddr the base virtual address for DPDK (default: 0x200000000000) 00:06:23.951 --num-trace-entries number of trace entries for each core, must be power of 2, setting 0 to disable trace (default 32768) 00:06:23.951 Tracepoints vary in size and can use more than one trace entry. 00:06:23.951 --rpcs-allowed comma-separated list of permitted RPCS 00:06:23.951 --env-context Opaque context for use of the env implementation 00:06:23.951 --vfio-vf-token VF token (UUID) shared between SR-IOV PF and VFs for vfio_pci driver 00:06:23.951 --no-huge run without using hugepages 00:06:23.951 -L, --logflag enable log flag (all, json_util, log, rpc, thread, trace) 00:06:23.951 -e, --tpoint-group [:] 00:06:23.951 group_name - tracepoint group name for spdk trace buffers (thread, all) 00:06:23.951 tpoint_mask - tracepoint mask for enabling individual tpoints inside a tracepoint group. First tpoint inside a group can be enabled by setting tpoint_mask to 1 (e.g. bdev:0x1). 00:06:23.952 Groups and masks can be combined (e.g. thread,bdev:0x1). 00:06:23.952 All available tpoints can be found in /include/spdk_internal/trace_defs.h 00:06:23.952 --interrupt-mode set app to interrupt mode (Warning: CPU usage will be reduced only if all pollers in the app support interrupt mode) 00:06:23.952 app_ut [options] 00:06:23.952 options: 00:06:23.952 -c, --config JSON config file (default none) 00:06:23.952 --json JSON config file (default none) 00:06:23.952 --json-ignore-init-errors 00:06:23.952 don't exit on invalid config entry 00:06:23.952 -d, --limit-coredump do not set max coredump size to RLIM_INFINITY 00:06:23.952 -g, --single-file-segments 00:06:23.952 force creating just one hugetlbfs file 00:06:23.952 -h, --help show this usage 00:06:23.952 -i, --shm-id shared memory ID (optional) 00:06:23.952 -m, --cpumask core mask (like 0xF) or core list of '[]' embraced (like [0,1,10]) for DPDK 00:06:23.952 --lcores lcore to CPU mapping list. The list is in the format: 00:06:23.952 [<,lcores[@CPUs]>...] 00:06:23.952 lcores and cpus list are grouped by '(' and ')', e.g '--lcores "(5-7)@(10-12)"' 00:06:23.952 Within the group, '-' is used for range separator, 00:06:23.952 ',' is used for single number separator. 00:06:23.952 '( )' can be omitted for single element group, 00:06:23.952 '@' can be omitted if cpus and lcores have the same value 00:06:23.952 -n, --mem-channels channel number of memory channels used for DPDK 00:06:23.952 -p, --main-core main (primary) core for DPDK 00:06:23.952 -r, --rpc-socket RPC listen address (default /var/tmp/spdk.sock) 00:06:23.952 -s, --mem-size memory size in MB for DPDK (default: 0MB) 00:06:23.952 --disable-cpumask-locks Disable CPU core lock files. 
00:06:23.952 --silence-noticelog disable notice level logging to stderr 00:06:23.952 --msg-mempool-size global message memory pool size in count (default: 262143) 00:06:23.952 -u, --no-pci disable PCI access 00:06:23.952 --wait-for-rpc wait for RPCs to initialize subsystems 00:06:23.952 --max-delay maximum reactor delay (in microseconds) 00:06:23.952 -B, --pci-blocked pci addr to block (can be used more than once) 00:06:23.952 -A, --pci-allowed pci addr to allow (-B and -A cannot be used at the same time) 00:06:23.952 -R, --huge-unlink unlink huge files after initialization 00:06:23.952 -v, --version print SPDK version 00:06:23.952 --huge-dir use a specific hugetlbfs mount to reserve memory from 00:06:23.952 --iova-mode set IOVA mode ('pa' for IOVA_PA and 'va' for IOVA_VA) 00:06:23.952 --base-virtaddr the base virtual address for DPDK (default: 0x200000000000) 00:06:23.952 --num-trace-entries number of trace entries for each core, must be power of 2, setting 0 to disable trace (default 32768) 00:06:23.952 Tracepoints vary in size and can use more than one trace entry. 00:06:23.952 --rpcs-allowed comma-separated list of permitted RPCS 00:06:23.952 --env-context Opaque context for use of the env implementation 00:06:23.952 --vfio-vf-token VF token (UUID) shared between SR-IOV PF and VFs for vfio_pci driver 00:06:23.952 --no-huge run without using hugepages 00:06:23.952 -L, --logflag enable log flag (all, json_util, log, rpc, thread, trace) 00:06:23.952 -e, --tpoint-group [:] 00:06:23.952 group_name - tracepoint group name for spdk trace buffers (thread, all) 00:06:23.952 tpoint_mask - tracepoint mask for enabling individual tpoints inside a tracepoint group. First tpoint inside a group can be enabled by setting tpoint_mask to 1 (e.g. bdev:0x1). 00:06:23.952 Groups and masks can be combined (e.g. thread,bdev:0x1). 00:06:23.952 All available tpoints can be found in /include/spdk_internal/trace_defs.h 00:06:23.952 --interrupt-mode set app to interrupt mode (Warning: CPU usage will be reduced only if all pollers in the app support interrupt mode) 00:06:23.952 app_ut: unrecognized option '--test-long-opt' 00:06:23.952 [2024-12-06 21:28:44.373509] /home/vagrant/spdk_repo/spdk/lib/event/app.c:1030:spdk_app_parse_args: *ERROR*: Duplicated option 'c' between app-specific command line parameter and generic spdk opts. 00:06:23.952 app_ut [options] 00:06:23.952 options: 00:06:23.952 -c, --config JSON config file (default none) 00:06:23.952 --json JSON config file (default none) 00:06:23.952 --json-ignore-init-errors 00:06:23.952 don't exit on invalid config entry[2024-12-06 21:28:44.373763] /home/vagrant/spdk_repo/spdk/lib/event/app.c:1211:spdk_app_parse_args: *ERROR*: -B and -W cannot be used at the same time 00:06:23.952 00:06:23.952 -d, --limit-coredump do not set max coredump size to RLIM_INFINITY 00:06:23.952 -g, --single-file-segments 00:06:23.952 force creating just one hugetlbfs file 00:06:23.952 -h, --help show this usage 00:06:23.952 -i, --shm-id shared memory ID (optional) 00:06:23.952 -m, --cpumask core mask (like 0xF) or core list of '[]' embraced (like [0,1,10]) for DPDK 00:06:23.952 --lcores lcore to CPU mapping list. The list is in the format: 00:06:23.952 [<,lcores[@CPUs]>...] 00:06:23.952 lcores and cpus list are grouped by '(' and ')', e.g '--lcores "(5-7)@(10-12)"' 00:06:23.952 Within the group, '-' is used for range separator, 00:06:23.952 ',' is used for single number separator. 
00:06:23.952 '( )' can be omitted for single element group, 00:06:23.952 '@' can be omitted if cpus and lcores have the same value 00:06:23.952 -n, --mem-channels channel number of memory channels used for DPDK 00:06:23.952 -p, --main-core main (primary) core for DPDK 00:06:23.952 -r, --rpc-socket RPC listen address (default /var/tmp/spdk.sock) 00:06:23.952 -s, --mem-size memory size in MB for DPDK (default: 0MB) 00:06:23.952 --disable-cpumask-locks Disable CPU core lock files. 00:06:23.952 --silence-noticelog disable notice level logging to stderr 00:06:23.952 --msg-mempool-size global message memory pool size in count (default: 262143) 00:06:23.952 -u, --no-pci disable PCI access 00:06:23.952 --wait-for-rpc wait for RPCs to initialize subsystems 00:06:23.952 --max-delay maximum reactor delay (in microseconds) 00:06:23.952 -B, --pci-blocked pci addr to block (can be used more than once) 00:06:23.952 -A, --pci-allowed pci addr to allow (-B and -A cannot be used at the same time) 00:06:23.952 -R, --huge-unlink unlink huge files after initialization 00:06:23.952 -v, --version print SPDK version 00:06:23.952 --huge-dir use a specific hugetlbfs mount to reserve memory from 00:06:23.952 --iova-mode set IOVA mode ('pa' for IOVA_PA and 'va' for IOVA_VA) 00:06:23.952 --base-virtaddr the base virtual address for DPDK (default: 0x200000000000) 00:06:23.952 --num-trace-entries number of trace entries for each core, must be power of 2, setting 0 to disable trace (default 32768) 00:06:23.952 Tracepoints vary in size and can use more than one trace entry. 00:06:23.952 --rpcs-allowed comma-separated list of permitted RPCS 00:06:23.952 --env-context Opaque context for use of the env implementation 00:06:23.952 --vfio-vf-token VF token (UUID) shared between SR-IOV PF and VFs for vfio_pci driver 00:06:23.952 --no-huge run without using hugepages 00:06:23.952 -L, --logflag enable log flag (all, json_util, log, rpc, thread, trace) 00:06:23.952 -e, --tpoint-group [:] 00:06:23.952 group_name - tracepoint group name for spdk trace buffers (thread, all) 00:06:23.952 tpoint_mask - tracepoint mask for enabling individual tpoints inside a tracepoint group. First tpoint inside a group can be enabled by setting tpoint_mask to 1 (e.g. bdev:0x1). 00:06:23.952 Groups and masks can be combined (e.g. thread,bdev:0x1). 
00:06:23.952 All available tpoints can be found in /include/spdk_internal/trace_defs.h 00:06:23.952 --interrupt-mode set app to interrupt mode (Warning: CPU usage will be reduced only if all pollers in the app support interrupt mode) 00:06:23.952 passed 00:06:23.952 00:06:23.952 [2024-12-06 21:28:44.373953] /home/vagrant/spdk_repo/spdk/lib/event/app.c:1116:spdk_app_parse_args: *ERROR*: Invalid main core --single-file-segments 00:06:23.952 Run Summary: Type Total Ran Passed Failed Inactive 00:06:23.952 suites 1 1 n/a 0 0 00:06:23.952 tests 1 1 1 0 0 00:06:23.952 asserts 8 8 8 0 n/a 00:06:23.952 00:06:23.952 Elapsed time = 0.001 seconds 00:06:23.952 21:28:44 -- unit/unittest.sh@51 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/event/reactor.c/reactor_ut 00:06:23.952 00:06:23.952 00:06:23.952 CUnit - A unit testing framework for C - Version 2.1-3 00:06:23.952 http://cunit.sourceforge.net/ 00:06:23.952 00:06:23.952 00:06:23.952 Suite: app_suite 00:06:23.952 Test: test_create_reactor ...passed 00:06:23.952 Test: test_init_reactors ...passed 00:06:23.952 Test: test_event_call ...passed 00:06:23.952 Test: test_schedule_thread ...passed 00:06:23.952 Test: test_reschedule_thread ...passed 00:06:23.952 Test: test_bind_thread ...passed 00:06:23.952 Test: test_for_each_reactor ...passed 00:06:23.952 Test: test_reactor_stats ...passed 00:06:23.952 Test: test_scheduler ...passed 00:06:23.952 Test: test_governor ...passed 00:06:23.952 00:06:23.952 Run Summary: Type Total Ran Passed Failed Inactive 00:06:23.952 suites 1 1 n/a 0 0 00:06:23.952 tests 10 10 10 0 0 00:06:23.952 asserts 344 344 344 0 n/a 00:06:23.952 00:06:23.952 Elapsed time = 0.030 seconds 00:06:24.211 00:06:24.211 real 0m0.099s 00:06:24.211 user 0m0.056s 00:06:24.211 sys 0m0.043s 00:06:24.211 21:28:44 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:24.211 ************************************ 00:06:24.211 END TEST unittest_event 00:06:24.211 21:28:44 -- common/autotest_common.sh@10 -- # set +x 00:06:24.211 ************************************ 00:06:24.211 21:28:44 -- unit/unittest.sh@209 -- # uname -s 00:06:24.211 21:28:44 -- unit/unittest.sh@209 -- # '[' Linux = Linux ']' 00:06:24.211 21:28:44 -- unit/unittest.sh@210 -- # run_test unittest_ftl unittest_ftl 00:06:24.211 21:28:44 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:24.211 21:28:44 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:24.211 21:28:44 -- common/autotest_common.sh@10 -- # set +x 00:06:24.211 ************************************ 00:06:24.211 START TEST unittest_ftl 00:06:24.211 ************************************ 00:06:24.211 21:28:44 -- common/autotest_common.sh@1114 -- # unittest_ftl 00:06:24.211 21:28:44 -- unit/unittest.sh@55 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/ftl/ftl_band.c/ftl_band_ut 00:06:24.211 00:06:24.211 00:06:24.211 CUnit - A unit testing framework for C - Version 2.1-3 00:06:24.211 http://cunit.sourceforge.net/ 00:06:24.211 00:06:24.211 00:06:24.211 Suite: ftl_band_suite 00:06:24.211 Test: test_band_block_offset_from_addr_base ...passed 00:06:24.211 Test: test_band_block_offset_from_addr_offset ...passed 00:06:24.211 Test: test_band_addr_from_block_offset ...passed 00:06:24.211 Test: test_band_set_addr ...passed 00:06:24.211 Test: test_invalidate_addr ...passed 00:06:24.472 Test: test_next_xfer_addr ...passed 00:06:24.472 00:06:24.472 Run Summary: Type Total Ran Passed Failed Inactive 00:06:24.472 suites 1 1 n/a 0 0 00:06:24.472 tests 6 6 6 0 0 00:06:24.472 asserts 30356 30356 30356 0 n/a 00:06:24.472 
00:06:24.472 Elapsed time = 0.194 seconds 00:06:24.472 21:28:44 -- unit/unittest.sh@56 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/ftl/ftl_bitmap.c/ftl_bitmap_ut 00:06:24.472 00:06:24.472 00:06:24.472 CUnit - A unit testing framework for C - Version 2.1-3 00:06:24.472 http://cunit.sourceforge.net/ 00:06:24.472 00:06:24.472 00:06:24.472 Suite: ftl_bitmap 00:06:24.472 Test: test_ftl_bitmap_create ...[2024-12-06 21:28:44.789373] /home/vagrant/spdk_repo/spdk/lib/ftl/utils/ftl_bitmap.c: 52:ftl_bitmap_create: *ERROR*: Buffer for bitmap must be aligned to 8 bytes 00:06:24.472 passed 00:06:24.472 Test: test_ftl_bitmap_get ...[2024-12-06 21:28:44.789663] /home/vagrant/spdk_repo/spdk/lib/ftl/utils/ftl_bitmap.c: 58:ftl_bitmap_create: *ERROR*: Size of buffer for bitmap must be divisible by 8 bytes 00:06:24.472 passed 00:06:24.472 Test: test_ftl_bitmap_set ...passed 00:06:24.472 Test: test_ftl_bitmap_clear ...passed 00:06:24.472 Test: test_ftl_bitmap_find_first_set ...passed 00:06:24.472 Test: test_ftl_bitmap_find_first_clear ...passed 00:06:24.472 Test: test_ftl_bitmap_count_set ...passed 00:06:24.472 00:06:24.472 Run Summary: Type Total Ran Passed Failed Inactive 00:06:24.472 suites 1 1 n/a 0 0 00:06:24.472 tests 7 7 7 0 0 00:06:24.472 asserts 137 137 137 0 n/a 00:06:24.472 00:06:24.472 Elapsed time = 0.001 seconds 00:06:24.472 21:28:44 -- unit/unittest.sh@57 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/ftl/ftl_io.c/ftl_io_ut 00:06:24.472 00:06:24.472 00:06:24.472 CUnit - A unit testing framework for C - Version 2.1-3 00:06:24.472 http://cunit.sourceforge.net/ 00:06:24.472 00:06:24.472 00:06:24.472 Suite: ftl_io_suite 00:06:24.472 Test: test_completion ...passed 00:06:24.472 Test: test_multiple_ios ...passed 00:06:24.472 00:06:24.472 Run Summary: Type Total Ran Passed Failed Inactive 00:06:24.472 suites 1 1 n/a 0 0 00:06:24.472 tests 2 2 2 0 0 00:06:24.472 asserts 47 47 47 0 n/a 00:06:24.472 00:06:24.472 Elapsed time = 0.005 seconds 00:06:24.472 21:28:44 -- unit/unittest.sh@58 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/ftl/ftl_mngt/ftl_mngt_ut 00:06:24.472 00:06:24.472 00:06:24.472 CUnit - A unit testing framework for C - Version 2.1-3 00:06:24.472 http://cunit.sourceforge.net/ 00:06:24.472 00:06:24.472 00:06:24.472 Suite: ftl_mngt 00:06:24.472 Test: test_next_step ...passed 00:06:24.472 Test: test_continue_step ...passed 00:06:24.472 Test: test_get_func_and_step_cntx_alloc ...passed 00:06:24.472 Test: test_fail_step ...passed 00:06:24.472 Test: test_mngt_call_and_call_rollback ...passed 00:06:24.472 Test: test_nested_process_failure ...passed 00:06:24.472 00:06:24.472 Run Summary: Type Total Ran Passed Failed Inactive 00:06:24.472 suites 1 1 n/a 0 0 00:06:24.472 tests 6 6 6 0 0 00:06:24.472 asserts 176 176 176 0 n/a 00:06:24.472 00:06:24.472 Elapsed time = 0.002 seconds 00:06:24.472 21:28:44 -- unit/unittest.sh@59 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/ftl/ftl_mempool.c/ftl_mempool_ut 00:06:24.472 00:06:24.472 00:06:24.472 CUnit - A unit testing framework for C - Version 2.1-3 00:06:24.472 http://cunit.sourceforge.net/ 00:06:24.472 00:06:24.472 00:06:24.472 Suite: ftl_mempool 00:06:24.472 Test: test_ftl_mempool_create ...passed 00:06:24.472 Test: test_ftl_mempool_get_put ...passed 00:06:24.472 00:06:24.472 Run Summary: Type Total Ran Passed Failed Inactive 00:06:24.472 suites 1 1 n/a 0 0 00:06:24.472 tests 2 2 2 0 0 00:06:24.472 asserts 36 36 36 0 n/a 00:06:24.472 00:06:24.472 Elapsed time = 0.000 seconds 00:06:24.472 21:28:44 -- unit/unittest.sh@60 -- # 
/home/vagrant/spdk_repo/spdk/test/unit/lib/ftl/ftl_l2p/ftl_l2p_ut 00:06:24.472 00:06:24.472 00:06:24.472 CUnit - A unit testing framework for C - Version 2.1-3 00:06:24.472 http://cunit.sourceforge.net/ 00:06:24.472 00:06:24.472 00:06:24.472 Suite: ftl_addr64_suite 00:06:24.472 Test: test_addr_cached ...passed 00:06:24.472 00:06:24.472 Run Summary: Type Total Ran Passed Failed Inactive 00:06:24.472 suites 1 1 n/a 0 0 00:06:24.472 tests 1 1 1 0 0 00:06:24.472 asserts 1536 1536 1536 0 n/a 00:06:24.472 00:06:24.472 Elapsed time = 0.000 seconds 00:06:24.472 21:28:44 -- unit/unittest.sh@61 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/ftl/ftl_sb/ftl_sb_ut 00:06:24.472 00:06:24.472 00:06:24.472 CUnit - A unit testing framework for C - Version 2.1-3 00:06:24.472 http://cunit.sourceforge.net/ 00:06:24.472 00:06:24.472 00:06:24.472 Suite: ftl_sb 00:06:24.472 Test: test_sb_crc_v2 ...passed 00:06:24.472 Test: test_sb_crc_v3 ...passed 00:06:24.472 Test: test_sb_v3_md_layout ...[2024-12-06 21:28:44.956791] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 143:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Missing regions 00:06:24.472 [2024-12-06 21:28:44.957284] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 131:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Buffer overflow 00:06:24.472 [2024-12-06 21:28:44.957336] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 115:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Buffer overflow 00:06:24.472 [2024-12-06 21:28:44.957367] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 115:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Buffer overflow 00:06:24.472 [2024-12-06 21:28:44.957693] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 125:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Looping regions found 00:06:24.472 [2024-12-06 21:28:44.957739] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 93:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Unsupported MD region type found 00:06:24.472 [2024-12-06 21:28:44.957773] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 88:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Invalid MD region type found 00:06:24.472 [2024-12-06 21:28:44.957942] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 88:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Invalid MD region type found 00:06:24.472 [2024-12-06 21:28:44.958171] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 125:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Looping regions found 00:06:24.472 [2024-12-06 21:28:44.958219] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 105:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Multiple/looping regions found 00:06:24.472 passed 00:06:24.472 Test: test_sb_v5_md_layout ...[2024-12-06 21:28:44.958697] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 105:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Multiple/looping regions found 00:06:24.472 passed 00:06:24.472 00:06:24.472 Run Summary: Type Total Ran Passed Failed Inactive 00:06:24.472 suites 1 1 n/a 0 0 00:06:24.472 tests 4 4 4 0 0 00:06:24.472 asserts 148 148 148 0 n/a 00:06:24.472 00:06:24.472 Elapsed time = 0.004 seconds 00:06:24.735 21:28:44 -- unit/unittest.sh@62 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/ftl/ftl_layout_upgrade/ftl_layout_upgrade_ut 00:06:24.735 00:06:24.735 00:06:24.735 CUnit - A unit testing framework 
for C - Version 2.1-3 00:06:24.735 http://cunit.sourceforge.net/ 00:06:24.735 00:06:24.735 00:06:24.735 Suite: ftl_layout_upgrade 00:06:24.735 Test: test_l2p_upgrade ...passed 00:06:24.735 00:06:24.735 Run Summary: Type Total Ran Passed Failed Inactive 00:06:24.735 suites 1 1 n/a 0 0 00:06:24.735 tests 1 1 1 0 0 00:06:24.735 asserts 140 140 140 0 n/a 00:06:24.735 00:06:24.735 Elapsed time = 0.001 seconds 00:06:24.735 00:06:24.735 real 0m0.490s 00:06:24.735 user 0m0.203s 00:06:24.735 sys 0m0.285s 00:06:24.735 21:28:44 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:24.735 21:28:44 -- common/autotest_common.sh@10 -- # set +x 00:06:24.735 ************************************ 00:06:24.735 END TEST unittest_ftl 00:06:24.735 ************************************ 00:06:24.735 21:28:45 -- unit/unittest.sh@213 -- # run_test unittest_accel /home/vagrant/spdk_repo/spdk/test/unit/lib/accel/accel.c/accel_ut 00:06:24.735 21:28:45 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:24.735 21:28:45 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:24.735 21:28:45 -- common/autotest_common.sh@10 -- # set +x 00:06:24.735 ************************************ 00:06:24.735 START TEST unittest_accel 00:06:24.735 ************************************ 00:06:24.735 21:28:45 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/accel/accel.c/accel_ut 00:06:24.735 00:06:24.735 00:06:24.735 CUnit - A unit testing framework for C - Version 2.1-3 00:06:24.735 http://cunit.sourceforge.net/ 00:06:24.735 00:06:24.735 00:06:24.735 Suite: accel_sequence 00:06:24.735 Test: test_sequence_fill_copy ...passed 00:06:24.735 Test: test_sequence_abort ...passed 00:06:24.735 Test: test_sequence_append_error ...passed 00:06:24.735 Test: test_sequence_completion_error ...[2024-12-06 21:28:45.074173] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c:1926:accel_sequence_task_cb: *ERROR*: Failed to execute fill operation, sequence: 0x79b2a099b7c0 00:06:24.735 [2024-12-06 21:28:45.074427] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c:1926:accel_sequence_task_cb: *ERROR*: Failed to execute decompress operation, sequence: 0x79b2a099b7c0 00:06:24.735 [2024-12-06 21:28:45.074533] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c:1836:accel_process_sequence: *ERROR*: Failed to submit fill operation, sequence: 0x79b2a099b7c0 00:06:24.735 [2024-12-06 21:28:45.074596] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c:1836:accel_process_sequence: *ERROR*: Failed to submit decompress operation, sequence: 0x79b2a099b7c0 00:06:24.735 passed 00:06:24.735 Test: test_sequence_decompress ...passed 00:06:24.735 Test: test_sequence_reverse ...passed 00:06:24.735 Test: test_sequence_copy_elision ...passed 00:06:24.735 Test: test_sequence_accel_buffers ...passed 00:06:24.735 Test: test_sequence_memory_domain ...[2024-12-06 21:28:45.087911] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c:1728:accel_task_pull_data: *ERROR*: Failed to pull data from memory domain: UT_DMA, rc: -7 00:06:24.735 [2024-12-06 21:28:45.088119] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c:1767:accel_task_push_data: *ERROR*: Failed to push data to memory domain: UT_DMA, rc: -98 00:06:24.735 passed 00:06:24.735 Test: test_sequence_module_memory_domain ...passed 00:06:24.735 Test: test_sequence_crypto ...passed 00:06:24.735 Test: test_sequence_driver ...[2024-12-06 21:28:45.095926] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c:1875:accel_process_sequence: *ERROR*: Failed to execute sequence: 0x79b29dd147c0 using driver: ut 00:06:24.735 
[2024-12-06 21:28:45.096019] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c:1939:accel_sequence_task_cb: *ERROR*: Failed to execute fill operation, sequence: 0x79b29dd147c0 through driver: ut 00:06:24.735 passed 00:06:24.735 Test: test_sequence_same_iovs ...passed 00:06:24.735 Test: test_sequence_crc32 ...passed 00:06:24.735 Suite: accel 00:06:24.735 Test: test_spdk_accel_task_complete ...passed 00:06:24.735 Test: test_get_task ...passed 00:06:24.736 Test: test_spdk_accel_submit_copy ...passed 00:06:24.736 Test: test_spdk_accel_submit_dualcast ...[2024-12-06 21:28:45.102013] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c: 432:spdk_accel_submit_dualcast: *ERROR*: Dualcast requires 4K alignment on dst addresses 00:06:24.736 [2024-12-06 21:28:45.102072] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c: 432:spdk_accel_submit_dualcast: *ERROR*: Dualcast requires 4K alignment on dst addresses 00:06:24.736 passed 00:06:24.736 Test: test_spdk_accel_submit_compare ...passed 00:06:24.736 Test: test_spdk_accel_submit_fill ...passed 00:06:24.736 Test: test_spdk_accel_submit_crc32c ...passed 00:06:24.736 Test: test_spdk_accel_submit_crc32cv ...passed 00:06:24.736 Test: test_spdk_accel_submit_copy_crc32c ...passed 00:06:24.736 Test: test_spdk_accel_submit_xor ...passed 00:06:24.736 Test: test_spdk_accel_module_find_by_name ...passed 00:06:24.736 Test: test_spdk_accel_module_register ...passed 00:06:24.736 00:06:24.736 Run Summary: Type Total Ran Passed Failed Inactive 00:06:24.736 suites 2 2 n/a 0 0 00:06:24.736 tests 26 26 26 0 0 00:06:24.736 asserts 831 831 831 0 n/a 00:06:24.736 00:06:24.736 Elapsed time = 0.042 seconds 00:06:24.736 00:06:24.736 real 0m0.081s 00:06:24.736 user 0m0.042s 00:06:24.736 sys 0m0.040s 00:06:24.736 21:28:45 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:24.736 21:28:45 -- common/autotest_common.sh@10 -- # set +x 00:06:24.736 ************************************ 00:06:24.736 END TEST unittest_accel 00:06:24.736 ************************************ 00:06:24.736 21:28:45 -- unit/unittest.sh@214 -- # run_test unittest_ioat /home/vagrant/spdk_repo/spdk/test/unit/lib/ioat/ioat.c/ioat_ut 00:06:24.736 21:28:45 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:24.736 21:28:45 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:24.736 21:28:45 -- common/autotest_common.sh@10 -- # set +x 00:06:24.736 ************************************ 00:06:24.736 START TEST unittest_ioat 00:06:24.736 ************************************ 00:06:24.736 21:28:45 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/ioat/ioat.c/ioat_ut 00:06:24.736 00:06:24.736 00:06:24.736 CUnit - A unit testing framework for C - Version 2.1-3 00:06:24.736 http://cunit.sourceforge.net/ 00:06:24.736 00:06:24.736 00:06:24.736 Suite: ioat 00:06:24.736 Test: ioat_state_check ...passed 00:06:24.736 00:06:24.736 Run Summary: Type Total Ran Passed Failed Inactive 00:06:24.736 suites 1 1 n/a 0 0 00:06:24.736 tests 1 1 1 0 0 00:06:24.736 asserts 32 32 32 0 n/a 00:06:24.736 00:06:24.736 Elapsed time = 0.000 seconds 00:06:24.736 00:06:24.736 real 0m0.028s 00:06:24.736 user 0m0.011s 00:06:24.736 sys 0m0.017s 00:06:24.736 21:28:45 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:24.736 21:28:45 -- common/autotest_common.sh@10 -- # set +x 00:06:24.736 ************************************ 00:06:24.736 END TEST unittest_ioat 00:06:24.736 ************************************ 00:06:25.000 21:28:45 -- unit/unittest.sh@215 -- # grep -q '#define SPDK_CONFIG_IDXD 1' 
/home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:06:25.000 21:28:45 -- unit/unittest.sh@216 -- # run_test unittest_idxd_user /home/vagrant/spdk_repo/spdk/test/unit/lib/idxd/idxd_user.c/idxd_user_ut 00:06:25.000 21:28:45 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:25.000 21:28:45 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:25.000 21:28:45 -- common/autotest_common.sh@10 -- # set +x 00:06:25.000 ************************************ 00:06:25.000 START TEST unittest_idxd_user 00:06:25.000 ************************************ 00:06:25.000 21:28:45 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/idxd/idxd_user.c/idxd_user_ut 00:06:25.000 00:06:25.000 00:06:25.000 CUnit - A unit testing framework for C - Version 2.1-3 00:06:25.000 http://cunit.sourceforge.net/ 00:06:25.000 00:06:25.000 00:06:25.000 Suite: idxd_user 00:06:25.000 Test: test_idxd_wait_cmd ...[2024-12-06 21:28:45.275598] /home/vagrant/spdk_repo/spdk/lib/idxd/idxd_user.c: 52:idxd_wait_cmd: *ERROR*: Command status reg reports error 0x1 00:06:25.000 passed 00:06:25.000 Test: test_idxd_reset_dev ...[2024-12-06 21:28:45.275797] /home/vagrant/spdk_repo/spdk/lib/idxd/idxd_user.c: 46:idxd_wait_cmd: *ERROR*: Command timeout, waited 1 00:06:25.000 [2024-12-06 21:28:45.275875] /home/vagrant/spdk_repo/spdk/lib/idxd/idxd_user.c: 52:idxd_wait_cmd: *ERROR*: Command status reg reports error 0x1 00:06:25.000 passed 00:06:25.000 Test: test_idxd_group_config ...passed 00:06:25.000 Test: test_idxd_wq_config ...passed 00:06:25.000 00:06:25.000 [2024-12-06 21:28:45.275905] /home/vagrant/spdk_repo/spdk/lib/idxd/idxd_user.c: 132:idxd_reset_dev: *ERROR*: Error resetting device 4294967274 00:06:25.000 Run Summary: Type Total Ran Passed Failed Inactive 00:06:25.000 suites 1 1 n/a 0 0 00:06:25.000 tests 4 4 4 0 0 00:06:25.000 asserts 20 20 20 0 n/a 00:06:25.000 00:06:25.000 Elapsed time = 0.001 seconds 00:06:25.000 00:06:25.000 real 0m0.033s 00:06:25.000 user 0m0.012s 00:06:25.000 sys 0m0.022s 00:06:25.000 21:28:45 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:25.000 21:28:45 -- common/autotest_common.sh@10 -- # set +x 00:06:25.000 ************************************ 00:06:25.000 END TEST unittest_idxd_user 00:06:25.000 ************************************ 00:06:25.000 21:28:45 -- unit/unittest.sh@218 -- # run_test unittest_iscsi unittest_iscsi 00:06:25.000 21:28:45 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:25.000 21:28:45 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:25.000 21:28:45 -- common/autotest_common.sh@10 -- # set +x 00:06:25.000 ************************************ 00:06:25.000 START TEST unittest_iscsi 00:06:25.000 ************************************ 00:06:25.000 21:28:45 -- common/autotest_common.sh@1114 -- # unittest_iscsi 00:06:25.000 21:28:45 -- unit/unittest.sh@66 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/iscsi/conn.c/conn_ut 00:06:25.000 00:06:25.000 00:06:25.000 CUnit - A unit testing framework for C - Version 2.1-3 00:06:25.000 http://cunit.sourceforge.net/ 00:06:25.000 00:06:25.000 00:06:25.000 Suite: conn_suite 00:06:25.000 Test: read_task_split_in_order_case ...passed 00:06:25.000 Test: read_task_split_reverse_order_case ...passed 00:06:25.000 Test: propagate_scsi_error_status_for_split_read_tasks ...passed 00:06:25.000 Test: process_non_read_task_completion_test ...passed 00:06:25.000 Test: free_tasks_on_connection ...passed 00:06:25.000 Test: free_tasks_with_queued_datain ...passed 00:06:25.000 Test: 
abort_queued_datain_task_test ...passed 00:06:25.000 Test: abort_queued_datain_tasks_test ...passed 00:06:25.000 00:06:25.000 Run Summary: Type Total Ran Passed Failed Inactive 00:06:25.000 suites 1 1 n/a 0 0 00:06:25.000 tests 8 8 8 0 0 00:06:25.000 asserts 230 230 230 0 n/a 00:06:25.000 00:06:25.000 Elapsed time = 0.000 seconds 00:06:25.000 21:28:45 -- unit/unittest.sh@67 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/iscsi/param.c/param_ut 00:06:25.000 00:06:25.000 00:06:25.000 CUnit - A unit testing framework for C - Version 2.1-3 00:06:25.000 http://cunit.sourceforge.net/ 00:06:25.000 00:06:25.000 00:06:25.000 Suite: iscsi_suite 00:06:25.000 Test: param_negotiation_test ...passed 00:06:25.000 Test: list_negotiation_test ...passed 00:06:25.000 Test: parse_valid_test ...passed 00:06:25.000 Test: parse_invalid_test ...[2024-12-06 21:28:45.398336] /home/vagrant/spdk_repo/spdk/lib/iscsi/param.c: 202:iscsi_parse_param: *ERROR*: '=' not found 00:06:25.000 [2024-12-06 21:28:45.398625] /home/vagrant/spdk_repo/spdk/lib/iscsi/param.c: 202:iscsi_parse_param: *ERROR*: '=' not found 00:06:25.000 [2024-12-06 21:28:45.398678] /home/vagrant/spdk_repo/spdk/lib/iscsi/param.c: 208:iscsi_parse_param: *ERROR*: Empty key 00:06:25.000 [2024-12-06 21:28:45.398726] /home/vagrant/spdk_repo/spdk/lib/iscsi/param.c: 248:iscsi_parse_param: *ERROR*: Overflow Val 8193 00:06:25.000 [2024-12-06 21:28:45.398834] /home/vagrant/spdk_repo/spdk/lib/iscsi/param.c: 248:iscsi_parse_param: *ERROR*: Overflow Val 256 00:06:25.000 [2024-12-06 21:28:45.398896] /home/vagrant/spdk_repo/spdk/lib/iscsi/param.c: 215:iscsi_parse_param: *ERROR*: Key name length is bigger than 63 00:06:25.000 passed 00:06:25.000 00:06:25.000 Run Summary: Type Total Ran Passed Failed Inactive 00:06:25.000 suites 1 1 n/a 0 0 00:06:25.000 tests 4 4 4 0 0 00:06:25.000 asserts 161 161 161 0 n/a 00:06:25.000 00:06:25.000 Elapsed time = 0.005 seconds 00:06:25.000 [2024-12-06 21:28:45.398965] /home/vagrant/spdk_repo/spdk/lib/iscsi/param.c: 229:iscsi_parse_param: *ERROR*: Duplicated Key B 00:06:25.000 21:28:45 -- unit/unittest.sh@68 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/iscsi/tgt_node.c/tgt_node_ut 00:06:25.000 00:06:25.000 00:06:25.000 CUnit - A unit testing framework for C - Version 2.1-3 00:06:25.000 http://cunit.sourceforge.net/ 00:06:25.000 00:06:25.000 00:06:25.000 Suite: iscsi_target_node_suite 00:06:25.000 Test: add_lun_test_cases ...[2024-12-06 21:28:45.427665] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1248:iscsi_tgt_node_add_lun: *ERROR*: Target has active connections (count=1) 00:06:25.000 [2024-12-06 21:28:45.428241] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1254:iscsi_tgt_node_add_lun: *ERROR*: Specified LUN ID (-2) is negative 00:06:25.000 [2024-12-06 21:28:45.428313] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1260:iscsi_tgt_node_add_lun: *ERROR*: SCSI device is not found 00:06:25.000 passed 00:06:25.000 Test: allow_any_allowed ...passed 00:06:25.000 Test: allow_ipv6_allowed ...[2024-12-06 21:28:45.428349] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1260:iscsi_tgt_node_add_lun: *ERROR*: SCSI device is not found 00:06:25.000 [2024-12-06 21:28:45.428383] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1266:iscsi_tgt_node_add_lun: *ERROR*: spdk_scsi_dev_add_lun failed 00:06:25.000 passed 00:06:25.000 Test: allow_ipv6_denied ...passed 00:06:25.000 Test: allow_ipv6_invalid ...passed 00:06:25.000 Test: allow_ipv4_allowed ...passed 00:06:25.000 Test: allow_ipv4_denied ...passed 00:06:25.000 Test: allow_ipv4_invalid 
...passed 00:06:25.000 Test: node_access_allowed ...passed 00:06:25.000 Test: node_access_denied_by_empty_netmask ...passed 00:06:25.000 Test: node_access_multi_initiator_groups_cases ...passed 00:06:25.000 Test: allow_iscsi_name_multi_maps_case ...passed 00:06:25.000 Test: chap_param_test_cases ...[2024-12-06 21:28:45.429928] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1035:iscsi_check_chap_params: *ERROR*: Invalid combination of CHAP params (d=1,r=1,m=0) 00:06:25.001 [2024-12-06 21:28:45.430330] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1035:iscsi_check_chap_params: *ERROR*: Invalid combination of CHAP params (d=0,r=0,m=1) 00:06:25.001 passed 00:06:25.001 00:06:25.001 [2024-12-06 21:28:45.430381] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1035:iscsi_check_chap_params: *ERROR*: Invalid combination of CHAP params (d=1,r=0,m=1) 00:06:25.001 [2024-12-06 21:28:45.430418] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1035:iscsi_check_chap_params: *ERROR*: Invalid combination of CHAP params (d=1,r=1,m=1) 00:06:25.001 [2024-12-06 21:28:45.430482] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1026:iscsi_check_chap_params: *ERROR*: Invalid auth group ID (-1) 00:06:25.001 Run Summary: Type Total Ran Passed Failed Inactive 00:06:25.001 suites 1 1 n/a 0 0 00:06:25.001 tests 13 13 13 0 0 00:06:25.001 asserts 50 50 50 0 n/a 00:06:25.001 00:06:25.001 Elapsed time = 0.003 seconds 00:06:25.001 21:28:45 -- unit/unittest.sh@69 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/iscsi/iscsi.c/iscsi_ut 00:06:25.001 00:06:25.001 00:06:25.001 CUnit - A unit testing framework for C - Version 2.1-3 00:06:25.001 http://cunit.sourceforge.net/ 00:06:25.001 00:06:25.001 00:06:25.001 Suite: iscsi_suite 00:06:25.001 Test: op_login_check_target_test ...passed 00:06:25.001 Test: op_login_session_normal_test ...[2024-12-06 21:28:45.470201] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1434:iscsi_op_login_check_target: *ERROR*: access denied 00:06:25.001 [2024-12-06 21:28:45.470496] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1626:iscsi_op_login_session_normal: *ERROR*: TargetName is empty 00:06:25.001 [2024-12-06 21:28:45.470545] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1626:iscsi_op_login_session_normal: *ERROR*: TargetName is empty 00:06:25.001 [2024-12-06 21:28:45.470576] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1626:iscsi_op_login_session_normal: *ERROR*: TargetName is empty 00:06:25.001 [2024-12-06 21:28:45.470624] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c: 695:append_iscsi_sess: *ERROR*: spdk_get_iscsi_sess_by_tsih failed 00:06:25.001 passed 00:06:25.001 Test: maxburstlength_test ...[2024-12-06 21:28:45.470683] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1467:iscsi_op_login_check_session: *ERROR*: isid=0, tsih=256, cid=0:spdk_append_iscsi_sess() failed 00:06:25.001 [2024-12-06 21:28:45.470748] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c: 702:append_iscsi_sess: *ERROR*: no MCS session for init port name=iqn.2017-11.spdk.io:i0001, tsih=256, cid=0 00:06:25.001 [2024-12-06 21:28:45.470774] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1467:iscsi_op_login_check_session: *ERROR*: isid=0, tsih=256, cid=0:spdk_append_iscsi_sess() failed 00:06:25.001 passed 00:06:25.001 Test: underflow_for_read_transfer_test ...[2024-12-06 21:28:45.471048] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4211:iscsi_pdu_hdr_op_data: *ERROR*: the dataout pdu data length is larger than the value sent by R2T PDU 00:06:25.001 [2024-12-06 21:28:45.471113] 
/home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4548:iscsi_pdu_hdr_handle: *ERROR*: processing PDU header (opcode=5) failed on NULL(NULL) 00:06:25.001 passed 00:06:25.001 Test: underflow_for_zero_read_transfer_test ...passed 00:06:25.001 Test: underflow_for_request_sense_test ...passed 00:06:25.001 Test: underflow_for_check_condition_test ...passed 00:06:25.001 Test: add_transfer_task_test ...passed 00:06:25.001 Test: get_transfer_task_test ...passed 00:06:25.001 Test: del_transfer_task_test ...passed 00:06:25.001 Test: clear_all_transfer_tasks_test ...passed 00:06:25.001 Test: build_iovs_test ...passed 00:06:25.001 Test: build_iovs_with_md_test ...passed 00:06:25.001 Test: pdu_hdr_op_login_test ...[2024-12-06 21:28:45.472795] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1251:iscsi_op_login_rsp_init: *ERROR*: transit error 00:06:25.001 [2024-12-06 21:28:45.472901] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1258:iscsi_op_login_rsp_init: *ERROR*: unsupported version min 1/max 0, expecting 0 00:06:25.001 [2024-12-06 21:28:45.472964] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1272:iscsi_op_login_rsp_init: *ERROR*: Received reserved NSG code: 2 00:06:25.001 passed 00:06:25.001 Test: pdu_hdr_op_text_test ...[2024-12-06 21:28:45.473072] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:2240:iscsi_pdu_hdr_op_text: *ERROR*: data segment len(=69) > immediate data len(=68) 00:06:25.001 [2024-12-06 21:28:45.473131] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:2272:iscsi_pdu_hdr_op_text: *ERROR*: final and continue 00:06:25.001 passed 00:06:25.001 Test: pdu_hdr_op_logout_test ...[2024-12-06 21:28:45.473163] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:2285:iscsi_pdu_hdr_op_text: *ERROR*: The correct itt is 5679, and the current itt is 5678... 00:06:25.001 [2024-12-06 21:28:45.473249] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:2515:iscsi_pdu_hdr_op_logout: *ERROR*: Target can accept logout only with reason "close the session" on discovery session. 1 is not acceptable reason. 
00:06:25.001 passed 00:06:25.001 Test: pdu_hdr_op_scsi_test ...[2024-12-06 21:28:45.473360] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3336:iscsi_pdu_hdr_op_scsi: *ERROR*: ISCSI_OP_SCSI not allowed in discovery and invalid session 00:06:25.001 [2024-12-06 21:28:45.473396] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3336:iscsi_pdu_hdr_op_scsi: *ERROR*: ISCSI_OP_SCSI not allowed in discovery and invalid session 00:06:25.001 [2024-12-06 21:28:45.473460] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3364:iscsi_pdu_hdr_op_scsi: *ERROR*: Bidirectional CDB is not supported 00:06:25.001 [2024-12-06 21:28:45.473547] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3397:iscsi_pdu_hdr_op_scsi: *ERROR*: data segment len(=69) > immediate data len(=68) 00:06:25.001 [2024-12-06 21:28:45.473629] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3404:iscsi_pdu_hdr_op_scsi: *ERROR*: data segment len(=68) > task transfer len(=67) 00:06:25.001 passed 00:06:25.001 Test: pdu_hdr_op_task_mgmt_test ...[2024-12-06 21:28:45.473774] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3428:iscsi_pdu_hdr_op_scsi: *ERROR*: Reject scsi cmd with EDTL > 0 but (R | W) == 0 00:06:25.001 [2024-12-06 21:28:45.473874] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3605:iscsi_pdu_hdr_op_task: *ERROR*: ISCSI_OP_TASK not allowed in discovery and invalid session 00:06:25.001 [2024-12-06 21:28:45.473958] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3694:iscsi_pdu_hdr_op_task: *ERROR*: unsupported function 0 00:06:25.001 passed 00:06:25.001 Test: pdu_hdr_op_nopout_test ...[2024-12-06 21:28:45.474167] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3713:iscsi_pdu_hdr_op_nopout: *ERROR*: ISCSI_OP_NOPOUT not allowed in discovery session 00:06:25.001 [2024-12-06 21:28:45.474234] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3735:iscsi_pdu_hdr_op_nopout: *ERROR*: invalid transfer tag 0x4d3 00:06:25.001 [2024-12-06 21:28:45.474271] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3735:iscsi_pdu_hdr_op_nopout: *ERROR*: invalid transfer tag 0x4d3 00:06:25.001 [2024-12-06 21:28:45.474293] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3743:iscsi_pdu_hdr_op_nopout: *ERROR*: got NOPOUT ITT=0xffffffff, I=0 00:06:25.001 passed 00:06:25.001 Test: pdu_hdr_op_data_test ...[2024-12-06 21:28:45.474345] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4186:iscsi_pdu_hdr_op_data: *ERROR*: ISCSI_OP_SCSI_DATAOUT not allowed in discovery session 00:06:25.001 [2024-12-06 21:28:45.474397] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4203:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=0 00:06:25.001 [2024-12-06 21:28:45.474455] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4211:iscsi_pdu_hdr_op_data: *ERROR*: the dataout pdu data length is larger than the value sent by R2T PDU 00:06:25.001 [2024-12-06 21:28:45.474480] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4216:iscsi_pdu_hdr_op_data: *ERROR*: The r2t task tag is 0, and the dataout task tag is 1 00:06:25.001 [2024-12-06 21:28:45.474534] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4222:iscsi_pdu_hdr_op_data: *ERROR*: DataSN(1) exp=0 error 00:06:25.001 [2024-12-06 21:28:45.474583] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4233:iscsi_pdu_hdr_op_data: *ERROR*: offset(4096) error 00:06:25.001 [2024-12-06 21:28:45.474611] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4243:iscsi_pdu_hdr_op_data: *ERROR*: R2T burst(65536) > MaxBurstLength(65535) 00:06:25.001 passed 00:06:25.001 Test: empty_text_with_cbit_test ...passed 00:06:25.001 Test: pdu_payload_read_test ...[2024-12-06 
21:28:45.476734] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4631:iscsi_pdu_payload_read: *ERROR*: Data(65537) > MaxSegment(65536) 00:06:25.001 passed 00:06:25.001 Test: data_out_pdu_sequence_test ...passed 00:06:25.001 Test: immediate_data_and_data_out_pdu_sequence_test ...passed 00:06:25.001 00:06:25.001 Run Summary: Type Total Ran Passed Failed Inactive 00:06:25.001 suites 1 1 n/a 0 0 00:06:25.001 tests 24 24 24 0 0 00:06:25.001 asserts 150253 150253 150253 0 n/a 00:06:25.001 00:06:25.001 Elapsed time = 0.017 seconds 00:06:25.260 21:28:45 -- unit/unittest.sh@70 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/iscsi/init_grp.c/init_grp_ut 00:06:25.260 00:06:25.260 00:06:25.260 CUnit - A unit testing framework for C - Version 2.1-3 00:06:25.260 http://cunit.sourceforge.net/ 00:06:25.260 00:06:25.260 00:06:25.260 Suite: init_grp_suite 00:06:25.260 Test: create_initiator_group_success_case ...passed 00:06:25.260 Test: find_initiator_group_success_case ...passed 00:06:25.260 Test: register_initiator_group_twice_case ...passed 00:06:25.260 Test: add_initiator_name_success_case ...passed 00:06:25.260 Test: add_initiator_name_fail_case ...passed 00:06:25.260 Test: delete_all_initiator_names_success_case ...[2024-12-06 21:28:45.515998] /home/vagrant/spdk_repo/spdk/lib/iscsi/init_grp.c: 54:iscsi_init_grp_add_initiator: *ERROR*: > MAX_INITIATOR(=256) is not allowed 00:06:25.260 passed 00:06:25.260 Test: add_netmask_success_case ...passed 00:06:25.260 Test: add_netmask_fail_case ...passed 00:06:25.260 Test: delete_all_netmasks_success_case ...[2024-12-06 21:28:45.516725] /home/vagrant/spdk_repo/spdk/lib/iscsi/init_grp.c: 188:iscsi_init_grp_add_netmask: *ERROR*: > MAX_NETMASK(=256) is not allowed 00:06:25.260 passed 00:06:25.260 Test: initiator_name_overwrite_all_to_any_case ...passed 00:06:25.260 Test: netmask_overwrite_all_to_any_case ...passed 00:06:25.260 Test: add_delete_initiator_names_case ...passed 00:06:25.260 Test: add_duplicated_initiator_names_case ...passed 00:06:25.260 Test: delete_nonexisting_initiator_names_case ...passed 00:06:25.260 Test: add_delete_netmasks_case ...passed 00:06:25.260 Test: add_duplicated_netmasks_case ...passed 00:06:25.260 Test: delete_nonexisting_netmasks_case ...passed 00:06:25.260 00:06:25.260 Run Summary: Type Total Ran Passed Failed Inactive 00:06:25.260 suites 1 1 n/a 0 0 00:06:25.260 tests 17 17 17 0 0 00:06:25.260 asserts 108 108 108 0 n/a 00:06:25.260 00:06:25.260 Elapsed time = 0.002 seconds 00:06:25.260 21:28:45 -- unit/unittest.sh@71 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/iscsi/portal_grp.c/portal_grp_ut 00:06:25.260 00:06:25.260 00:06:25.260 CUnit - A unit testing framework for C - Version 2.1-3 00:06:25.260 http://cunit.sourceforge.net/ 00:06:25.260 00:06:25.260 00:06:25.260 Suite: portal_grp_suite 00:06:25.260 Test: portal_create_ipv4_normal_case ...passed 00:06:25.260 Test: portal_create_ipv6_normal_case ...passed 00:06:25.260 Test: portal_create_ipv4_wildcard_case ...passed 00:06:25.260 Test: portal_create_ipv6_wildcard_case ...passed 00:06:25.260 Test: portal_create_twice_case ...passed 00:06:25.260 Test: portal_grp_register_unregister_case ...[2024-12-06 21:28:45.548407] /home/vagrant/spdk_repo/spdk/lib/iscsi/portal_grp.c: 113:iscsi_portal_create: *ERROR*: portal (192.168.2.0, 3260) already exists 00:06:25.260 passed 00:06:25.260 Test: portal_grp_register_twice_case ...passed 00:06:25.260 Test: portal_grp_add_delete_case ...passed 00:06:25.260 Test: portal_grp_add_delete_twice_case ...passed 00:06:25.260 00:06:25.260 Run Summary: 
Type Total Ran Passed Failed Inactive 00:06:25.260 suites 1 1 n/a 0 0 00:06:25.260 tests 9 9 9 0 0 00:06:25.260 asserts 44 44 44 0 n/a 00:06:25.260 00:06:25.260 Elapsed time = 0.004 seconds 00:06:25.260 00:06:25.260 real 0m0.230s 00:06:25.260 user 0m0.114s 00:06:25.260 sys 0m0.119s 00:06:25.260 21:28:45 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:25.260 21:28:45 -- common/autotest_common.sh@10 -- # set +x 00:06:25.260 ************************************ 00:06:25.260 END TEST unittest_iscsi 00:06:25.260 ************************************ 00:06:25.260 21:28:45 -- unit/unittest.sh@219 -- # run_test unittest_json unittest_json 00:06:25.260 21:28:45 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:25.260 21:28:45 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:25.260 21:28:45 -- common/autotest_common.sh@10 -- # set +x 00:06:25.260 ************************************ 00:06:25.260 START TEST unittest_json 00:06:25.260 ************************************ 00:06:25.260 21:28:45 -- common/autotest_common.sh@1114 -- # unittest_json 00:06:25.260 21:28:45 -- unit/unittest.sh@75 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/json/json_parse.c/json_parse_ut 00:06:25.260 00:06:25.260 00:06:25.260 CUnit - A unit testing framework for C - Version 2.1-3 00:06:25.260 http://cunit.sourceforge.net/ 00:06:25.260 00:06:25.260 00:06:25.260 Suite: json 00:06:25.260 Test: test_parse_literal ...passed 00:06:25.260 Test: test_parse_string_simple ...passed 00:06:25.260 Test: test_parse_string_control_chars ...passed 00:06:25.260 Test: test_parse_string_utf8 ...passed 00:06:25.260 Test: test_parse_string_escapes_twochar ...passed 00:06:25.260 Test: test_parse_string_escapes_unicode ...passed 00:06:25.260 Test: test_parse_number ...passed 00:06:25.260 Test: test_parse_array ...passed 00:06:25.260 Test: test_parse_object ...passed 00:06:25.260 Test: test_parse_nesting ...passed 00:06:25.260 Test: test_parse_comment ...passed 00:06:25.260 00:06:25.260 Run Summary: Type Total Ran Passed Failed Inactive 00:06:25.260 suites 1 1 n/a 0 0 00:06:25.260 tests 11 11 11 0 0 00:06:25.260 asserts 1516 1516 1516 0 n/a 00:06:25.260 00:06:25.260 Elapsed time = 0.002 seconds 00:06:25.260 21:28:45 -- unit/unittest.sh@76 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/json/json_util.c/json_util_ut 00:06:25.260 00:06:25.260 00:06:25.260 CUnit - A unit testing framework for C - Version 2.1-3 00:06:25.260 http://cunit.sourceforge.net/ 00:06:25.260 00:06:25.260 00:06:25.260 Suite: json 00:06:25.260 Test: test_strequal ...passed 00:06:25.260 Test: test_num_to_uint16 ...passed 00:06:25.260 Test: test_num_to_int32 ...passed 00:06:25.260 Test: test_num_to_uint64 ...passed 00:06:25.260 Test: test_decode_object ...passed 00:06:25.260 Test: test_decode_array ...passed 00:06:25.260 Test: test_decode_bool ...passed 00:06:25.260 Test: test_decode_uint16 ...passed 00:06:25.260 Test: test_decode_int32 ...passed 00:06:25.260 Test: test_decode_uint32 ...passed 00:06:25.260 Test: test_decode_uint64 ...passed 00:06:25.260 Test: test_decode_string ...passed 00:06:25.260 Test: test_decode_uuid ...passed 00:06:25.260 Test: test_find ...passed 00:06:25.260 Test: test_find_array ...passed 00:06:25.260 Test: test_iterating ...passed 00:06:25.260 Test: test_free_object ...passed 00:06:25.260 00:06:25.260 Run Summary: Type Total Ran Passed Failed Inactive 00:06:25.260 suites 1 1 n/a 0 0 00:06:25.260 tests 17 17 17 0 0 00:06:25.260 asserts 236 236 236 0 n/a 00:06:25.260 00:06:25.260 Elapsed time = 0.001 seconds 00:06:25.260 
21:28:45 -- unit/unittest.sh@77 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/json/json_write.c/json_write_ut 00:06:25.260 00:06:25.260 00:06:25.260 CUnit - A unit testing framework for C - Version 2.1-3 00:06:25.260 http://cunit.sourceforge.net/ 00:06:25.260 00:06:25.260 00:06:25.260 Suite: json 00:06:25.260 Test: test_write_literal ...passed 00:06:25.260 Test: test_write_string_simple ...passed 00:06:25.260 Test: test_write_string_escapes ...passed 00:06:25.260 Test: test_write_string_utf16le ...passed 00:06:25.260 Test: test_write_number_int32 ...passed 00:06:25.260 Test: test_write_number_uint32 ...passed 00:06:25.260 Test: test_write_number_uint128 ...passed 00:06:25.260 Test: test_write_string_number_uint128 ...passed 00:06:25.260 Test: test_write_number_int64 ...passed 00:06:25.260 Test: test_write_number_uint64 ...passed 00:06:25.260 Test: test_write_number_double ...passed 00:06:25.260 Test: test_write_uuid ...passed 00:06:25.260 Test: test_write_array ...passed 00:06:25.261 Test: test_write_object ...passed 00:06:25.261 Test: test_write_nesting ...passed 00:06:25.261 Test: test_write_val ...passed 00:06:25.261 00:06:25.261 Run Summary: Type Total Ran Passed Failed Inactive 00:06:25.261 suites 1 1 n/a 0 0 00:06:25.261 tests 16 16 16 0 0 00:06:25.261 asserts 918 918 918 0 n/a 00:06:25.261 00:06:25.261 Elapsed time = 0.004 seconds 00:06:25.261 21:28:45 -- unit/unittest.sh@78 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/jsonrpc/jsonrpc_server.c/jsonrpc_server_ut 00:06:25.261 00:06:25.261 00:06:25.261 CUnit - A unit testing framework for C - Version 2.1-3 00:06:25.261 http://cunit.sourceforge.net/ 00:06:25.261 00:06:25.261 00:06:25.261 Suite: jsonrpc 00:06:25.261 Test: test_parse_request ...passed 00:06:25.261 Test: test_parse_request_streaming ...passed 00:06:25.261 00:06:25.261 Run Summary: Type Total Ran Passed Failed Inactive 00:06:25.261 suites 1 1 n/a 0 0 00:06:25.261 tests 2 2 2 0 0 00:06:25.261 asserts 289 289 289 0 n/a 00:06:25.261 00:06:25.261 Elapsed time = 0.004 seconds 00:06:25.261 00:06:25.261 real 0m0.133s 00:06:25.261 user 0m0.072s 00:06:25.261 sys 0m0.063s 00:06:25.261 21:28:45 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:25.261 21:28:45 -- common/autotest_common.sh@10 -- # set +x 00:06:25.261 ************************************ 00:06:25.261 END TEST unittest_json 00:06:25.261 ************************************ 00:06:25.519 21:28:45 -- unit/unittest.sh@220 -- # run_test unittest_rpc unittest_rpc 00:06:25.519 21:28:45 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:25.519 21:28:45 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:25.519 21:28:45 -- common/autotest_common.sh@10 -- # set +x 00:06:25.519 ************************************ 00:06:25.519 START TEST unittest_rpc 00:06:25.519 ************************************ 00:06:25.519 21:28:45 -- common/autotest_common.sh@1114 -- # unittest_rpc 00:06:25.519 21:28:45 -- unit/unittest.sh@82 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/rpc/rpc.c/rpc_ut 00:06:25.519 00:06:25.519 00:06:25.519 CUnit - A unit testing framework for C - Version 2.1-3 00:06:25.519 http://cunit.sourceforge.net/ 00:06:25.519 00:06:25.519 00:06:25.519 Suite: rpc 00:06:25.519 Test: test_jsonrpc_handler ...passed 00:06:25.519 Test: test_spdk_rpc_is_method_allowed ...passed 00:06:25.519 Test: test_rpc_get_methods ...passed 00:06:25.519 Test: test_rpc_spdk_get_version ...passed 00:06:25.519 Test: test_spdk_rpc_listen_close ...passed 00:06:25.519 00:06:25.519 [2024-12-06 21:28:45.814110] 
/home/vagrant/spdk_repo/spdk/lib/rpc/rpc.c: 378:rpc_get_methods: *ERROR*: spdk_json_decode_object failed 00:06:25.519 Run Summary: Type Total Ran Passed Failed Inactive 00:06:25.519 suites 1 1 n/a 0 0 00:06:25.519 tests 5 5 5 0 0 00:06:25.519 asserts 20 20 20 0 n/a 00:06:25.519 00:06:25.519 Elapsed time = 0.000 seconds 00:06:25.519 00:06:25.519 real 0m0.030s 00:06:25.519 user 0m0.017s 00:06:25.519 sys 0m0.013s 00:06:25.519 21:28:45 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:25.519 21:28:45 -- common/autotest_common.sh@10 -- # set +x 00:06:25.519 ************************************ 00:06:25.519 END TEST unittest_rpc 00:06:25.519 ************************************ 00:06:25.519 21:28:45 -- unit/unittest.sh@221 -- # run_test unittest_notify /home/vagrant/spdk_repo/spdk/test/unit/lib/notify/notify.c/notify_ut 00:06:25.519 21:28:45 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:25.519 21:28:45 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:25.519 21:28:45 -- common/autotest_common.sh@10 -- # set +x 00:06:25.519 ************************************ 00:06:25.519 START TEST unittest_notify 00:06:25.519 ************************************ 00:06:25.519 21:28:45 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/notify/notify.c/notify_ut 00:06:25.519 00:06:25.519 00:06:25.519 CUnit - A unit testing framework for C - Version 2.1-3 00:06:25.519 http://cunit.sourceforge.net/ 00:06:25.519 00:06:25.519 00:06:25.519 Suite: app_suite 00:06:25.520 Test: notify ...passed 00:06:25.520 00:06:25.520 Run Summary: Type Total Ran Passed Failed Inactive 00:06:25.520 suites 1 1 n/a 0 0 00:06:25.520 tests 1 1 1 0 0 00:06:25.520 asserts 13 13 13 0 n/a 00:06:25.520 00:06:25.520 Elapsed time = 0.000 seconds 00:06:25.520 00:06:25.520 real 0m0.027s 00:06:25.520 user 0m0.013s 00:06:25.520 sys 0m0.014s 00:06:25.520 21:28:45 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:25.520 21:28:45 -- common/autotest_common.sh@10 -- # set +x 00:06:25.520 ************************************ 00:06:25.520 END TEST unittest_notify 00:06:25.520 ************************************ 00:06:25.520 21:28:45 -- unit/unittest.sh@222 -- # run_test unittest_nvme unittest_nvme 00:06:25.520 21:28:45 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:25.520 21:28:45 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:25.520 21:28:45 -- common/autotest_common.sh@10 -- # set +x 00:06:25.520 ************************************ 00:06:25.520 START TEST unittest_nvme 00:06:25.520 ************************************ 00:06:25.520 21:28:45 -- common/autotest_common.sh@1114 -- # unittest_nvme 00:06:25.520 21:28:45 -- unit/unittest.sh@86 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme.c/nvme_ut 00:06:25.520 00:06:25.520 00:06:25.520 CUnit - A unit testing framework for C - Version 2.1-3 00:06:25.520 http://cunit.sourceforge.net/ 00:06:25.520 00:06:25.520 00:06:25.520 Suite: nvme 00:06:25.520 Test: test_opc_data_transfer ...passed 00:06:25.520 Test: test_spdk_nvme_transport_id_parse_trtype ...passed 00:06:25.520 Test: test_spdk_nvme_transport_id_parse_adrfam ...passed 00:06:25.520 Test: test_trid_parse_and_compare ...[2024-12-06 21:28:45.966669] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1167:parse_next_key: *ERROR*: Key without ':' or '=' separator 00:06:25.520 [2024-12-06 21:28:45.966909] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1224:spdk_nvme_transport_id_parse: *ERROR*: Failed to parse transport ID 00:06:25.520 [2024-12-06 
21:28:45.966960] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1179:parse_next_key: *ERROR*: Key length 32 greater than maximum allowed 31 00:06:25.520 [2024-12-06 21:28:45.966985] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1224:spdk_nvme_transport_id_parse: *ERROR*: Failed to parse transport ID 00:06:25.520 [2024-12-06 21:28:45.967017] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1190:parse_next_key: *ERROR*: Key without value 00:06:25.520 [2024-12-06 21:28:45.967048] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1224:spdk_nvme_transport_id_parse: *ERROR*: Failed to parse transport ID 00:06:25.520 passed 00:06:25.520 Test: test_trid_trtype_str ...passed 00:06:25.520 Test: test_trid_adrfam_str ...passed 00:06:25.520 Test: test_nvme_ctrlr_probe ...[2024-12-06 21:28:45.967392] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 683:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 00:06:25.520 passed 00:06:25.520 Test: test_spdk_nvme_probe ...[2024-12-06 21:28:45.967521] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 601:nvme_driver_init: *ERROR*: primary process is not started yet 00:06:25.520 [2024-12-06 21:28:45.967568] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 898:spdk_nvme_probe: *ERROR*: Create probe context failed 00:06:25.520 [2024-12-06 21:28:45.967704] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 812:nvme_probe_internal: *ERROR*: NVMe trtype 256 (PCIE) not available 00:06:25.520 [2024-12-06 21:28:45.967765] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 898:spdk_nvme_probe: *ERROR*: Create probe context failed 00:06:25.520 passed 00:06:25.520 Test: test_spdk_nvme_connect ...[2024-12-06 21:28:45.967881] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 989:spdk_nvme_connect: *ERROR*: No transport ID specified 00:06:25.520 passed 00:06:25.520 Test: test_nvme_ctrlr_probe_internal ...[2024-12-06 21:28:45.968300] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 601:nvme_driver_init: *ERROR*: primary process is not started yet 00:06:25.520 [2024-12-06 21:28:45.968359] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1000:spdk_nvme_connect: *ERROR*: Create probe context failed 00:06:25.520 [2024-12-06 21:28:45.968543] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 683:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 00:06:25.520 passed 00:06:25.520 Test: test_nvme_init_controllers ...[2024-12-06 21:28:45.968609] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:06:25.520 [2024-12-06 21:28:45.968698] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 00:06:25.520 passed 00:06:25.520 Test: test_nvme_driver_init ...[2024-12-06 21:28:45.968794] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 578:nvme_driver_init: *ERROR*: primary process failed to reserve memory 00:06:25.520 [2024-12-06 21:28:45.968842] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 601:nvme_driver_init: *ERROR*: primary process is not started yet 00:06:25.520 [2024-12-06 21:28:46.083157] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 596:nvme_driver_init: *ERROR*: timeout waiting for primary process to init 00:06:25.779 [2024-12-06 21:28:46.083310] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 618:nvme_driver_init: *ERROR*: failed to initialize mutex 00:06:25.779 passed 00:06:25.779 Test: test_spdk_nvme_detach ...passed 00:06:25.779 Test: test_nvme_completion_poll_cb ...passed 00:06:25.779 Test: test_nvme_user_copy_cmd_complete ...passed 00:06:25.779 Test:
test_nvme_allocate_request_null ...passed 00:06:25.779 Test: test_nvme_allocate_request ...passed 00:06:25.779 Test: test_nvme_free_request ...passed 00:06:25.779 Test: test_nvme_allocate_request_user_copy ...passed 00:06:25.779 Test: test_nvme_robust_mutex_init_shared ...passed 00:06:25.779 Test: test_nvme_request_check_timeout ...passed 00:06:25.779 Test: test_nvme_wait_for_completion ...passed 00:06:25.779 Test: test_spdk_nvme_parse_func ...passed 00:06:25.779 Test: test_spdk_nvme_detach_async ...passed 00:06:25.779 Test: test_nvme_parse_addr ...[2024-12-06 21:28:46.084675] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1577:nvme_parse_addr: *ERROR*: addr and service must both be non-NULL 00:06:25.779 passed 00:06:25.779 00:06:25.779 Run Summary: Type Total Ran Passed Failed Inactive 00:06:25.779 suites 1 1 n/a 0 0 00:06:25.779 tests 25 25 25 0 0 00:06:25.779 asserts 326 326 326 0 n/a 00:06:25.779 00:06:25.779 Elapsed time = 0.007 seconds 00:06:25.779 21:28:46 -- unit/unittest.sh@87 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_ctrlr.c/nvme_ctrlr_ut 00:06:25.779 00:06:25.779 00:06:25.779 CUnit - A unit testing framework for C - Version 2.1-3 00:06:25.779 http://cunit.sourceforge.net/ 00:06:25.779 00:06:25.779 00:06:25.779 Suite: nvme_ctrlr 00:06:25.779 Test: test_nvme_ctrlr_init_en_1_rdy_0 ...[2024-12-06 21:28:46.117312] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4135:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:06:25.780 passed 00:06:25.780 Test: test_nvme_ctrlr_init_en_1_rdy_1 ...[2024-12-06 21:28:46.119670] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4135:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:06:25.780 passed 00:06:25.780 Test: test_nvme_ctrlr_init_en_0_rdy_0 ...[2024-12-06 21:28:46.120989] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4135:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:06:25.780 passed 00:06:25.780 Test: test_nvme_ctrlr_init_en_0_rdy_1 ...[2024-12-06 21:28:46.122244] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4135:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:06:25.780 passed 00:06:25.780 Test: test_nvme_ctrlr_init_en_0_rdy_0_ams_rr ...[2024-12-06 21:28:46.123531] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4135:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:06:25.780 [2024-12-06 21:28:46.124694] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:3934:nvme_ctrlr_process_init: *ERROR*: [] Ctrlr enable failed with error: -22 [2024-12-06 21:28:46.125902] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:3934:nvme_ctrlr_process_init: *ERROR*: [] Ctrlr enable failed with error: -22 [2024-12-06 21:28:46.127063] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:3934:nvme_ctrlr_process_init: *ERROR*: [] Ctrlr enable failed with error: -22 passed 00:06:25.780 Test: test_nvme_ctrlr_init_en_0_rdy_0_ams_wrr ...[2024-12-06 21:28:46.129509] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4135:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:06:25.780 [2024-12-06 21:28:46.131840] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:3934:nvme_ctrlr_process_init: *ERROR*: [] Ctrlr enable failed with error: -22 [2024-12-06 21:28:46.133071]
/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:3934:nvme_ctrlr_process_init: *ERROR*: [] Ctrlr enable failed with error: -22 passed 00:06:25.780 Test: test_nvme_ctrlr_init_en_0_rdy_0_ams_vs ...[2024-12-06 21:28:46.135522] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4135:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:06:25.780 [2024-12-06 21:28:46.136755] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:3934:nvme_ctrlr_process_init: *ERROR*: [] Ctrlr enable failed with error: -22 [2024-12-06 21:28:46.139105] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:3934:nvme_ctrlr_process_init: *ERROR*: [] Ctrlr enable failed with error: -22 passed 00:06:25.780 Test: test_nvme_ctrlr_init_delay ...[2024-12-06 21:28:46.141636] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4135:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:06:25.780 passed 00:06:25.780 Test: test_alloc_io_qpair_rr_1 ...[2024-12-06 21:28:46.143488] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4135:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:06:25.780 [2024-12-06 21:28:46.144171] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:5318:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [] No free I/O queue IDs 00:06:25.780 [2024-12-06 21:28:46.144310] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c: 385:nvme_ctrlr_create_io_qpair: *ERROR*: [] invalid queue priority for default round robin arbitration method 00:06:25.780 [2024-12-06 21:28:46.144700] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c: 385:nvme_ctrlr_create_io_qpair: *ERROR*: [] invalid queue priority for default round robin arbitration method 00:06:25.780 passed 00:06:25.780 Test: test_ctrlr_get_default_ctrlr_opts ...[2024-12-06 21:28:46.144789] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c: 385:nvme_ctrlr_create_io_qpair: *ERROR*: [] invalid queue priority for default round robin arbitration method 00:06:25.780 passed 00:06:25.780 Test: test_ctrlr_get_default_io_qpair_opts ...passed 00:06:25.780 Test: test_alloc_io_qpair_wrr_1 ...[2024-12-06 21:28:46.145314] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4135:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:06:25.780 passed 00:06:25.780 Test: test_alloc_io_qpair_wrr_2 ...[2024-12-06 21:28:46.145957] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4135:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:06:25.780 [2024-12-06 21:28:46.146128] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:5318:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [] No free I/O queue IDs 00:06:25.780 passed 00:06:25.780 Test: test_spdk_nvme_ctrlr_update_firmware ...[2024-12-06 21:28:46.146846] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4846:spdk_nvme_ctrlr_update_firmware: *ERROR*: [] spdk_nvme_ctrlr_update_firmware invalid size! 00:06:25.780 [2024-12-06 21:28:46.146992] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4883:spdk_nvme_ctrlr_update_firmware: *ERROR*: [] spdk_nvme_ctrlr_fw_image_download failed! 00:06:25.780 [2024-12-06 21:28:46.147088] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4923:spdk_nvme_ctrlr_update_firmware: *ERROR*: [] nvme_ctrlr_cmd_fw_commit failed!
00:06:25.780 [2024-12-06 21:28:46.147618] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4883:spdk_nvme_ctrlr_update_firmware: *ERROR*: [] spdk_nvme_ctrlr_fw_image_download failed! 00:06:25.780 passed 00:06:25.780 Test: test_nvme_ctrlr_fail ...passed 00:06:25.780 Test: test_nvme_ctrlr_construct_intel_support_log_page_list ...[2024-12-06 21:28:46.147710] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [] in failed state. 00:06:25.780 passed 00:06:25.780 Test: test_nvme_ctrlr_set_supported_features ...passed 00:06:25.780 Test: test_spdk_nvme_ctrlr_doorbell_buffer_config ...passed 00:06:25.780 Test: test_nvme_ctrlr_test_active_ns ...[2024-12-06 21:28:46.149008] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4135:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:06:26.040 passed 00:06:26.040 Test: test_nvme_ctrlr_test_active_ns_error_case ...passed 00:06:26.040 Test: test_spdk_nvme_ctrlr_reconnect_io_qpair ...passed 00:06:26.040 Test: test_spdk_nvme_ctrlr_set_trid ...passed 00:06:26.040 Test: test_nvme_ctrlr_init_set_nvmf_ioccsz ...[2024-12-06 21:28:46.487021] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4135:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:06:26.040 passed 00:06:26.040 Test: test_nvme_ctrlr_init_set_num_queues ...[2024-12-06 21:28:46.494401] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4135:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:06:26.040 passed 00:06:26.040 Test: test_nvme_ctrlr_init_set_keep_alive_timeout ...[2024-12-06 21:28:46.495786] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4135:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:06:26.040 [2024-12-06 21:28:46.495867] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:2870:nvme_ctrlr_set_keep_alive_timeout_done: *ERROR*: [] Keep alive timeout Get Feature failed: SC 6 SCT 0 00:06:26.040 passed 00:06:26.040 Test: test_alloc_io_qpair_fail ...[2024-12-06 21:28:46.497079] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4135:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:06:26.040 passed 00:06:26.040 Test: test_nvme_ctrlr_add_remove_process ...passed 00:06:26.040 Test: test_nvme_ctrlr_set_arbitration_feature ...[2024-12-06 21:28:46.497165] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c: 497:spdk_nvme_ctrlr_alloc_io_qpair: *ERROR*: [] nvme_transport_ctrlr_connect_io_qpair() failed 00:06:26.040 passed 00:06:26.040 Test: test_nvme_ctrlr_set_state ...passed 00:06:26.040 Test: test_nvme_ctrlr_active_ns_list_v0 ...[2024-12-06 21:28:46.497682] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:1465:_nvme_ctrlr_set_state: *ERROR*: [] Specified timeout would cause integer overflow. Defaulting to no timeout. 
00:06:26.040 [2024-12-06 21:28:46.497984] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4135:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:06:26.040 passed 00:06:26.040 Test: test_nvme_ctrlr_active_ns_list_v2 ...[2024-12-06 21:28:46.522942] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4135:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:06:26.300 passed 00:06:26.300 Test: test_nvme_ctrlr_ns_mgmt ...[2024-12-06 21:28:46.572307] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4135:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:06:26.300 passed 00:06:26.300 Test: test_nvme_ctrlr_reset ...[2024-12-06 21:28:46.574165] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4135:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:06:26.300 passed 00:06:26.300 Test: test_nvme_ctrlr_aer_callback ...[2024-12-06 21:28:46.574833] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4135:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:06:26.300 passed 00:06:26.300 Test: test_nvme_ctrlr_ns_attr_changed ...[2024-12-06 21:28:46.576562] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4135:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:06:26.300 passed 00:06:26.300 Test: test_nvme_ctrlr_identify_namespaces_iocs_specific_next ...passed 00:06:26.300 Test: test_nvme_ctrlr_set_supported_log_pages ...passed 00:06:26.300 Test: test_nvme_ctrlr_set_intel_supported_log_pages ...[2024-12-06 21:28:46.579145] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4135:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:06:26.300 passed 00:06:26.300 Test: test_nvme_ctrlr_parse_ana_log_page ...passed 00:06:26.300 Test: test_nvme_ctrlr_ana_resize ...[2024-12-06 21:28:46.580638] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4135:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:06:26.300 passed 00:06:26.300 Test: test_nvme_ctrlr_get_memory_domains ...passed 00:06:26.300 Test: test_nvme_transport_ctrlr_ready ...[2024-12-06 21:28:46.582889] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4016:nvme_ctrlr_process_init: *ERROR*: [] Transport controller ready step failed: rc -1 00:06:26.300 passed 00:06:26.300 Test: test_nvme_ctrlr_disable ...[2024-12-06 21:28:46.582955] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4067:nvme_ctrlr_process_init: *ERROR*: [] Ctrlr operation failed with error: -1, ctrlr state: 51 (error) 00:06:26.300 [2024-12-06 21:28:46.583001] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4135:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:06:26.300 passed 00:06:26.300 00:06:26.300 Run Summary: Type Total Ran Passed Failed Inactive 00:06:26.300 suites 1 1 n/a 0 0 00:06:26.300 tests 43 43 43 0 0 00:06:26.300 asserts 10418 10418 10418 0 n/a 00:06:26.300 00:06:26.300 Elapsed time = 0.426 seconds 00:06:26.300 21:28:46 -- unit/unittest.sh@88 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_ctrlr_cmd.c/nvme_ctrlr_cmd_ut 00:06:26.300 00:06:26.300 00:06:26.300 CUnit - A unit testing framework for C - Version 2.1-3 
00:06:26.300 http://cunit.sourceforge.net/ 00:06:26.300 00:06:26.300 00:06:26.300 Suite: nvme_ctrlr_cmd 00:06:26.300 Test: test_get_log_pages ...passed 00:06:26.300 Test: test_set_feature_cmd ...passed 00:06:26.300 Test: test_set_feature_ns_cmd ...passed 00:06:26.300 Test: test_get_feature_cmd ...passed 00:06:26.300 Test: test_get_feature_ns_cmd ...passed 00:06:26.300 Test: test_abort_cmd ...passed 00:06:26.300 Test: test_set_host_id_cmds ...[2024-12-06 21:28:46.633531] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr_cmd.c: 508:nvme_ctrlr_cmd_set_host_id: *ERROR*: Invalid host ID size 1024 00:06:26.300 passed 00:06:26.300 Test: test_io_cmd_raw_no_payload_build ...passed 00:06:26.300 Test: test_io_raw_cmd ...passed 00:06:26.300 Test: test_io_raw_cmd_with_md ...passed 00:06:26.300 Test: test_namespace_attach ...passed 00:06:26.300 Test: test_namespace_detach ...passed 00:06:26.300 Test: test_namespace_create ...passed 00:06:26.300 Test: test_namespace_delete ...passed 00:06:26.300 Test: test_doorbell_buffer_config ...passed 00:06:26.300 Test: test_format_nvme ...passed 00:06:26.300 Test: test_fw_commit ...passed 00:06:26.300 Test: test_fw_image_download ...passed 00:06:26.300 Test: test_sanitize ...passed 00:06:26.300 Test: test_directive ...passed 00:06:26.300 Test: test_nvme_request_add_abort ...passed 00:06:26.300 Test: test_spdk_nvme_ctrlr_cmd_abort ...passed 00:06:26.300 Test: test_nvme_ctrlr_cmd_identify ...passed 00:06:26.300 Test: test_spdk_nvme_ctrlr_cmd_security_receive_send ...passed 00:06:26.300 00:06:26.300 Run Summary: Type Total Ran Passed Failed Inactive 00:06:26.300 suites 1 1 n/a 0 0 00:06:26.300 tests 24 24 24 0 0 00:06:26.300 asserts 198 198 198 0 n/a 00:06:26.300 00:06:26.300 Elapsed time = 0.001 seconds 00:06:26.300 21:28:46 -- unit/unittest.sh@89 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_ctrlr_ocssd_cmd.c/nvme_ctrlr_ocssd_cmd_ut 00:06:26.300 00:06:26.300 00:06:26.300 CUnit - A unit testing framework for C - Version 2.1-3 00:06:26.300 http://cunit.sourceforge.net/ 00:06:26.300 00:06:26.300 00:06:26.300 Suite: nvme_ctrlr_cmd 00:06:26.300 Test: test_geometry_cmd ...passed 00:06:26.300 Test: test_spdk_nvme_ctrlr_is_ocssd_supported ...passed 00:06:26.300 00:06:26.300 Run Summary: Type Total Ran Passed Failed Inactive 00:06:26.300 suites 1 1 n/a 0 0 00:06:26.300 tests 2 2 2 0 0 00:06:26.300 asserts 7 7 7 0 n/a 00:06:26.300 00:06:26.300 Elapsed time = 0.000 seconds 00:06:26.300 21:28:46 -- unit/unittest.sh@90 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_ns.c/nvme_ns_ut 00:06:26.300 00:06:26.300 00:06:26.300 CUnit - A unit testing framework for C - Version 2.1-3 00:06:26.300 http://cunit.sourceforge.net/ 00:06:26.300 00:06:26.300 00:06:26.300 Suite: nvme 00:06:26.300 Test: test_nvme_ns_construct ...passed 00:06:26.300 Test: test_nvme_ns_uuid ...passed 00:06:26.300 Test: test_nvme_ns_csi ...passed 00:06:26.300 Test: test_nvme_ns_data ...passed 00:06:26.300 Test: test_nvme_ns_set_identify_data ...passed 00:06:26.300 Test: test_spdk_nvme_ns_get_values ...passed 00:06:26.300 Test: test_spdk_nvme_ns_is_active ...passed 00:06:26.300 Test: spdk_nvme_ns_supports ...passed 00:06:26.300 Test: test_nvme_ns_has_supported_iocs_specific_data ...passed 00:06:26.300 Test: test_nvme_ctrlr_identify_ns_iocs_specific ...passed 00:06:26.300 Test: test_nvme_ctrlr_identify_id_desc ...passed 00:06:26.300 Test: test_nvme_ns_find_id_desc ...passed 00:06:26.301 00:06:26.301 Run Summary: Type Total Ran Passed Failed Inactive 00:06:26.301 suites 1 1 n/a 0 0 00:06:26.301 tests 
12 12 12 0 0 00:06:26.301 asserts 83 83 83 0 n/a 00:06:26.301 00:06:26.301 Elapsed time = 0.001 seconds 00:06:26.301 21:28:46 -- unit/unittest.sh@91 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_ns_cmd.c/nvme_ns_cmd_ut 00:06:26.301 00:06:26.301 00:06:26.301 CUnit - A unit testing framework for C - Version 2.1-3 00:06:26.301 http://cunit.sourceforge.net/ 00:06:26.301 00:06:26.301 00:06:26.301 Suite: nvme_ns_cmd 00:06:26.301 Test: split_test ...passed 00:06:26.301 Test: split_test2 ...passed 00:06:26.301 Test: split_test3 ...passed 00:06:26.301 Test: split_test4 ...passed 00:06:26.301 Test: test_nvme_ns_cmd_flush ...passed 00:06:26.301 Test: test_nvme_ns_cmd_dataset_management ...passed 00:06:26.301 Test: test_nvme_ns_cmd_copy ...passed 00:06:26.301 Test: test_io_flags ...[2024-12-06 21:28:46.723276] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ns_cmd.c: 144:_is_io_flags_valid: *ERROR*: Invalid io_flags 0xfffc 00:06:26.301 passed 00:06:26.301 Test: test_nvme_ns_cmd_write_zeroes ...passed 00:06:26.301 Test: test_nvme_ns_cmd_write_uncorrectable ...passed 00:06:26.301 Test: test_nvme_ns_cmd_reservation_register ...passed 00:06:26.301 Test: test_nvme_ns_cmd_reservation_release ...passed 00:06:26.301 Test: test_nvme_ns_cmd_reservation_acquire ...passed 00:06:26.301 Test: test_nvme_ns_cmd_reservation_report ...passed 00:06:26.301 Test: test_cmd_child_request ...passed 00:06:26.301 Test: test_nvme_ns_cmd_readv ...passed 00:06:26.301 Test: test_nvme_ns_cmd_read_with_md ...passed 00:06:26.301 Test: test_nvme_ns_cmd_writev ...[2024-12-06 21:28:46.724821] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ns_cmd.c: 287:_nvme_ns_cmd_split_request_prp: *ERROR*: child_length 200 not even multiple of lba_size 512 00:06:26.301 passed 00:06:26.301 Test: test_nvme_ns_cmd_write_with_md ...passed 00:06:26.301 Test: test_nvme_ns_cmd_zone_append_with_md ...passed 00:06:26.301 Test: test_nvme_ns_cmd_zone_appendv_with_md ...passed 00:06:26.301 Test: test_nvme_ns_cmd_comparev ...passed 00:06:26.301 Test: test_nvme_ns_cmd_compare_and_write ...passed 00:06:26.301 Test: test_nvme_ns_cmd_compare_with_md ...passed 00:06:26.301 Test: test_nvme_ns_cmd_comparev_with_md ...passed 00:06:26.301 Test: test_nvme_ns_cmd_setup_request ...passed 00:06:26.301 Test: test_spdk_nvme_ns_cmd_readv_with_md ...passed 00:06:26.301 Test: test_spdk_nvme_ns_cmd_writev_ext ...[2024-12-06 21:28:46.727172] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ns_cmd.c: 144:_is_io_flags_valid: *ERROR*: Invalid io_flags 0xffff000f 00:06:26.301 passed 00:06:26.301 Test: test_spdk_nvme_ns_cmd_readv_ext ...[2024-12-06 21:28:46.727339] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ns_cmd.c: 144:_is_io_flags_valid: *ERROR*: Invalid io_flags 0xffff000f 00:06:26.301 passed 00:06:26.301 Test: test_nvme_ns_cmd_verify ...passed 00:06:26.301 Test: test_nvme_ns_cmd_io_mgmt_send ...passed 00:06:26.301 Test: test_nvme_ns_cmd_io_mgmt_recv ...passed 00:06:26.301 00:06:26.301 Run Summary: Type Total Ran Passed Failed Inactive 00:06:26.301 suites 1 1 n/a 0 0 00:06:26.301 tests 32 32 32 0 0 00:06:26.301 asserts 550 550 550 0 n/a 00:06:26.301 00:06:26.301 Elapsed time = 0.006 seconds 00:06:26.301 21:28:46 -- unit/unittest.sh@92 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_ns_ocssd_cmd.c/nvme_ns_ocssd_cmd_ut 00:06:26.301 00:06:26.301 00:06:26.301 CUnit - A unit testing framework for C - Version 2.1-3 00:06:26.301 http://cunit.sourceforge.net/ 00:06:26.301 00:06:26.301 00:06:26.301 Suite: nvme_ns_cmd 00:06:26.301 Test: test_nvme_ocssd_ns_cmd_vector_reset ...passed 
00:06:26.301 Test: test_nvme_ocssd_ns_cmd_vector_reset_single_entry ...passed 00:06:26.301 Test: test_nvme_ocssd_ns_cmd_vector_read_with_md ...passed 00:06:26.301 Test: test_nvme_ocssd_ns_cmd_vector_read_with_md_single_entry ...passed 00:06:26.301 Test: test_nvme_ocssd_ns_cmd_vector_read ...passed 00:06:26.301 Test: test_nvme_ocssd_ns_cmd_vector_read_single_entry ...passed 00:06:26.301 Test: test_nvme_ocssd_ns_cmd_vector_write_with_md ...passed 00:06:26.301 Test: test_nvme_ocssd_ns_cmd_vector_write_with_md_single_entry ...passed 00:06:26.301 Test: test_nvme_ocssd_ns_cmd_vector_write ...passed 00:06:26.301 Test: test_nvme_ocssd_ns_cmd_vector_write_single_entry ...passed 00:06:26.301 Test: test_nvme_ocssd_ns_cmd_vector_copy ...passed 00:06:26.301 Test: test_nvme_ocssd_ns_cmd_vector_copy_single_entry ...passed 00:06:26.301 00:06:26.301 Run Summary: Type Total Ran Passed Failed Inactive 00:06:26.301 suites 1 1 n/a 0 0 00:06:26.301 tests 12 12 12 0 0 00:06:26.301 asserts 123 123 123 0 n/a 00:06:26.301 00:06:26.301 Elapsed time = 0.001 seconds 00:06:26.301 21:28:46 -- unit/unittest.sh@93 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_qpair.c/nvme_qpair_ut 00:06:26.301 00:06:26.301 00:06:26.301 CUnit - A unit testing framework for C - Version 2.1-3 00:06:26.301 http://cunit.sourceforge.net/ 00:06:26.301 00:06:26.301 00:06:26.301 Suite: nvme_qpair 00:06:26.301 Test: test3 ...passed 00:06:26.301 Test: test_ctrlr_failed ...passed 00:06:26.301 Test: struct_packing ...passed 00:06:26.301 Test: test_nvme_qpair_process_completions ...[2024-12-06 21:28:46.789271] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:06:26.301 [2024-12-06 21:28:46.789529] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:06:26.301 [2024-12-06 21:28:46.789610] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:06:26.301 [2024-12-06 21:28:46.789653] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:06:26.301 passed 00:06:26.301 Test: test_nvme_completion_is_retry ...passed 00:06:26.301 Test: test_get_status_string ...passed 00:06:26.301 Test: test_nvme_qpair_add_cmd_error_injection ...passed 00:06:26.301 Test: test_nvme_qpair_submit_request ...passed 00:06:26.301 Test: test_nvme_qpair_resubmit_request_with_transport_failed ...passed 00:06:26.301 Test: test_nvme_qpair_manual_complete_request ...passed 00:06:26.301 Test: test_nvme_qpair_init_deinit ...passed 00:06:26.301 Test: test_nvme_get_sgl_print_info ...passed 00:06:26.301 00:06:26.301 [2024-12-06 21:28:46.790177] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:06:26.301 Run Summary: Type Total Ran Passed Failed Inactive 00:06:26.301 suites 1 1 n/a 0 0 00:06:26.301 tests 12 12 12 0 0 00:06:26.301 asserts 154 154 154 0 n/a 00:06:26.301 00:06:26.301 Elapsed time = 0.001 seconds 00:06:26.562 21:28:46 -- unit/unittest.sh@94 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_pcie.c/nvme_pcie_ut 00:06:26.562 00:06:26.562 00:06:26.562 CUnit - A unit testing framework for C - Version 2.1-3 00:06:26.562 http://cunit.sourceforge.net/ 00:06:26.562 00:06:26.562 00:06:26.562 Suite: nvme_pcie 00:06:26.562 Test: test_prp_list_append 
...[2024-12-06 21:28:46.823731] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *ERROR*: virt_addr 0x100001 not dword aligned 00:06:26.562 [2024-12-06 21:28:46.824045] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1231:nvme_pcie_prp_list_append: *ERROR*: PRP 2 not page aligned (0x900800) 00:06:26.562 [2024-12-06 21:28:46.824108] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1221:nvme_pcie_prp_list_append: *ERROR*: vtophys(0x100000) failed 00:06:26.562 [2024-12-06 21:28:46.824318] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1215:nvme_pcie_prp_list_append: *ERROR*: out of PRP entries 00:06:26.562 passed 00:06:26.562 Test: test_nvme_pcie_hotplug_monitor ...[2024-12-06 21:28:46.824415] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1215:nvme_pcie_prp_list_append: *ERROR*: out of PRP entries 00:06:26.562 passed 00:06:26.562 Test: test_shadow_doorbell_update ...passed 00:06:26.562 Test: test_build_contig_hw_sgl_request ...passed 00:06:26.562 Test: test_nvme_pcie_qpair_build_metadata ...passed 00:06:26.562 Test: test_nvme_pcie_qpair_build_prps_sgl_request ...passed 00:06:26.562 Test: test_nvme_pcie_qpair_build_hw_sgl_request ...passed 00:06:26.562 Test: test_nvme_pcie_qpair_build_contig_request ...passed 00:06:26.562 Test: test_nvme_pcie_ctrlr_regs_get_set ...[2024-12-06 21:28:46.824761] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *ERROR*: virt_addr 0x100001 not dword aligned 00:06:26.562 passed 00:06:26.562 Test: test_nvme_pcie_ctrlr_map_unmap_cmb ...passed 00:06:26.562 Test: test_nvme_pcie_ctrlr_map_io_cmb ...passed 00:06:26.562 Test: test_nvme_pcie_ctrlr_map_unmap_pmr ...passed 00:06:26.562 Test: test_nvme_pcie_ctrlr_config_pmr ...[2024-12-06 21:28:46.824954] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie.c: 442:nvme_pcie_ctrlr_map_io_cmb: *ERROR*: CMB is already in use for submission queues. 
00:06:26.562 [2024-12-06 21:28:46.825035] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie.c: 521:nvme_pcie_ctrlr_map_pmr: *ERROR*: invalid base indicator register value 00:06:26.562 passed 00:06:26.562 Test: test_nvme_pcie_ctrlr_map_io_pmr ...passed 00:06:26.562 00:06:26.562 Run Summary: Type Total Ran Passed Failed Inactive 00:06:26.562 suites 1 1 n/a 0 0 00:06:26.562 tests 14 14 14 0 0 00:06:26.562 asserts 235 235 235 0 n/a 00:06:26.562 00:06:26.562 Elapsed time = 0.002 seconds[2024-12-06 21:28:46.825105] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie.c: 647:nvme_pcie_ctrlr_config_pmr: *ERROR*: PMR is already disabled 00:06:26.562 [2024-12-06 21:28:46.825157] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie.c: 699:nvme_pcie_ctrlr_map_io_pmr: *ERROR*: PMR is not supported by the controller 00:06:26.562 00:06:26.562 21:28:46 -- unit/unittest.sh@95 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_poll_group.c/nvme_poll_group_ut 00:06:26.562 00:06:26.562 00:06:26.562 CUnit - A unit testing framework for C - Version 2.1-3 00:06:26.562 http://cunit.sourceforge.net/ 00:06:26.562 00:06:26.562 00:06:26.562 Suite: nvme_ns_cmd 00:06:26.562 Test: nvme_poll_group_create_test ...passed 00:06:26.562 Test: nvme_poll_group_add_remove_test ...passed 00:06:26.562 Test: nvme_poll_group_process_completions ...passed 00:06:26.562 Test: nvme_poll_group_destroy_test ...passed 00:06:26.562 Test: nvme_poll_group_get_free_stats ...passed 00:06:26.562 00:06:26.562 Run Summary: Type Total Ran Passed Failed Inactive 00:06:26.562 suites 1 1 n/a 0 0 00:06:26.562 tests 5 5 5 0 0 00:06:26.562 asserts 75 75 75 0 n/a 00:06:26.562 00:06:26.562 Elapsed time = 0.000 seconds 00:06:26.562 21:28:46 -- unit/unittest.sh@96 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_quirks.c/nvme_quirks_ut 00:06:26.562 00:06:26.562 00:06:26.562 CUnit - A unit testing framework for C - Version 2.1-3 00:06:26.563 http://cunit.sourceforge.net/ 00:06:26.563 00:06:26.563 00:06:26.563 Suite: nvme_quirks 00:06:26.563 Test: test_nvme_quirks_striping ...passed 00:06:26.563 00:06:26.563 Run Summary: Type Total Ran Passed Failed Inactive 00:06:26.563 suites 1 1 n/a 0 0 00:06:26.563 tests 1 1 1 0 0 00:06:26.563 asserts 5 5 5 0 n/a 00:06:26.563 00:06:26.563 Elapsed time = 0.000 seconds 00:06:26.563 21:28:46 -- unit/unittest.sh@97 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_tcp.c/nvme_tcp_ut 00:06:26.563 00:06:26.563 00:06:26.563 CUnit - A unit testing framework for C - Version 2.1-3 00:06:26.563 http://cunit.sourceforge.net/ 00:06:26.563 00:06:26.563 00:06:26.563 Suite: nvme_tcp 00:06:26.563 Test: test_nvme_tcp_pdu_set_data_buf ...passed 00:06:26.563 Test: test_nvme_tcp_build_iovs ...passed 00:06:26.563 Test: test_nvme_tcp_build_sgl_request ...[2024-12-06 21:28:46.909592] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 783:nvme_tcp_build_sgl_request: *ERROR*: Failed to construct tcp_req=0x75325fc0d2e0, and the iovcnt=16, remaining_size=28672 00:06:26.563 passed 00:06:26.563 Test: test_nvme_tcp_pdu_set_data_buf_with_md ...passed 00:06:26.563 Test: test_nvme_tcp_build_iovs_with_md ...passed 00:06:26.563 Test: test_nvme_tcp_req_complete_safe ...passed 00:06:26.563 Test: test_nvme_tcp_req_get ...passed 00:06:26.563 Test: test_nvme_tcp_req_init ...passed 00:06:26.563 Test: test_nvme_tcp_qpair_capsule_cmd_send ...passed 00:06:26.563 Test: test_nvme_tcp_qpair_write_pdu ...passed 00:06:26.563 Test: test_nvme_tcp_qpair_set_recv_state ...passed 00:06:26.563 Test: test_nvme_tcp_alloc_reqs ...[2024-12-06 21:28:46.910193] 
/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x75325f709030 is same with the state(6) to be set 00:06:26.563 passed 00:06:26.563 Test: test_nvme_tcp_qpair_send_h2c_term_req ...passed 00:06:26.563 Test: test_nvme_tcp_pdu_ch_handle ...[2024-12-06 21:28:46.910576] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x75325fb09070 is same with the state(5) to be set 00:06:26.563 [2024-12-06 21:28:46.910644] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1108:nvme_tcp_pdu_ch_handle: *ERROR*: Already received IC_RESP PDU, and we should reject this pdu=0x75325fa0a6e0 00:06:26.563 [2024-12-06 21:28:46.910690] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1167:nvme_tcp_pdu_ch_handle: *ERROR*: Expected PDU header length 128, got 0 00:06:26.563 [2024-12-06 21:28:46.910726] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x75325fa0a070 is same with the state(5) to be set 00:06:26.563 [2024-12-06 21:28:46.910757] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1118:nvme_tcp_pdu_ch_handle: *ERROR*: The TCP/IP tqpair connection is not negotiated 00:06:26.563 [2024-12-06 21:28:46.910797] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x75325fa0a070 is same with the state(5) to be set 00:06:26.563 [2024-12-06 21:28:46.910831] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1159:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:06:26.563 [2024-12-06 21:28:46.910867] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x75325fa0a070 is same with the state(5) to be set 00:06:26.563 [2024-12-06 21:28:46.910913] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x75325fa0a070 is same with the state(5) to be set 00:06:26.563 [2024-12-06 21:28:46.910957] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x75325fa0a070 is same with the state(5) to be set 00:06:26.563 passed 00:06:26.563 Test: test_nvme_tcp_qpair_connect_sock ...[2024-12-06 21:28:46.910989] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x75325fa0a070 is same with the state(5) to be set 00:06:26.563 [2024-12-06 21:28:46.911027] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x75325fa0a070 is same with the state(5) to be set 00:06:26.563 [2024-12-06 21:28:46.911060] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x75325fa0a070 is same with the state(5) to be set 00:06:26.563 [2024-12-06 21:28:46.911274] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2239:nvme_tcp_qpair_connect_sock: *ERROR*: Unhandled ADRFAM 3 00:06:26.563 [2024-12-06 21:28:46.911329] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2251:nvme_tcp_qpair_connect_sock: *ERROR*: dst_addr nvme_parse_addr() failed 00:06:26.563 passed 00:06:26.563 Test: test_nvme_tcp_qpair_icreq_send ...[2024-12-06 21:28:46.911663] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2251:nvme_tcp_qpair_connect_sock: *ERROR*: dst_addr nvme_parse_addr() failed 00:06:26.563 passed 00:06:26.563 Test: 
test_nvme_tcp_c2h_payload_handle ...passed 00:06:26.563 Test: test_nvme_tcp_icresp_handle ...[2024-12-06 21:28:46.911776] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1282:nvme_tcp_c2h_term_req_dump: *ERROR*: Error info of pdu(0x75325fa0b540): PDU Sequence Error 00:06:26.563 [2024-12-06 21:28:46.911835] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1508:nvme_tcp_icresp_handle: *ERROR*: Expected ICResp PFV 0, got 1 00:06:26.563 [2024-12-06 21:28:46.911872] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1515:nvme_tcp_icresp_handle: *ERROR*: Expected ICResp maxh2cdata >=4096, got 2048 00:06:26.563 [2024-12-06 21:28:46.911902] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x75325fb0d070 is same with the state(5) to be set 00:06:26.563 [2024-12-06 21:28:46.911927] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1524:nvme_tcp_icresp_handle: *ERROR*: Expected ICResp cpda <=31, got 64 00:06:26.563 [2024-12-06 21:28:46.911963] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x75325fb0d070 is same with the state(5) to be set 00:06:26.563 [2024-12-06 21:28:46.911997] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x75325fb0d070 is same with the state(0) to be set 00:06:26.563 passed 00:06:26.563 Test: test_nvme_tcp_pdu_payload_handle ...passed 00:06:26.563 Test: test_nvme_tcp_capsule_resp_hdr_handle ...[2024-12-06 21:28:46.912085] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1282:nvme_tcp_c2h_term_req_dump: *ERROR*: Error info of pdu(0x75325fa0c540): PDU Sequence Error 00:06:26.563 passed 00:06:26.563 Test: test_nvme_tcp_ctrlr_connect_qpair ...[2024-12-06 21:28:46.912174] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1585:nvme_tcp_capsule_resp_hdr_handle: *ERROR*: no tcp_req is found with cid=1 for tqpair=0x75325fb0f200 00:06:26.563 passed 00:06:26.563 Test: test_nvme_tcp_ctrlr_disconnect_qpair ...[2024-12-06 21:28:46.912364] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 353:nvme_tcp_ctrlr_disconnect_qpair: *ERROR*: tqpair=0x75325fc25480, errno=0, rc=0 00:06:26.563 [2024-12-06 21:28:46.912406] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x75325fc25480 is same with the state(5) to be set 00:06:26.563 [2024-12-06 21:28:46.912461] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x75325fc25480 is same with the state(5) to be set 00:06:26.563 passed 00:06:26.563 Test: test_nvme_tcp_ctrlr_create_io_qpair ...[2024-12-06 21:28:46.912536] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x75325fc25480 (0): Success 00:06:26.563 [2024-12-06 21:28:46.912578] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x75325fc25480 (0): Success 00:06:26.563 passed 00:06:26.563 Test: test_nvme_tcp_ctrlr_delete_io_qpair ...[2024-12-06 21:28:47.022729] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2422:nvme_tcp_ctrlr_create_qpair: *ERROR*: Failed to create qpair with size 0. Minimum queue size is 2. 00:06:26.563 [2024-12-06 21:28:47.022834] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2422:nvme_tcp_ctrlr_create_qpair: *ERROR*: Failed to create qpair with size 1. Minimum queue size is 2. 
00:06:26.563 passed 00:06:26.563 Test: test_nvme_tcp_poll_group_get_stats ...passed 00:06:26.563 Test: test_nvme_tcp_ctrlr_construct ...[2024-12-06 21:28:47.023225] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2849:nvme_tcp_poll_group_get_stats: *ERROR*: Invalid stats or group pointer 00:06:26.563 [2024-12-06 21:28:47.023278] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2849:nvme_tcp_poll_group_get_stats: *ERROR*: Invalid stats or group pointer 00:06:26.563 [2024-12-06 21:28:47.023501] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2422:nvme_tcp_ctrlr_create_qpair: *ERROR*: Failed to create qpair with size 1. Minimum queue size is 2. 00:06:26.563 [2024-12-06 21:28:47.023552] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2596:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:06:26.563 [2024-12-06 21:28:47.023643] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2239:nvme_tcp_qpair_connect_sock: *ERROR*: Unhandled ADRFAM 254 00:06:26.563 [2024-12-06 21:28:47.023693] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2596:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:06:26.563 passed 00:06:26.563 Test: test_nvme_tcp_qpair_submit_request ...[2024-12-06 21:28:47.023808] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x513000001540 with addr=192.168.1.78, port=23 00:06:26.563 [2024-12-06 21:28:47.023880] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2596:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:06:26.563 [2024-12-06 21:28:47.024051] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 783:nvme_tcp_build_sgl_request: *ERROR*: Failed to construct tcp_req=0x513000001a80, and the iovcnt=1, remaining_size=1024 00:06:26.563 passed 00:06:26.563 00:06:26.563 [2024-12-06 21:28:47.024111] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 961:nvme_tcp_qpair_submit_request: *ERROR*: nvme_tcp_req_init() failed 00:06:26.563 Run Summary: Type Total Ran Passed Failed Inactive 00:06:26.563 suites 1 1 n/a 0 0 00:06:26.563 tests 27 27 27 0 0 00:06:26.563 asserts 624 624 624 0 n/a 00:06:26.563 00:06:26.563 Elapsed time = 0.115 seconds 00:06:26.563 21:28:47 -- unit/unittest.sh@98 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_transport.c/nvme_transport_ut 00:06:26.823 00:06:26.823 00:06:26.823 CUnit - A unit testing framework for C - Version 2.1-3 00:06:26.823 http://cunit.sourceforge.net/ 00:06:26.823 00:06:26.823 00:06:26.823 Suite: nvme_transport 00:06:26.823 Test: test_nvme_get_transport ...passed 00:06:26.823 Test: test_nvme_transport_poll_group_connect_qpair ...passed 00:06:26.823 Test: test_nvme_transport_poll_group_disconnect_qpair ...passed 00:06:26.823 Test: test_nvme_transport_poll_group_add_remove ...passed 00:06:26.823 Test: test_ctrlr_get_memory_domains ...passed 00:06:26.823 00:06:26.823 Run Summary: Type Total Ran Passed Failed Inactive 00:06:26.823 suites 1 1 n/a 0 0 00:06:26.823 tests 5 5 5 0 0 00:06:26.823 asserts 28 28 28 0 n/a 00:06:26.823 00:06:26.823 Elapsed time = 0.000 seconds 00:06:26.823 21:28:47 -- unit/unittest.sh@99 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_io_msg.c/nvme_io_msg_ut 00:06:26.823 00:06:26.823 00:06:26.823 CUnit - A unit testing framework for C - Version 2.1-3 00:06:26.823 http://cunit.sourceforge.net/ 00:06:26.823 00:06:26.823 00:06:26.823 Suite: nvme_io_msg 00:06:26.823 Test: test_nvme_io_msg_send ...passed 00:06:26.823 Test: test_nvme_io_msg_process ...passed 00:06:26.823 Test: 
test_nvme_io_msg_ctrlr_register_unregister ...passed 00:06:26.823 00:06:26.823 Run Summary: Type Total Ran Passed Failed Inactive 00:06:26.823 suites 1 1 n/a 0 0 00:06:26.823 tests 3 3 3 0 0 00:06:26.823 asserts 56 56 56 0 n/a 00:06:26.823 00:06:26.823 Elapsed time = 0.000 seconds 00:06:26.823 21:28:47 -- unit/unittest.sh@100 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_pcie_common.c/nvme_pcie_common_ut 00:06:26.823 00:06:26.823 00:06:26.823 CUnit - A unit testing framework for C - Version 2.1-3 00:06:26.823 http://cunit.sourceforge.net/ 00:06:26.823 00:06:26.823 00:06:26.823 Suite: nvme_pcie_common 00:06:26.823 Test: test_nvme_pcie_ctrlr_alloc_cmb ...[2024-12-06 21:28:47.123003] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c: 87:nvme_pcie_ctrlr_alloc_cmb: *ERROR*: Tried to allocate past valid CMB range! 00:06:26.823 passed 00:06:26.823 Test: test_nvme_pcie_qpair_construct_destroy ...passed 00:06:26.823 Test: test_nvme_pcie_ctrlr_cmd_create_delete_io_queue ...passed 00:06:26.823 Test: test_nvme_pcie_ctrlr_connect_qpair ...[2024-12-06 21:28:47.123856] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c: 503:nvme_completion_create_cq_cb: *ERROR*: nvme_create_io_cq failed! 00:06:26.823 [2024-12-06 21:28:47.123914] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c: 456:nvme_completion_create_sq_cb: *ERROR*: nvme_create_io_sq failed, deleting cq! 00:06:26.823 [2024-12-06 21:28:47.123968] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c: 550:_nvme_pcie_ctrlr_create_io_qpair: *ERROR*: Failed to send request to create_io_cq 00:06:26.823 passed 00:06:26.823 Test: test_nvme_pcie_ctrlr_construct_admin_qpair ...passed 00:06:26.823 Test: test_nvme_pcie_poll_group_get_stats ...[2024-12-06 21:28:47.124468] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1791:nvme_pcie_poll_group_get_stats: *ERROR*: Invalid stats or group pointer 00:06:26.823 [2024-12-06 21:28:47.124527] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1791:nvme_pcie_poll_group_get_stats: *ERROR*: Invalid stats or group pointer 00:06:26.823 passed 00:06:26.823 00:06:26.823 Run Summary: Type Total Ran Passed Failed Inactive 00:06:26.823 suites 1 1 n/a 0 0 00:06:26.823 tests 6 6 6 0 0 00:06:26.823 asserts 148 148 148 0 n/a 00:06:26.823 00:06:26.823 Elapsed time = 0.002 seconds 00:06:26.823 21:28:47 -- unit/unittest.sh@101 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_fabric.c/nvme_fabric_ut 00:06:26.823 00:06:26.823 00:06:26.823 CUnit - A unit testing framework for C - Version 2.1-3 00:06:26.823 http://cunit.sourceforge.net/ 00:06:26.823 00:06:26.823 00:06:26.823 Suite: nvme_fabric 00:06:26.823 Test: test_nvme_fabric_prop_set_cmd ...passed 00:06:26.823 Test: test_nvme_fabric_prop_get_cmd ...passed 00:06:26.823 Test: test_nvme_fabric_get_discovery_log_page ...passed 00:06:26.823 Test: test_nvme_fabric_discover_probe ...passed 00:06:26.823 Test: test_nvme_fabric_qpair_connect ...[2024-12-06 21:28:47.150809] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -125, trtype:(null) adrfam:(null) traddr: trsvcid: subnqn:nqn.2016-06.io.spdk:subsystem1 00:06:26.823 passed 00:06:26.823 00:06:26.823 Run Summary: Type Total Ran Passed Failed Inactive 00:06:26.823 suites 1 1 n/a 0 0 00:06:26.823 tests 5 5 5 0 0 00:06:26.823 asserts 60 60 60 0 n/a 00:06:26.823 00:06:26.823 Elapsed time = 0.001 seconds 00:06:26.823 21:28:47 -- unit/unittest.sh@102 -- #
/home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_opal.c/nvme_opal_ut 00:06:26.823 00:06:26.823 00:06:26.823 CUnit - A unit testing framework for C - Version 2.1-3 00:06:26.823 http://cunit.sourceforge.net/ 00:06:26.823 00:06:26.823 00:06:26.823 Suite: nvme_opal 00:06:26.823 Test: test_opal_nvme_security_recv_send_done ...passed 00:06:26.823 Test: test_opal_add_short_atom_header ...passed 00:06:26.823 00:06:26.823 Run Summary: Type Total Ran Passed Failed Inactive 00:06:26.823 suites 1 1 n/a 0 0 00:06:26.823 tests 2 2 2 0 0 00:06:26.823 asserts 22 22 22 0 n/a 00:06:26.823 00:06:26.823 Elapsed time = 0.000 seconds 00:06:26.823 [2024-12-06 21:28:47.177393] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_opal.c: 171:opal_add_token_bytestring: *ERROR*: Error adding bytestring: end of buffer. 00:06:26.823 00:06:26.823 real 0m1.240s 00:06:26.823 user 0m0.646s 00:06:26.823 sys 0m0.447s 00:06:26.823 21:28:47 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:26.823 21:28:47 -- common/autotest_common.sh@10 -- # set +x 00:06:26.823 ************************************ 00:06:26.823 END TEST unittest_nvme 00:06:26.823 ************************************ 00:06:26.824 21:28:47 -- unit/unittest.sh@223 -- # run_test unittest_log /home/vagrant/spdk_repo/spdk/test/unit/lib/log/log.c/log_ut 00:06:26.824 21:28:47 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:26.824 21:28:47 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:26.824 21:28:47 -- common/autotest_common.sh@10 -- # set +x 00:06:26.824 ************************************ 00:06:26.824 START TEST unittest_log 00:06:26.824 ************************************ 00:06:26.824 21:28:47 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/log/log.c/log_ut 00:06:26.824 00:06:26.824 00:06:26.824 CUnit - A unit testing framework for C - Version 2.1-3 00:06:26.824 http://cunit.sourceforge.net/ 00:06:26.824 00:06:26.824 00:06:26.824 Suite: log 00:06:26.824 Test: log_test ...[2024-12-06 21:28:47.256543] log_ut.c: 54:log_test: *WARNING*: log warning unit test 00:06:26.824 [2024-12-06 21:28:47.256727] log_ut.c: 55:log_test: *DEBUG*: log test 00:06:26.824 log dump test: 00:06:26.824 00000000 6c 6f 67 20 64 75 6d 70 log dump 00:06:26.824 passed 00:06:26.824 Test: deprecation ...spdk dump test: 00:06:26.824 00000000 73 70 64 6b 20 64 75 6d 70 spdk dump 00:06:26.824 spdk dump test: 00:06:26.824 00000000 73 70 64 6b 20 64 75 6d 70 20 31 36 20 6d 6f 72 spdk dump 16 mor 00:06:26.824 00000010 65 20 63 68 61 72 73 e chars 00:06:27.761 passed 00:06:27.761 00:06:27.761 Run Summary: Type Total Ran Passed Failed Inactive 00:06:27.761 suites 1 1 n/a 0 0 00:06:27.761 tests 2 2 2 0 0 00:06:27.761 asserts 73 73 73 0 n/a 00:06:27.761 00:06:27.761 Elapsed time = 0.001 seconds 00:06:28.021 00:06:28.022 real 0m1.029s 00:06:28.022 user 0m0.015s 00:06:28.022 sys 0m0.013s 00:06:28.022 21:28:48 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:28.022 21:28:48 -- common/autotest_common.sh@10 -- # set +x 00:06:28.022 ************************************ 00:06:28.022 END TEST unittest_log 00:06:28.022 ************************************ 00:06:28.022 21:28:48 -- unit/unittest.sh@224 -- # run_test unittest_lvol /home/vagrant/spdk_repo/spdk/test/unit/lib/lvol/lvol.c/lvol_ut 00:06:28.022 21:28:48 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:28.022 21:28:48 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:28.022 21:28:48 -- common/autotest_common.sh@10 -- # set +x 00:06:28.022 
************************************ 00:06:28.022 START TEST unittest_lvol 00:06:28.022 ************************************ 00:06:28.022 21:28:48 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/lvol/lvol.c/lvol_ut 00:06:28.022 00:06:28.022 00:06:28.022 CUnit - A unit testing framework for C - Version 2.1-3 00:06:28.022 http://cunit.sourceforge.net/ 00:06:28.022 00:06:28.022 00:06:28.022 Suite: lvol 00:06:28.022 Test: lvs_init_unload_success ...[2024-12-06 21:28:48.342777] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 892:spdk_lvs_unload: *ERROR*: Lvols still open on lvol store 00:06:28.022 passed 00:06:28.022 Test: lvs_init_destroy_success ...[2024-12-06 21:28:48.343376] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 962:spdk_lvs_destroy: *ERROR*: Lvols still open on lvol store 00:06:28.022 passed 00:06:28.022 Test: lvs_init_opts_success ...passed 00:06:28.022 Test: lvs_unload_lvs_is_null_fail ...passed 00:06:28.022 Test: lvs_names ...[2024-12-06 21:28:48.343618] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 882:spdk_lvs_unload: *ERROR*: Lvol store is NULL 00:06:28.022 [2024-12-06 21:28:48.343685] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 726:spdk_lvs_init: *ERROR*: No name specified. 00:06:28.022 [2024-12-06 21:28:48.343738] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 720:spdk_lvs_init: *ERROR*: Name has no null terminator. 00:06:28.022 [2024-12-06 21:28:48.343899] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 736:spdk_lvs_init: *ERROR*: lvolstore with name x already exists 00:06:28.022 passed 00:06:28.022 Test: lvol_create_destroy_success ...passed 00:06:28.022 Test: lvol_create_fail ...[2024-12-06 21:28:48.344399] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 689:spdk_lvs_init: *ERROR*: Blobstore device does not exist 00:06:28.022 passed 00:06:28.022 Test: lvol_destroy_fail ...[2024-12-06 21:28:48.344490] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1190:spdk_lvol_create: *ERROR*: lvol store does not exist 00:06:28.022 [2024-12-06 21:28:48.344752] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1026:lvol_delete_blob_cb: *ERROR*: Could not remove blob on lvol gracefully - forced removal 00:06:28.022 passed 00:06:28.022 Test: lvol_close ...[2024-12-06 21:28:48.344932] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1614:spdk_lvol_close: *ERROR*: lvol does not exist 00:06:28.022 passed 00:06:28.022 Test: lvol_resize ...[2024-12-06 21:28:48.344984] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 995:lvol_close_blob_cb: *ERROR*: Could not close blob on lvol 00:06:28.022 passed 00:06:28.022 Test: lvol_set_read_only ...passed 00:06:28.022 Test: test_lvs_load ...[2024-12-06 21:28:48.345618] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 631:lvs_opts_copy: *ERROR*: opts_size should not be zero value 00:06:28.022 passed 00:06:28.022 Test: lvols_load ...[2024-12-06 21:28:48.345670] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 441:lvs_load: *ERROR*: Invalid options 00:06:28.022 [2024-12-06 21:28:48.345864] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 227:load_next_lvol: *ERROR*: Failed to fetch blobs list 00:06:28.022 [2024-12-06 21:28:48.345981] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 227:load_next_lvol: *ERROR*: Failed to fetch blobs list 00:06:28.022 passed 00:06:28.022 Test: lvol_open ...passed 00:06:28.022 Test: lvol_snapshot ...passed 00:06:28.022 Test: lvol_snapshot_fail ...[2024-12-06 21:28:48.346600] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1162:lvs_verify_lvol_name: *ERROR*: lvol with name snap already exists 00:06:28.022 passed 00:06:28.022 
Test: lvol_clone ...passed 00:06:28.022 Test: lvol_clone_fail ...[2024-12-06 21:28:48.347019] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1162:lvs_verify_lvol_name: *ERROR*: lvol with name clone already exists 00:06:28.022 passed 00:06:28.022 Test: lvol_iter_clones ...passed 00:06:28.022 Test: lvol_refcnt ...[2024-12-06 21:28:48.347383] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1572:spdk_lvol_destroy: *ERROR*: Cannot destroy lvol 0dad23ac-6151-45e0-9988-003d730650c7 because it is still open 00:06:28.022 passed 00:06:28.022 Test: lvol_names ...[2024-12-06 21:28:48.347536] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1156:lvs_verify_lvol_name: *ERROR*: Name has no null terminator. 00:06:28.022 [2024-12-06 21:28:48.347595] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1162:lvs_verify_lvol_name: *ERROR*: lvol with name lvol already exists 00:06:28.022 [2024-12-06 21:28:48.347747] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1169:lvs_verify_lvol_name: *ERROR*: lvol with name tmp_name is being already created 00:06:28.022 passed 00:06:28.022 Test: lvol_create_thin_provisioned ...passed 00:06:28.022 Test: lvol_rename ...[2024-12-06 21:28:48.348108] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1162:lvs_verify_lvol_name: *ERROR*: lvol with name lvol already exists 00:06:28.022 passed 00:06:28.022 Test: lvs_rename ...[2024-12-06 21:28:48.348184] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1524:spdk_lvol_rename: *ERROR*: Lvol lvol_new already exists in lvol store lvs 00:06:28.022 [2024-12-06 21:28:48.348382] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 769:lvs_rename_cb: *ERROR*: Lvol store rename operation failed 00:06:28.022 passed 00:06:28.022 Test: lvol_inflate ...[2024-12-06 21:28:48.348552] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1658:lvol_inflate_cb: *ERROR*: Could not inflate lvol 00:06:28.022 passed 00:06:28.022 Test: lvol_decouple_parent ...[2024-12-06 21:28:48.348830] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1658:lvol_inflate_cb: *ERROR*: Could not inflate lvol 00:06:28.022 passed 00:06:28.022 Test: lvol_get_xattr ...passed 00:06:28.022 Test: lvol_esnap_reload ...passed 00:06:28.022 Test: lvol_esnap_create_bad_args ...[2024-12-06 21:28:48.349225] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1245:spdk_lvol_create_esnap_clone: *ERROR*: lvol store does not exist 00:06:28.022 [2024-12-06 21:28:48.349258] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1156:lvs_verify_lvol_name: *ERROR*: Name has no null terminator. 
00:06:28.022 [2024-12-06 21:28:48.349295] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1258:spdk_lvol_create_esnap_clone: *ERROR*: Cannot create 'lvs/clone1': size 4198400 is not an integer multiple of cluster size 1048576 00:06:28.022 [2024-12-06 21:28:48.349333] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1162:lvs_verify_lvol_name: *ERROR*: lvol with name lvol already exists 00:06:28.022 [2024-12-06 21:28:48.349428] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1162:lvs_verify_lvol_name: *ERROR*: lvol with name clone1 already exists 00:06:28.022 passed 00:06:28.022 Test: lvol_esnap_create_delete ...passed 00:06:28.022 Test: lvol_esnap_load_esnaps ...passed 00:06:28.022 Test: lvol_esnap_missing ...[2024-12-06 21:28:48.349698] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1832:lvs_esnap_bs_dev_create: *ERROR*: Blob 0x2a: no lvs context nor lvol context 00:06:28.022 [2024-12-06 21:28:48.349861] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1162:lvs_verify_lvol_name: *ERROR*: lvol with name lvol1 already exists 00:06:28.022 [2024-12-06 21:28:48.349899] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1162:lvs_verify_lvol_name: *ERROR*: lvol with name lvol1 already exists 00:06:28.022 passed 00:06:28.022 Test: lvol_esnap_hotplug ... 00:06:28.022 lvol_esnap_hotplug scenario 0: PASS - one missing, happy path 00:06:28.022 lvol_esnap_hotplug scenario 1: PASS - one missing, cb registers degraded_set 00:06:28.022 lvol_esnap_hotplug scenario 2: PASS - one missing, cb retuns -ENOMEM 00:06:28.022 [2024-12-06 21:28:48.350478] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:2062:lvs_esnap_degraded_hotplug: *ERROR*: lvol 00da4daf-f8cd-48b8-b831-29203c871526: failed to create esnap bs_dev: error -12 00:06:28.022 lvol_esnap_hotplug scenario 3: PASS - two missing with same esnap, happy path 00:06:28.022 lvol_esnap_hotplug scenario 4: PASS - two missing with same esnap, first -ENOMEM 00:06:28.022 [2024-12-06 21:28:48.350667] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:2062:lvs_esnap_degraded_hotplug: *ERROR*: lvol 807bebb6-0193-48eb-8f25-072d4d2b661b: failed to create esnap bs_dev: error -12 00:06:28.022 lvol_esnap_hotplug scenario 5: PASS - two missing with same esnap, second -ENOMEM 00:06:28.022 [2024-12-06 21:28:48.350784] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:2062:lvs_esnap_degraded_hotplug: *ERROR*: lvol 5e1dffa5-aadd-42fd-9f24-099cecf6c3d1: failed to create esnap bs_dev: error -12 00:06:28.022 lvol_esnap_hotplug scenario 6: PASS - two missing with different esnaps, happy path 00:06:28.022 lvol_esnap_hotplug scenario 7: PASS - two missing with different esnaps, first still missing 00:06:28.022 lvol_esnap_hotplug scenario 8: PASS - three missing with same esnap, happy path 00:06:28.022 lvol_esnap_hotplug scenario 9: PASS - three missing with same esnap, first still missing 00:06:28.022 lvol_esnap_hotplug scenario 10: PASS - three missing with same esnap, first two still missing 00:06:28.022 lvol_esnap_hotplug scenario 11: PASS - three missing with same esnap, middle still missing 00:06:28.022 lvol_esnap_hotplug scenario 12: PASS - three missing with same esnap, last still missing 00:06:28.022 passed 00:06:28.022 Test: lvol_get_by ...passed 00:06:28.022 00:06:28.022 Run Summary: Type Total Ran Passed Failed Inactive 00:06:28.022 suites 1 1 n/a 0 0 00:06:28.022 tests 34 34 34 0 0 00:06:28.022 asserts 1439 1439 1439 0 n/a 00:06:28.022 00:06:28.022 Elapsed time = 0.009 seconds 00:06:28.022 00:06:28.022 real 0m0.046s 00:06:28.022 user 0m0.023s 00:06:28.022 sys 0m0.022s 00:06:28.022 21:28:48 -- 
common/autotest_common.sh@1115 -- # xtrace_disable 00:06:28.022 21:28:48 -- common/autotest_common.sh@10 -- # set +x 00:06:28.022 ************************************ 00:06:28.022 END TEST unittest_lvol 00:06:28.022 ************************************ 00:06:28.022 21:28:48 -- unit/unittest.sh@225 -- # grep -q '#define SPDK_CONFIG_RDMA 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:06:28.022 21:28:48 -- unit/unittest.sh@226 -- # run_test unittest_nvme_rdma /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_rdma.c/nvme_rdma_ut 00:06:28.022 21:28:48 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:28.022 21:28:48 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:28.022 21:28:48 -- common/autotest_common.sh@10 -- # set +x 00:06:28.022 ************************************ 00:06:28.022 START TEST unittest_nvme_rdma 00:06:28.022 ************************************ 00:06:28.022 21:28:48 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_rdma.c/nvme_rdma_ut 00:06:28.022 00:06:28.022 00:06:28.022 CUnit - A unit testing framework for C - Version 2.1-3 00:06:28.022 http://cunit.sourceforge.net/ 00:06:28.022 00:06:28.022 00:06:28.022 Suite: nvme_rdma 00:06:28.023 Test: test_nvme_rdma_build_sgl_request ...[2024-12-06 21:28:48.436241] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1455:nvme_rdma_get_memory_translation: *ERROR*: RDMA memory translation failed, rc -34 00:06:28.023 passed 00:06:28.023 Test: test_nvme_rdma_build_sgl_inline_request ...passed 00:06:28.023 Test: test_nvme_rdma_build_contig_request ...[2024-12-06 21:28:48.436500] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1628:nvme_rdma_build_sgl_request: *ERROR*: SGL length 16777216 exceeds max keyed SGL block size 16777215 00:06:28.023 [2024-12-06 21:28:48.436553] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1684:nvme_rdma_build_sgl_request: *ERROR*: Size of SGL descriptors (64) exceeds ICD (60) 00:06:28.023 passed 00:06:28.023 Test: test_nvme_rdma_build_contig_inline_request ...passed 00:06:28.023 Test: test_nvme_rdma_create_reqs ...[2024-12-06 21:28:48.436656] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1565:nvme_rdma_build_contig_request: *ERROR*: SGL length 16777216 exceeds max keyed SGL block size 16777215 00:06:28.023 [2024-12-06 21:28:48.436779] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1007:nvme_rdma_create_reqs: *ERROR*: Failed to allocate rdma_reqs 00:06:28.023 passed 00:06:28.023 Test: test_nvme_rdma_create_rsps ...[2024-12-06 21:28:48.437136] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 925:nvme_rdma_create_rsps: *ERROR*: Failed to allocate rsp_sgls 00:06:28.023 passed 00:06:28.023 Test: test_nvme_rdma_ctrlr_create_qpair ...passed 00:06:28.023 Test: test_nvme_rdma_poller_create ...[2024-12-06 21:28:48.437332] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1822:nvme_rdma_ctrlr_create_qpair: *ERROR*: Failed to create qpair with size 0. Minimum queue size is 2. 00:06:28.023 [2024-12-06 21:28:48.437366] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1822:nvme_rdma_ctrlr_create_qpair: *ERROR*: Failed to create qpair with size 1. Minimum queue size is 2. 
00:06:28.023 passed 00:06:28.023 Test: test_nvme_rdma_qpair_process_cm_event ...passed 00:06:28.023 Test: test_nvme_rdma_ctrlr_construct ...[2024-12-06 21:28:48.437546] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 526:nvme_rdma_qpair_process_cm_event: *ERROR*: Unexpected Acceptor Event [255] 00:06:28.023 passed 00:06:28.023 Test: test_nvme_rdma_req_put_and_get ...passed 00:06:28.023 Test: test_nvme_rdma_req_init ...passed 00:06:28.023 Test: test_nvme_rdma_validate_cm_event ...[2024-12-06 21:28:48.437900] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 617:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ADDR_RESOLVED but received RDMA_CM_EVENT_CONNECT_RESPONSE (5) from CM event channel (status = 0) 00:06:28.023 [2024-12-06 21:28:48.437954] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 617:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 10) 00:06:28.023 passed 00:06:28.023 Test: test_nvme_rdma_qpair_init ...passed 00:06:28.023 Test: test_nvme_rdma_qpair_submit_request ...passed 00:06:28.023 Test: test_nvme_rdma_memory_domain ...passed 00:06:28.023 Test: test_rdma_ctrlr_get_memory_domains ...passed 00:06:28.023 Test: test_rdma_get_memory_translation ...[2024-12-06 21:28:48.438180] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 352:nvme_rdma_get_memory_domain: *ERROR*: Failed to create memory domain 00:06:28.023 passed 00:06:28.023 Test: test_get_rdma_qpair_from_wc ...passed 00:06:28.023 Test: test_nvme_rdma_ctrlr_get_max_sges ...passed 00:06:28.023 Test: test_nvme_rdma_poll_group_get_stats ...[2024-12-06 21:28:48.438282] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1444:nvme_rdma_get_memory_translation: *ERROR*: DMA memory translation failed, rc -1, iov count 0 00:06:28.023 [2024-12-06 21:28:48.438310] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1455:nvme_rdma_get_memory_translation: *ERROR*: RDMA memory translation failed, rc -1 00:06:28.023 [2024-12-06 21:28:48.438411] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:3239:nvme_rdma_poll_group_get_stats: *ERROR*: Invalid stats or group pointer 00:06:28.023 passed 00:06:28.023 Test: test_nvme_rdma_qpair_set_poller ...[2024-12-06 21:28:48.438472] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:3239:nvme_rdma_poll_group_get_stats: *ERROR*: Invalid stats or group pointer 00:06:28.023 [2024-12-06 21:28:48.438626] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:2972:nvme_rdma_poller_create: *ERROR*: Unable to create CQ, errno 2. 00:06:28.023 [2024-12-06 21:28:48.438674] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:3018:nvme_rdma_poll_group_get_poller: *ERROR*: Failed to create a poller for device 0xfeedbeef 00:06:28.023 [2024-12-06 21:28:48.438710] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 723:nvme_rdma_qpair_set_poller: *ERROR*: Unable to find a cq for qpair 0x7e688620a030 on poll group 0x50b000000040 00:06:28.023 [2024-12-06 21:28:48.438763] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:2972:nvme_rdma_poller_create: *ERROR*: Unable to create CQ, errno 2. 
00:06:28.023 passed[2024-12-06 21:28:48.438793] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:3018:nvme_rdma_poll_group_get_poller: *ERROR*: Failed to create a poller for device (nil) 00:06:28.023 [2024-12-06 21:28:48.438820] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 723:nvme_rdma_qpair_set_poller: *ERROR*: Unable to find a cq for qpair 0x7e688620a030 on poll group 0x50b000000040 00:06:28.023 [2024-12-06 21:28:48.438888] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 701:nvme_rdma_resize_cq: *ERROR*: RDMA CQ resize failed: errno 2: No such file or directory 00:06:28.023 00:06:28.023 00:06:28.023 Run Summary: Type Total Ran Passed Failed Inactive 00:06:28.023 suites 1 1 n/a 0 0 00:06:28.023 tests 22 22 22 0 0 00:06:28.023 asserts 412 412 412 0 n/a 00:06:28.023 00:06:28.023 Elapsed time = 0.003 seconds 00:06:28.023 00:06:28.023 real 0m0.034s 00:06:28.023 user 0m0.017s 00:06:28.023 sys 0m0.017s 00:06:28.023 21:28:48 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:28.023 21:28:48 -- common/autotest_common.sh@10 -- # set +x 00:06:28.023 ************************************ 00:06:28.023 END TEST unittest_nvme_rdma 00:06:28.023 ************************************ 00:06:28.023 21:28:48 -- unit/unittest.sh@227 -- # run_test unittest_nvmf_transport /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/transport.c/transport_ut 00:06:28.023 21:28:48 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:28.023 21:28:48 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:28.023 21:28:48 -- common/autotest_common.sh@10 -- # set +x 00:06:28.023 ************************************ 00:06:28.023 START TEST unittest_nvmf_transport 00:06:28.023 ************************************ 00:06:28.023 21:28:48 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/transport.c/transport_ut 00:06:28.283 00:06:28.283 00:06:28.283 CUnit - A unit testing framework for C - Version 2.1-3 00:06:28.283 http://cunit.sourceforge.net/ 00:06:28.283 00:06:28.283 00:06:28.283 Suite: nvmf 00:06:28.283 Test: test_spdk_nvmf_transport_create ...[2024-12-06 21:28:48.525697] /home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 247:nvmf_transport_create: *ERROR*: Transport type 'new_ops' unavailable. 00:06:28.283 [2024-12-06 21:28:48.526046] /home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 267:nvmf_transport_create: *ERROR*: io_unit_size cannot be 0 00:06:28.283 [2024-12-06 21:28:48.526117] /home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 271:nvmf_transport_create: *ERROR*: io_unit_size 131072 is larger than iobuf pool large buffer size 65536 00:06:28.283 passed 00:06:28.283 Test: test_nvmf_transport_poll_group_create ...[2024-12-06 21:28:48.526208] /home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 254:nvmf_transport_create: *ERROR*: max_io_size 4096 must be a power of 2 and be greater than or equal 8KB 00:06:28.283 passed 00:06:28.283 Test: test_spdk_nvmf_transport_opts_init ...[2024-12-06 21:28:48.526584] /home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 788:spdk_nvmf_transport_opts_init: *ERROR*: Transport type invalid_ops unavailable. 
00:06:28.283 passed 00:06:28.283 Test: test_spdk_nvmf_transport_listen_ext ...[2024-12-06 21:28:48.526636] /home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 793:spdk_nvmf_transport_opts_init: *ERROR*: opts should not be NULL 00:06:28.283 [2024-12-06 21:28:48.526676] /home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 798:spdk_nvmf_transport_opts_init: *ERROR*: opts_size inside opts should not be zero value 00:06:28.283 passed 00:06:28.283 00:06:28.283 Run Summary: Type Total Ran Passed Failed Inactive 00:06:28.283 suites 1 1 n/a 0 0 00:06:28.283 tests 4 4 4 0 0 00:06:28.283 asserts 49 49 49 0 n/a 00:06:28.283 00:06:28.283 Elapsed time = 0.001 seconds 00:06:28.283 00:06:28.283 real 0m0.041s 00:06:28.283 user 0m0.018s 00:06:28.283 sys 0m0.023s 00:06:28.283 21:28:48 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:28.283 21:28:48 -- common/autotest_common.sh@10 -- # set +x 00:06:28.283 ************************************ 00:06:28.283 END TEST unittest_nvmf_transport 00:06:28.283 ************************************ 00:06:28.283 21:28:48 -- unit/unittest.sh@228 -- # run_test unittest_rdma /home/vagrant/spdk_repo/spdk/test/unit/lib/rdma/common.c/common_ut 00:06:28.283 21:28:48 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:28.283 21:28:48 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:28.283 21:28:48 -- common/autotest_common.sh@10 -- # set +x 00:06:28.283 ************************************ 00:06:28.283 START TEST unittest_rdma 00:06:28.283 ************************************ 00:06:28.283 21:28:48 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/rdma/common.c/common_ut 00:06:28.283 00:06:28.283 00:06:28.283 CUnit - A unit testing framework for C - Version 2.1-3 00:06:28.283 http://cunit.sourceforge.net/ 00:06:28.283 00:06:28.283 00:06:28.283 Suite: rdma_common 00:06:28.283 Test: test_spdk_rdma_pd ...[2024-12-06 21:28:48.619274] /home/vagrant/spdk_repo/spdk/lib/rdma/common.c: 533:spdk_rdma_get_pd: *ERROR*: Failed to get PD 00:06:28.283 passed 00:06:28.283 00:06:28.283 [2024-12-06 21:28:48.619570] /home/vagrant/spdk_repo/spdk/lib/rdma/common.c: 533:spdk_rdma_get_pd: *ERROR*: Failed to get PD 00:06:28.283 Run Summary: Type Total Ran Passed Failed Inactive 00:06:28.283 suites 1 1 n/a 0 0 00:06:28.283 tests 1 1 1 0 0 00:06:28.283 asserts 31 31 31 0 n/a 00:06:28.283 00:06:28.283 Elapsed time = 0.001 seconds 00:06:28.283 00:06:28.283 real 0m0.034s 00:06:28.283 user 0m0.018s 00:06:28.283 sys 0m0.015s 00:06:28.283 21:28:48 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:28.283 21:28:48 -- common/autotest_common.sh@10 -- # set +x 00:06:28.283 ************************************ 00:06:28.283 END TEST unittest_rdma 00:06:28.283 ************************************ 00:06:28.283 21:28:48 -- unit/unittest.sh@231 -- # grep -q '#define SPDK_CONFIG_NVME_CUSE 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:06:28.283 21:28:48 -- unit/unittest.sh@232 -- # run_test unittest_nvme_cuse /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_cuse.c/nvme_cuse_ut 00:06:28.283 21:28:48 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:28.283 21:28:48 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:28.283 21:28:48 -- common/autotest_common.sh@10 -- # set +x 00:06:28.283 ************************************ 00:06:28.283 START TEST unittest_nvme_cuse 00:06:28.283 ************************************ 00:06:28.283 21:28:48 -- common/autotest_common.sh@1114 -- # 
/home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_cuse.c/nvme_cuse_ut 00:06:28.283 00:06:28.283 00:06:28.283 CUnit - A unit testing framework for C - Version 2.1-3 00:06:28.283 http://cunit.sourceforge.net/ 00:06:28.283 00:06:28.283 00:06:28.283 Suite: nvme_cuse 00:06:28.283 Test: test_cuse_nvme_submit_io_read_write ...passed 00:06:28.283 Test: test_cuse_nvme_submit_io_read_write_with_md ...passed 00:06:28.283 Test: test_cuse_nvme_submit_passthru_cmd ...passed 00:06:28.283 Test: test_cuse_nvme_submit_passthru_cmd_with_md ...passed 00:06:28.283 Test: test_nvme_cuse_get_cuse_ns_device ...passed 00:06:28.283 Test: test_cuse_nvme_submit_io ...[2024-12-06 21:28:48.703057] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_cuse.c: 656:cuse_nvme_submit_io: *ERROR*: SUBMIT_IO: opc:0 not valid 00:06:28.283 passed 00:06:28.283 Test: test_cuse_nvme_reset ...passed 00:06:28.283 Test: test_nvme_cuse_stop ...[2024-12-06 21:28:48.703295] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_cuse.c: 341:cuse_nvme_reset: *ERROR*: Namespace reset not supported 00:06:28.283 passed 00:06:28.283 Test: test_spdk_nvme_cuse_get_ctrlr_name ...passed 00:06:28.283 00:06:28.283 Run Summary: Type Total Ran Passed Failed Inactive 00:06:28.283 suites 1 1 n/a 0 0 00:06:28.283 tests 9 9 9 0 0 00:06:28.283 asserts 121 121 121 0 n/a 00:06:28.283 00:06:28.283 Elapsed time = 0.002 seconds 00:06:28.283 00:06:28.283 real 0m0.033s 00:06:28.283 user 0m0.015s 00:06:28.283 sys 0m0.019s 00:06:28.283 21:28:48 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:28.283 21:28:48 -- common/autotest_common.sh@10 -- # set +x 00:06:28.283 ************************************ 00:06:28.283 END TEST unittest_nvme_cuse 00:06:28.283 ************************************ 00:06:28.283 21:28:48 -- unit/unittest.sh@235 -- # run_test unittest_nvmf unittest_nvmf 00:06:28.283 21:28:48 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:28.283 21:28:48 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:28.283 21:28:48 -- common/autotest_common.sh@10 -- # set +x 00:06:28.283 ************************************ 00:06:28.283 START TEST unittest_nvmf 00:06:28.283 ************************************ 00:06:28.283 21:28:48 -- common/autotest_common.sh@1114 -- # unittest_nvmf 00:06:28.283 21:28:48 -- unit/unittest.sh@106 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/ctrlr.c/ctrlr_ut 00:06:28.544 00:06:28.544 00:06:28.544 CUnit - A unit testing framework for C - Version 2.1-3 00:06:28.544 http://cunit.sourceforge.net/ 00:06:28.544 00:06:28.544 00:06:28.544 Suite: nvmf 00:06:28.544 Test: test_get_log_page ...[2024-12-06 21:28:48.789583] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:2504:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x2 00:06:28.544 passed 00:06:28.544 Test: test_process_fabrics_cmd ...passed 00:06:28.544 Test: test_connect ...[2024-12-06 21:28:48.790427] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 905:nvmf_ctrlr_cmd_connect: *ERROR*: Connect command data length 0x3ff too small 00:06:28.544 [2024-12-06 21:28:48.790513] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 768:_nvmf_ctrlr_connect: *ERROR*: Connect command unsupported RECFMT 1234 00:06:28.544 [2024-12-06 21:28:48.790559] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 944:nvmf_ctrlr_cmd_connect: *ERROR*: Connect HOSTNQN is not null terminated 00:06:28.544 [2024-12-06 21:28:48.790594] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 715:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:subsystem1' does not allow host 'nqn.2016-06.io.spdk:host1' 
00:06:28.544 [2024-12-06 21:28:48.790634] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 779:_nvmf_ctrlr_connect: *ERROR*: Invalid SQSIZE = 0 00:06:28.544 [2024-12-06 21:28:48.790676] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 786:_nvmf_ctrlr_connect: *ERROR*: Invalid SQSIZE for admin queue 32 (min 1, max 31) 00:06:28.544 [2024-12-06 21:28:48.790717] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 792:_nvmf_ctrlr_connect: *ERROR*: Invalid SQSIZE 64 (min 1, max 63) 00:06:28.544 [2024-12-06 21:28:48.790754] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 819:_nvmf_ctrlr_connect: *ERROR*: The NVMf target only supports dynamic mode (CNTLID = 0x1234). 00:06:28.544 [2024-12-06 21:28:48.790871] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0xffff 00:06:28.544 [2024-12-06 21:28:48.790964] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 587:nvmf_ctrlr_add_io_qpair: *ERROR*: I/O connect not allowed on discovery controller 00:06:28.544 [2024-12-06 21:28:48.791247] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 593:nvmf_ctrlr_add_io_qpair: *ERROR*: Got I/O connect before ctrlr was enabled 00:06:28.544 [2024-12-06 21:28:48.791381] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 599:nvmf_ctrlr_add_io_qpair: *ERROR*: Got I/O connect with invalid IOSQES 3 00:06:28.544 [2024-12-06 21:28:48.791486] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 606:nvmf_ctrlr_add_io_qpair: *ERROR*: Got I/O connect with invalid IOCQES 3 00:06:28.544 [2024-12-06 21:28:48.791553] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 623:nvmf_ctrlr_add_io_qpair: *ERROR*: Requested QID 3 but Max QID is 2 00:06:28.544 [2024-12-06 21:28:48.791682] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 232:ctrlr_add_qpair_and_send_rsp: *ERROR*: Got I/O connect with duplicate QID 1 00:06:28.544 [2024-12-06 21:28:48.791806] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 699:_nvmf_ctrlr_add_io_qpair: *ERROR*: Inactive admin qpair (state 2, group (nil)) 00:06:28.544 passed 00:06:28.544 Test: test_get_ns_id_desc_list ...passed 00:06:28.544 Test: test_identify_ns ...[2024-12-06 21:28:48.792148] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:2598:_nvmf_subsystem_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:06:28.544 [2024-12-06 21:28:48.792361] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:2598:_nvmf_subsystem_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4 00:06:28.544 passed 00:06:28.544 Test: test_identify_ns_iocs_specific ...[2024-12-06 21:28:48.792519] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:2598:_nvmf_subsystem_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295 00:06:28.544 [2024-12-06 21:28:48.792669] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:2598:_nvmf_subsystem_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:06:28.544 [2024-12-06 21:28:48.792952] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:2598:_nvmf_subsystem_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:06:28.544 passed 00:06:28.544 Test: test_reservation_write_exclusive ...passed 00:06:28.544 Test: test_reservation_exclusive_access ...passed 00:06:28.544 Test: test_reservation_write_exclusive_regs_only_and_all_regs ...passed 00:06:28.544 Test: test_reservation_exclusive_access_regs_only_and_all_regs ...passed 00:06:28.544 Test: test_reservation_notification_log_page ...passed 00:06:28.544 Test: test_get_dif_ctx ...passed 00:06:28.544 Test: test_set_get_features ...[2024-12-06 21:28:48.793526] 
/home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1534:temp_threshold_opts_valid: *ERROR*: Invalid TMPSEL 9 00:06:28.544 passed 00:06:28.544 Test: test_identify_ctrlr ...passed 00:06:28.544 Test: test_identify_ctrlr_iocs_specific ...passed 00:06:28.544 Test: test_custom_admin_cmd ...passed 00:06:28.544 Test: test_fused_compare_and_write ...passed 00:06:28.544 Test: test_multi_async_event_reqs ...passed 00:06:28.544 Test: test_get_ana_log_page_one_ns_per_anagrp ...passed 00:06:28.544 Test: test_get_ana_log_page_multi_ns_per_anagrp ...passed 00:06:28.544 Test: test_multi_async_events ...passed 00:06:28.544 Test: test_rae ...passed 00:06:28.544 Test: test_nvmf_ctrlr_create_destruct ...[2024-12-06 21:28:48.793585] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1534:temp_threshold_opts_valid: *ERROR*: Invalid TMPSEL 9 00:06:28.544 [2024-12-06 21:28:48.793616] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1545:temp_threshold_opts_valid: *ERROR*: Invalid THSEL 3 00:06:28.544 [2024-12-06 21:28:48.793663] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1621:nvmf_ctrlr_set_features_error_recovery: *ERROR*: Host set unsupported DULBE bit 00:06:28.544 [2024-12-06 21:28:48.794181] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:4105:nvmf_ctrlr_process_io_fused_cmd: *ERROR*: Wrong sequence of fused operations 00:06:28.544 [2024-12-06 21:28:48.794247] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:4094:nvmf_ctrlr_process_io_fused_cmd: *ERROR*: Wrong op code of fused operations 00:06:28.544 [2024-12-06 21:28:48.794279] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:4112:nvmf_ctrlr_process_io_fused_cmd: *ERROR*: Wrong op code of fused operations 00:06:28.544 passed 00:06:28.544 Test: test_nvmf_ctrlr_use_zcopy ...passed 00:06:28.544 Test: test_spdk_nvmf_request_zcopy_start ...passed 00:06:28.544 Test: test_zcopy_read ...passed 00:06:28.544 Test: test_zcopy_write ...passed 00:06:28.544 Test: test_nvmf_property_set ...passed 00:06:28.544 Test: test_nvmf_ctrlr_get_features_host_behavior_support ...passed 00:06:28.544 Test: test_nvmf_ctrlr_set_features_host_behavior_support ...passed 00:06:28.544 00:06:28.544 Run Summary: Type Total Ran Passed Failed Inactive 00:06:28.544 suites 1 1 n/a 0 0 00:06:28.544 tests 30 30 30 0 0 00:06:28.544 asserts 885 885 885 0 n/a 00:06:28.544 00:06:28.544 Elapsed time = 0.006 seconds 00:06:28.544 [2024-12-06 21:28:48.794869] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:4232:nvmf_ctrlr_process_io_cmd: *ERROR*: I/O command sent before CONNECT 00:06:28.544 [2024-12-06 21:28:48.795077] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1832:nvmf_ctrlr_get_features_host_behavior_support: *ERROR*: invalid data buffer for Host Behavior Support 00:06:28.544 [2024-12-06 21:28:48.795127] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1832:nvmf_ctrlr_get_features_host_behavior_support: *ERROR*: invalid data buffer for Host Behavior Support 00:06:28.544 [2024-12-06 21:28:48.795155] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1855:nvmf_ctrlr_set_features_host_behavior_support: *ERROR*: Host Behavior Support invalid iovcnt: 0 00:06:28.544 [2024-12-06 21:28:48.795183] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1861:nvmf_ctrlr_set_features_host_behavior_support: *ERROR*: Host Behavior Support invalid iov_len: 0 00:06:28.544 [2024-12-06 21:28:48.795219] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1873:nvmf_ctrlr_set_features_host_behavior_support: *ERROR*: Host Behavior Support invalid acre: 0x02 00:06:28.544 21:28:48 -- unit/unittest.sh@107 -- # 
/home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/ctrlr_bdev.c/ctrlr_bdev_ut 00:06:28.544 00:06:28.544 00:06:28.544 CUnit - A unit testing framework for C - Version 2.1-3 00:06:28.544 http://cunit.sourceforge.net/ 00:06:28.544 00:06:28.544 00:06:28.544 Suite: nvmf 00:06:28.544 Test: test_get_rw_params ...passed 00:06:28.544 Test: test_lba_in_range ...passed 00:06:28.544 Test: test_get_dif_ctx ...passed 00:06:28.544 Test: test_nvmf_bdev_ctrlr_identify_ns ...passed 00:06:28.544 Test: test_spdk_nvmf_bdev_ctrlr_compare_and_write_cmd ...passed 00:06:28.544 Test: test_nvmf_bdev_ctrlr_zcopy_start ...passed 00:06:28.544 Test: test_nvmf_bdev_ctrlr_cmd ...passed 00:06:28.544 Test: test_nvmf_bdev_ctrlr_read_write_cmd ...passed 00:06:28.544 Test: test_nvmf_bdev_ctrlr_nvme_passthru ...passed 00:06:28.544 00:06:28.544 [2024-12-06 21:28:48.825004] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 435:nvmf_bdev_ctrlr_compare_and_write_cmd: *ERROR*: Fused command start lba / num blocks mismatch 00:06:28.544 [2024-12-06 21:28:48.825207] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 443:nvmf_bdev_ctrlr_compare_and_write_cmd: *ERROR*: end of media 00:06:28.544 [2024-12-06 21:28:48.825262] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 450:nvmf_bdev_ctrlr_compare_and_write_cmd: *ERROR*: Write NLB 2 * block size 512 > SGL length 1023 00:06:28.544 [2024-12-06 21:28:48.825318] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 946:nvmf_bdev_ctrlr_zcopy_start: *ERROR*: end of media 00:06:28.544 [2024-12-06 21:28:48.825354] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 953:nvmf_bdev_ctrlr_zcopy_start: *ERROR*: Read NLB 2 * block size 512 > SGL length 1023 00:06:28.544 [2024-12-06 21:28:48.825397] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 389:nvmf_bdev_ctrlr_compare_cmd: *ERROR*: end of media 00:06:28.544 [2024-12-06 21:28:48.825435] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 396:nvmf_bdev_ctrlr_compare_cmd: *ERROR*: Compare NLB 3 * block size 512 > SGL length 512 00:06:28.544 [2024-12-06 21:28:48.825528] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 488:nvmf_bdev_ctrlr_write_zeroes_cmd: *ERROR*: invalid write zeroes size, should not exceed 1Kib 00:06:28.544 [2024-12-06 21:28:48.825561] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 495:nvmf_bdev_ctrlr_write_zeroes_cmd: *ERROR*: end of media 00:06:28.544 Run Summary: Type Total Ran Passed Failed Inactive 00:06:28.544 suites 1 1 n/a 0 0 00:06:28.544 tests 9 9 9 0 0 00:06:28.544 asserts 157 157 157 0 n/a 00:06:28.544 00:06:28.544 Elapsed time = 0.001 seconds 00:06:28.544 21:28:48 -- unit/unittest.sh@108 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/ctrlr_discovery.c/ctrlr_discovery_ut 00:06:28.544 00:06:28.544 00:06:28.544 CUnit - A unit testing framework for C - Version 2.1-3 00:06:28.544 http://cunit.sourceforge.net/ 00:06:28.544 00:06:28.544 00:06:28.544 Suite: nvmf 00:06:28.544 Test: test_discovery_log ...passed 00:06:28.545 Test: test_discovery_log_with_filters ...passed 00:06:28.545 00:06:28.545 Run Summary: Type Total Ran Passed Failed Inactive 00:06:28.545 suites 1 1 n/a 0 0 00:06:28.545 tests 2 2 2 0 0 00:06:28.545 asserts 238 238 238 0 n/a 00:06:28.545 00:06:28.545 Elapsed time = 0.003 seconds 00:06:28.545 21:28:48 -- unit/unittest.sh@109 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/subsystem.c/subsystem_ut 00:06:28.545 00:06:28.545 00:06:28.545 CUnit - A unit testing framework for C - Version 2.1-3 00:06:28.545 http://cunit.sourceforge.net/ 00:06:28.545 00:06:28.545 00:06:28.545 Suite: nvmf 
00:06:28.545 Test: nvmf_test_create_subsystem ...[2024-12-06 21:28:48.898661] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 125:nvmf_nqn_is_valid: *ERROR*: Invalid NQN "nqn.2016-06.io.spdk:". NQN must contain user specified name with a ':' as a prefix. 00:06:28.545 [2024-12-06 21:28:48.898979] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 134:nvmf_nqn_is_valid: *ERROR*: Invalid domain name in NQN "nqn.2016-06.io.abcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyz:sub". At least one Label is too long. 00:06:28.545 [2024-12-06 21:28:48.899031] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 146:nvmf_nqn_is_valid: *ERROR*: Invalid domain name in NQN "nqn.2016-06.io.3spdk:sub". Label names must start with a letter. 00:06:28.545 [2024-12-06 21:28:48.899058] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 146:nvmf_nqn_is_valid: *ERROR*: Invalid domain name in NQN "nqn.2016-06.io.-spdk:subsystem1". Label names must start with a letter. 00:06:28.545 [2024-12-06 21:28:48.899093] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 183:nvmf_nqn_is_valid: *ERROR*: Invalid domain name in NQN "nqn.2016-06.io.spdk-:subsystem1". Label names must end with an alphanumeric symbol. 00:06:28.545 [2024-12-06 21:28:48.899117] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 146:nvmf_nqn_is_valid: *ERROR*: Invalid domain name in NQN "nqn.2016-06.io..spdk:subsystem1". Label names must start with a letter. 00:06:28.545 [2024-12-06 21:28:48.899213] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 79:nvmf_nqn_is_valid: *ERROR*: Invalid NQN "nqn.2016-06.io.spdk:aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa": length 224 > max 223 00:06:28.545 [2024-12-06 21:28:48.899324] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 207:nvmf_nqn_is_valid: *ERROR*: Invalid domain name in NQN "nqn.2016-06.io.spdk:�subsystem1". Label names must contain only valid utf-8. 
00:06:28.545 passed 00:06:28.545 Test: test_spdk_nvmf_subsystem_add_ns ...passed 00:06:28.545 Test: test_spdk_nvmf_subsystem_set_sn ...passed 00:06:28.545 Test: test_reservation_register ...passed 00:06:28.545 Test: test_reservation_register_with_ptpl ...[2024-12-06 21:28:48.899425] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 97:nvmf_nqn_is_valid: *ERROR*: Invalid NQN "nqn.2014-08.org.nvmexpress:uuid:ff9b6406-0fc8-4779-80ca-4dca14bda0d2aaaa": uuid is not the correct length 00:06:28.545 [2024-12-06 21:28:48.899482] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 102:nvmf_nqn_is_valid: *ERROR*: Invalid NQN "nqn.2014-08.org.nvmexpress:uuid:ff9b64-060fc8-4779-80ca-4dca14bda0d2": uuid is not formatted correctly 00:06:28.545 [2024-12-06 21:28:48.899515] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 102:nvmf_nqn_is_valid: *ERROR*: Invalid NQN "nqn.2014-08.org.nvmexpress:uuid:ff9hg406-0fc8-4779-80ca-4dca14bda0d2": uuid is not formatted correctly 00:06:28.545 [2024-12-06 21:28:48.899730] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 5 already in use 00:06:28.545 [2024-12-06 21:28:48.899781] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:1774:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Invalid NSID 4294967295 00:06:28.545 [2024-12-06 21:28:48.900033] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:2823:nvmf_ns_reservation_register: *ERROR*: The same host already register a key with 0xa1 00:06:28.545 [2024-12-06 21:28:48.900148] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:2881:nvmf_ns_reservation_register: *ERROR*: No registrant 00:06:28.545 passed 00:06:28.545 Test: test_reservation_acquire_preempt_1 ...passed 00:06:28.545 Test: test_reservation_acquire_release_with_ptpl ...[2024-12-06 21:28:48.901602] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:2823:nvmf_ns_reservation_register: *ERROR*: The same host already register a key with 0xa1 00:06:28.545 passed 00:06:28.545 Test: test_reservation_release ...passed 00:06:28.545 Test: test_reservation_unregister_notification ...[2024-12-06 21:28:48.903691] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:2823:nvmf_ns_reservation_register: *ERROR*: The same host already register a key with 0xa1 00:06:28.545 passed 00:06:28.545 Test: test_reservation_release_notification ...[2024-12-06 21:28:48.903938] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:2823:nvmf_ns_reservation_register: *ERROR*: The same host already register a key with 0xa1 00:06:28.545 [2024-12-06 21:28:48.904204] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:2823:nvmf_ns_reservation_register: *ERROR*: The same host already register a key with 0xa1 00:06:28.545 passed 00:06:28.545 Test: test_reservation_release_notification_write_exclusive ...[2024-12-06 21:28:48.904389] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:2823:nvmf_ns_reservation_register: *ERROR*: The same host already register a key with 0xa1 00:06:28.545 passed 00:06:28.545 Test: test_reservation_clear_notification ...passed 00:06:28.545 Test: test_reservation_preempt_notification ...passed 00:06:28.545 Test: test_spdk_nvmf_ns_event ...[2024-12-06 21:28:48.904842] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:2823:nvmf_ns_reservation_register: *ERROR*: The same host already register a key with 0xa1 00:06:28.545 [2024-12-06 21:28:48.905012] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:2823:nvmf_ns_reservation_register: *ERROR*: The same host already register a key with 0xa1 00:06:28.545 passed 00:06:28.545 Test: 
test_nvmf_ns_reservation_add_remove_registrant ...passed 00:06:28.545 Test: test_nvmf_subsystem_add_ctrlr ...passed 00:06:28.545 Test: test_spdk_nvmf_subsystem_add_host ...passed 00:06:28.545 Test: test_nvmf_ns_reservation_report ...passed 00:06:28.545 Test: test_nvmf_nqn_is_valid ...passed 00:06:28.545 Test: test_nvmf_ns_reservation_restore ...passed 00:06:28.545 Test: test_nvmf_subsystem_state_change ...passed[2024-12-06 21:28:48.905760] /home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 260:nvmf_transport_create: *ERROR*: max_aq_depth 0 is less than minimum defined by NVMf spec, use min value 00:06:28.545 [2024-12-06 21:28:48.905823] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 880:spdk_nvmf_subsystem_add_host: *ERROR*: Unable to add host to transport_ut transport 00:06:28.545 [2024-12-06 21:28:48.905947] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:3186:nvmf_ns_reservation_report: *ERROR*: NVMeoF uses extended controller data structure, please set EDS bit in cdw11 and try again 00:06:28.545 [2024-12-06 21:28:48.906015] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 85:nvmf_nqn_is_valid: *ERROR*: Invalid NQN "nqn.": length 4 < min 11 00:06:28.545 [2024-12-06 21:28:48.906039] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 97:nvmf_nqn_is_valid: *ERROR*: Invalid NQN "nqn.2014-08.org.nvmexpress:uuid:f4cbe72a-8a70-47d9-9275-cb996568ead": uuid is not the correct length 00:06:28.545 [2024-12-06 21:28:48.906057] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 146:nvmf_nqn_is_valid: *ERROR*: Invalid domain name in NQN "nqn.2016-06.io...spdk:cnode1". Label names must start with a letter. 00:06:28.545 [2024-12-06 21:28:48.906163] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:2380:nvmf_ns_reservation_restore: *ERROR*: Existing bdev UUID is not same with configuration file 00:06:28.545 00:06:28.545 Test: test_nvmf_reservation_custom_ops ...passed 00:06:28.545 00:06:28.545 Run Summary: Type Total Ran Passed Failed Inactive 00:06:28.545 suites 1 1 n/a 0 0 00:06:28.545 tests 22 22 22 0 0 00:06:28.545 asserts 407 407 407 0 n/a 00:06:28.545 00:06:28.545 Elapsed time = 0.008 seconds 00:06:28.545 21:28:48 -- unit/unittest.sh@110 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/tcp.c/tcp_ut 00:06:28.545 00:06:28.545 00:06:28.545 CUnit - A unit testing framework for C - Version 2.1-3 00:06:28.545 http://cunit.sourceforge.net/ 00:06:28.545 00:06:28.545 00:06:28.545 Suite: nvmf 00:06:28.545 Test: test_nvmf_tcp_create ...passed 00:06:28.545 Test: test_nvmf_tcp_destroy ...[2024-12-06 21:28:48.969108] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c: 732:nvmf_tcp_create: *ERROR*: Unsupported IO Unit size specified, 16 bytes 00:06:28.545 passed 00:06:28.545 Test: test_nvmf_tcp_poll_group_create ...passed 00:06:28.805 Test: test_nvmf_tcp_send_c2h_data ...passed 00:06:28.805 Test: test_nvmf_tcp_h2c_data_hdr_handle ...passed 00:06:28.805 Test: test_nvmf_tcp_in_capsule_data_handle ...passed 00:06:28.805 Test: test_nvmf_tcp_qpair_init_mem_resource ...passed 00:06:28.805 Test: test_nvmf_tcp_send_c2h_term_req ...[2024-12-06 21:28:49.084152] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1072:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:06:28.805 [2024-12-06 21:28:49.084450] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7079df50b020 is same with the state(5) to be set 00:06:28.805 [2024-12-06 21:28:49.084701] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: 
The recv state of tqpair=0x7079df50b020 is same with the state(5) to be set 00:06:28.805 [2024-12-06 21:28:49.084953] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1072:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:06:28.805 [2024-12-06 21:28:49.085152] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv spassed 00:06:28.805 Test: test_nvmf_tcp_send_capsule_resp_pdu ...passed 00:06:28.805 Test: test_nvmf_tcp_icreq_handle ...tate of tqpair=0x7079df50b020 is same with the state(5) to be set 00:06:28.805 [2024-12-06 21:28:49.085494] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:2091:nvmf_tcp_icreq_handle: *ERROR*: Expected ICReq PFV 0, got 1 00:06:28.805 [2024-12-06 21:28:49.085550] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1072:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:06:28.805 [2024-12-06 21:28:49.085590] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7079df50d180 is same with the state(5) to be set 00:06:28.805 [2024-12-06 21:28:49.085615] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:2091:nvmf_tcp_icreq_handle: *ERROR*: Expected ICReq PFV 0, got 1 00:06:28.805 passed 00:06:28.805 Test: test_nvmf_tcp_check_xfer_type ...passed 00:06:28.805 Test: test_nvmf_tcp_invalid_sgl ...[2024-12-06 21:28:49.085649] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7079df50d180 is same with the state(5) to be set 00:06:28.805 [2024-12-06 21:28:49.085680] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1072:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:06:28.805 [2024-12-06 21:28:49.085714] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7079df50d180 is same with the state(5) to be set 00:06:28.805 [2024-12-06 21:28:49.085757] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1072:_tcp_write_pdu: *ERROR*: Could not write IC_RESP to socket: rc=0, errno=2 00:06:28.805 [2024-12-06 21:28:49.085799] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7079df50d180 is same with the state(5) to be set 00:06:28.805 passed 00:06:28.805 Test: test_nvmf_tcp_pdu_ch_handle ...[2024-12-06 21:28:49.085892] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:2486:nvmf_tcp_req_parse_sgl: *ERROR*: SGL length 0x1001 exceeds max io size 0x1000 00:06:28.805 [2024-12-06 21:28:49.085929] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1072:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:06:28.806 [2024-12-06 21:28:49.085963] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7079df5116a0 is same with the state(5) to be set 00:06:28.806 [2024-12-06 21:28:49.086011] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:2218:nvmf_tcp_pdu_ch_handle: *ERROR*: Already received ICreq PDU, and reject this pdu=0x7079df40c8c0 00:06:28.806 [2024-12-06 21:28:49.086053] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1072:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:06:28.806 [2024-12-06 21:28:49.086080] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7079df40c020 is same with the state(5) to be set 00:06:28.806 [2024-12-06 21:28:49.086113] 
/home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:2275:nvmf_tcp_pdu_ch_handle: *ERROR*: PDU type=0x00, Expected ICReq header length 128, got 0 on tqpair=0x7079df40c020 00:06:28.806 [2024-12-06 21:28:49.086152] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1072:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:06:28.806 [2024-12-06 21:28:49.086176] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7079df40c020 is same with the state(5) to be set 00:06:28.806 [2024-12-06 21:28:49.086214] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:2228:nvmf_tcp_pdu_ch_handle: *ERROR*: The TCP/IP connection is not negotiated 00:06:28.806 [2024-12-06 21:28:49.086241] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1072:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:06:28.806 [2024-12-06 21:28:49.086276] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7079df40c020 is same with the state(5) to be set 00:06:28.806 [2024-12-06 21:28:49.086306] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:2267:nvmf_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x05 00:06:28.806 [2024-12-06 21:28:49.086328] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1072:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:06:28.806 [2024-12-06 21:28:49.086365] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7079df40c020 is same with the state(5) to be set 00:06:28.806 [2024-12-06 21:28:49.086405] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1072:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:06:28.806 [2024-12-06 21:28:49.086429] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7079df40c020 is same with the state(5) to be set 00:06:28.806 [2024-12-06 21:28:49.086483] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1072:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:06:28.806 [2024-12-06 21:28:49.086521] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7079df40c020 is same with the state(5) to be set 00:06:28.806 [2024-12-06 21:28:49.086556] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1072:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:06:28.806 [2024-12-06 21:28:49.086580] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7079df40c020 is same with the state(5) to be set 00:06:28.806 [2024-12-06 21:28:49.086618] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1072:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:06:28.806 [2024-12-06 21:28:49.086651] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7079df40c020 is same with the state(5) to be set 00:06:28.806 [2024-12-06 21:28:49.086689] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1072:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:06:28.806 passed 00:06:28.806 Test: test_nvmf_tcp_tls_add_remove_credentials ...[2024-12-06 21:28:49.086726] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7079df40c020 is same with the state(5) to be set 00:06:28.806 [2024-12-06 
21:28:49.086764] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1072:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:06:28.806 [2024-12-06 21:28:49.086797] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1576:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7079df40c020 is same with the state(5) to be set 00:06:28.806 passed 00:06:28.806 Test: test_nvmf_tcp_tls_generate_psk_id ...passed 00:06:28.806 Test: test_nvmf_tcp_tls_generate_retained_psk ...[2024-12-06 21:28:49.118071] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 591:nvme_tcp_generate_psk_identity: *ERROR*: Out buffer too small! 00:06:28.806 [2024-12-06 21:28:49.118135] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 602:nvme_tcp_generate_psk_identity: *ERROR*: Unknown cipher suite requested! 00:06:28.806 passed 00:06:28.806 Test: test_nvmf_tcp_tls_generate_tls_psk ...[2024-12-06 21:28:49.119103] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 658:nvme_tcp_derive_retained_psk: *ERROR*: Unknown PSK hash requested! 00:06:28.806 [2024-12-06 21:28:49.119171] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 663:nvme_tcp_derive_retained_psk: *ERROR*: Insufficient buffer size for out key! 00:06:28.806 passed 00:06:28.806 00:06:28.806 Run Summary: Type Total Ran Passed Failed Inactive 00:06:28.806 suites 1 1 n/a 0 0 00:06:28.806 tests 17 17 17 0 0 00:06:28.806 asserts 222 222 222 0 n/a 00:06:28.806 00:06:28.806 Elapsed time = 0.170 seconds 00:06:28.806 [2024-12-06 21:28:49.119861] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 732:nvme_tcp_derive_tls_psk: *ERROR*: Unknown cipher suite requested! 00:06:28.806 [2024-12-06 21:28:49.119910] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 756:nvme_tcp_derive_tls_psk: *ERROR*: Insufficient buffer size for out key! 
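
Each *_ut binary invoked by unittest.sh in this log is a standalone CUnit program: it registers a suite, adds tests, runs them in verbose mode, and prints the per-test "...passed" markers and the Run Summary tables seen above; the expected *ERROR* lines interleave with those markers because the test binary and the library log to the same stream. A minimal sketch of that harness, using only stock CUnit (suite and test names here are illustrative, not SPDK's):

    #include <CUnit/Basic.h>

    /* Illustrative negative-path test: the suites above deliberately feed
     * bad input and assert it is rejected, so *ERROR* output is expected. */
    static void test_example(void)
    {
        CU_ASSERT(1 + 1 == 2);
    }

    int main(void)
    {
        CU_pSuite suite;

        if (CU_initialize_registry() != CUE_SUCCESS)
            return CU_get_error();

        suite = CU_add_suite("example", NULL, NULL);  /* prints "Suite: example" */
        if (suite == NULL ||
            CU_add_test(suite, "test_example", test_example) == NULL) {
            CU_cleanup_registry();
            return CU_get_error();
        }

        CU_basic_set_mode(CU_BRM_VERBOSE);  /* per-test "...passed" lines */
        CU_basic_run_tests();               /* prints the Run Summary table */
        CU_cleanup_registry();
        return CU_get_error();
    }
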
00:06:28.806 21:28:49 -- unit/unittest.sh@111 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/nvmf.c/nvmf_ut 00:06:28.806 00:06:28.806 00:06:28.806 CUnit - A unit testing framework for C - Version 2.1-3 00:06:28.806 http://cunit.sourceforge.net/ 00:06:28.806 00:06:28.806 00:06:28.806 Suite: nvmf 00:06:28.806 Test: test_nvmf_tgt_create_poll_group ...passed 00:06:28.806 00:06:28.806 Run Summary: Type Total Ran Passed Failed Inactive 00:06:28.806 suites 1 1 n/a 0 0 00:06:28.806 tests 1 1 1 0 0 00:06:28.806 asserts 17 17 17 0 n/a 00:06:28.806 00:06:28.806 Elapsed time = 0.030 seconds 00:06:28.806 00:06:28.806 real 0m0.525s 00:06:28.806 user 0m0.216s 00:06:28.806 sys 0m0.303s 00:06:28.806 21:28:49 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:28.806 21:28:49 -- common/autotest_common.sh@10 -- # set +x 00:06:28.806 ************************************ 00:06:28.806 END TEST unittest_nvmf 00:06:28.806 ************************************ 00:06:29.065 21:28:49 -- unit/unittest.sh@236 -- # grep -q '#define SPDK_CONFIG_FC 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:06:29.065 21:28:49 -- unit/unittest.sh@241 -- # grep -q '#define SPDK_CONFIG_RDMA 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:06:29.065 21:28:49 -- unit/unittest.sh@242 -- # run_test unittest_nvmf_rdma /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/rdma.c/rdma_ut 00:06:29.066 21:28:49 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:29.066 21:28:49 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:29.066 21:28:49 -- common/autotest_common.sh@10 -- # set +x 00:06:29.066 ************************************ 00:06:29.066 START TEST unittest_nvmf_rdma 00:06:29.066 ************************************ 00:06:29.066 21:28:49 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/rdma.c/rdma_ut 00:06:29.066 00:06:29.066 00:06:29.066 CUnit - A unit testing framework for C - Version 2.1-3 00:06:29.066 http://cunit.sourceforge.net/ 00:06:29.066 00:06:29.066 00:06:29.066 Suite: nvmf 00:06:29.066 Test: test_spdk_nvmf_rdma_request_parse_sgl ...passed 00:06:29.066 Test: test_spdk_nvmf_rdma_request_process ...[2024-12-06 21:28:49.369957] /home/vagrant/spdk_repo/spdk/lib/nvmf/rdma.c:1916:nvmf_rdma_request_parse_sgl: *ERROR*: SGL length 0x40000 exceeds max io size 0x20000 00:06:29.066 [2024-12-06 21:28:49.370194] /home/vagrant/spdk_repo/spdk/lib/nvmf/rdma.c:1966:nvmf_rdma_request_parse_sgl: *ERROR*: In-capsule data length 0x1000 exceeds capsule length 0x0 00:06:29.066 [2024-12-06 21:28:49.370248] /home/vagrant/spdk_repo/spdk/lib/nvmf/rdma.c:1966:nvmf_rdma_request_parse_sgl: *ERROR*: In-capsule data length 0x2000 exceeds capsule length 0x1000 00:06:29.066 passed 00:06:29.066 Test: test_nvmf_rdma_get_optimal_poll_group ...passed 00:06:29.066 Test: test_spdk_nvmf_rdma_request_parse_sgl_with_md ...passed 00:06:29.066 Test: test_nvmf_rdma_opts_init ...passed 00:06:29.066 Test: test_nvmf_rdma_request_free_data ...passed 00:06:29.066 Test: test_nvmf_rdma_update_ibv_state ...passed 00:06:29.066 Test: test_nvmf_rdma_resources_create ...[2024-12-06 21:28:49.372031] /home/vagrant/spdk_repo/spdk/lib/nvmf/rdma.c: 616:nvmf_rdma_update_ibv_state: *ERROR*: Failed to get updated RDMA queue pair state! 
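
The parse_sgl rejections above are bounds checks applied before any data movement: a keyed SGL longer than the transport's maximum I/O size, or in-capsule data longer than the advertised capsule size, fails the request. A self-contained sketch of that validation under assumed names (check_sgl and its error format are illustrative, not the rdma.c internals):

    #include <inttypes.h>
    #include <stdint.h>
    #include <stdio.h>

    /* Hypothetical mirror of the two bounds checks behind the messages above. */
    static int check_sgl(uint64_t sgl_len, uint64_t max_io_size,
                         uint64_t icd_len, uint64_t capsule_len)
    {
        if (sgl_len > max_io_size) {
            fprintf(stderr, "SGL length 0x%" PRIx64 " exceeds max io size 0x%" PRIx64 "\n",
                    sgl_len, max_io_size);
            return -1;
        }
        if (icd_len > capsule_len) {
            fprintf(stderr, "In-capsule data length 0x%" PRIx64
                    " exceeds capsule length 0x%" PRIx64 "\n", icd_len, capsule_len);
            return -1;
        }
        return 0;
    }

    int main(void)
    {
        /* Reproduces the rejected cases from the log: 0x40000 > 0x20000,
         * and 0x1000 in-capsule bytes against a 0x0-byte capsule. */
        check_sgl(0x40000, 0x20000, 0, 0);
        check_sgl(0x1000, 0x20000, 0x1000, 0x0);
        return 0;
    }
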
00:06:29.066 [2024-12-06 21:28:49.372111] /home/vagrant/spdk_repo/spdk/lib/nvmf/rdma.c: 627:nvmf_rdma_update_ibv_state: *ERROR*: QP#0: bad state updated: 10, maybe hardware issue 00:06:29.066 passed 00:06:29.066 Test: test_nvmf_rdma_qpair_compare ...passed 00:06:29.066 Test: test_nvmf_rdma_resize_cq ...[2024-12-06 21:28:49.373520] /home/vagrant/spdk_repo/spdk/lib/nvmf/rdma.c:1008:nvmf_rdma_resize_cq: *ERROR*: iWARP doesn't support CQ resize. Current capacity 20, required 0 00:06:29.066 Using CQ of insufficient size may lead to CQ overrun 00:06:29.066 [2024-12-06 21:28:49.373574] /home/vagrant/spdk_repo/spdk/lib/nvmf/rdma.c:1013:nvmf_rdma_resize_cq: *ERROR*: RDMA CQE requirement (26) exceeds device max_cqe limitation (3) 00:06:29.066 passed 00:06:29.066 00:06:29.066 Run Summary: Type Total Ran Passed Failed Inactive 00:06:29.066 suites 1 1 n/a 0 0 00:06:29.066 tests 10 10 10 0 0 00:06:29.066 asserts 584 584 584 0 n/a 00:06:29.066 00:06:29.066 Elapsed time = 0.004 seconds 00:06:29.066 [2024-12-06 21:28:49.373615] /home/vagrant/spdk_repo/spdk/lib/nvmf/rdma.c:1021:nvmf_rdma_resize_cq: *ERROR*: RDMA CQ resize failed: errno 2: No such file or directory 00:06:29.066 00:06:29.066 real 0m0.043s 00:06:29.066 user 0m0.018s 00:06:29.066 sys 0m0.025s 00:06:29.066 21:28:49 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:29.066 ************************************ 00:06:29.066 END TEST unittest_nvmf_rdma 00:06:29.066 ************************************ 00:06:29.066 21:28:49 -- common/autotest_common.sh@10 -- # set +x 00:06:29.066 21:28:49 -- unit/unittest.sh@245 -- # grep -q '#define SPDK_CONFIG_VFIO_USER 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:06:29.066 21:28:49 -- unit/unittest.sh@249 -- # run_test unittest_scsi unittest_scsi 00:06:29.066 21:28:49 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:29.066 21:28:49 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:29.066 21:28:49 -- common/autotest_common.sh@10 -- # set +x 00:06:29.066 ************************************ 00:06:29.066 START TEST unittest_scsi 00:06:29.066 ************************************ 00:06:29.066 21:28:49 -- common/autotest_common.sh@1114 -- # unittest_scsi 00:06:29.066 21:28:49 -- unit/unittest.sh@115 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/scsi/dev.c/dev_ut 00:06:29.066 00:06:29.066 00:06:29.066 CUnit - A unit testing framework for C - Version 2.1-3 00:06:29.066 http://cunit.sourceforge.net/ 00:06:29.066 00:06:29.066 00:06:29.066 Suite: dev_suite 00:06:29.066 Test: dev_destruct_null_dev ...passed 00:06:29.066 Test: dev_destruct_zero_luns ...passed 00:06:29.066 Test: dev_destruct_null_lun ...passed 00:06:29.066 Test: dev_destruct_success ...passed 00:06:29.066 Test: dev_construct_num_luns_zero ...passed 00:06:29.066 Test: dev_construct_no_lun_zero ...passed 00:06:29.066 Test: dev_construct_null_lun ...passed 00:06:29.066 Test: dev_construct_name_too_long ...passed 00:06:29.066 Test: dev_construct_success ...passed 00:06:29.066 Test: dev_construct_success_lun_zero_not_first ...[2024-12-06 21:28:49.471231] /home/vagrant/spdk_repo/spdk/lib/scsi/dev.c: 228:spdk_scsi_dev_construct_ext: *ERROR*: device Name: no LUNs specified 00:06:29.066 [2024-12-06 21:28:49.471543] /home/vagrant/spdk_repo/spdk/lib/scsi/dev.c: 241:spdk_scsi_dev_construct_ext: *ERROR*: device Name: no LUN 0 specified 00:06:29.066 [2024-12-06 21:28:49.471588] /home/vagrant/spdk_repo/spdk/lib/scsi/dev.c: 247:spdk_scsi_dev_construct_ext: *ERROR*: NULL spdk_scsi_lun for LUN 0 00:06:29.066 [2024-12-06 
21:28:49.471642] /home/vagrant/spdk_repo/spdk/lib/scsi/dev.c: 222:spdk_scsi_dev_construct_ext: *ERROR*: device xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx: name longer than maximum allowed length 255 00:06:29.066 passed 00:06:29.066 Test: dev_queue_mgmt_task_success ...passed 00:06:29.066 Test: dev_queue_task_success ...passed 00:06:29.066 Test: dev_stop_success ...passed 00:06:29.066 Test: dev_add_port_max_ports ...passed 00:06:29.066 Test: dev_add_port_construct_failure1 ...passed 00:06:29.066 Test: dev_add_port_construct_failure2 ...[2024-12-06 21:28:49.471965] /home/vagrant/spdk_repo/spdk/lib/scsi/dev.c: 315:spdk_scsi_dev_add_port: *ERROR*: device already has 4 ports 00:06:29.066 [2024-12-06 21:28:49.472012] /home/vagrant/spdk_repo/spdk/lib/scsi/port.c: 49:scsi_port_construct: *ERROR*: port name too long 00:06:29.066 passed 00:06:29.066 Test: dev_add_port_success1 ...passed 00:06:29.066 Test: dev_add_port_success2 ...passed 00:06:29.066 Test: dev_add_port_success3 ...passed 00:06:29.066 Test: dev_find_port_by_id_num_ports_zero ...passed 00:06:29.066 Test: dev_find_port_by_id_id_not_found_failure ...passed 00:06:29.066 Test: dev_find_port_by_id_success ...passed 00:06:29.066 Test: dev_add_lun_bdev_not_found ...passed 00:06:29.066 Test: dev_add_lun_no_free_lun_id ...[2024-12-06 21:28:49.472060] /home/vagrant/spdk_repo/spdk/lib/scsi/dev.c: 321:spdk_scsi_dev_add_port: *ERROR*: device already has port(1) 00:06:29.066 passed 00:06:29.066 Test: dev_add_lun_success1 ...passed 00:06:29.066 Test: dev_add_lun_success2 ...passed 00:06:29.066 Test: dev_check_pending_tasks ...passed 00:06:29.066 Test: dev_iterate_luns ...[2024-12-06 21:28:49.472522] /home/vagrant/spdk_repo/spdk/lib/scsi/dev.c: 159:spdk_scsi_dev_add_lun_ext: *ERROR*: Free LUN ID is not found 00:06:29.066 passed 00:06:29.066 Test: dev_find_free_lun ...passed 00:06:29.066 00:06:29.066 Run Summary: Type Total Ran Passed Failed Inactive 00:06:29.066 suites 1 1 n/a 0 0 00:06:29.066 tests 29 29 29 0 0 00:06:29.066 asserts 97 97 97 0 n/a 00:06:29.066 00:06:29.066 Elapsed time = 0.002 seconds 00:06:29.066 21:28:49 -- unit/unittest.sh@116 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/scsi/lun.c/lun_ut 00:06:29.066 00:06:29.066 00:06:29.066 CUnit - A unit testing framework for C - Version 2.1-3 00:06:29.066 http://cunit.sourceforge.net/ 00:06:29.066 00:06:29.066 00:06:29.066 Suite: lun_suite 00:06:29.066 Test: lun_task_mgmt_execute_abort_task_not_supported ...passed 00:06:29.066 Test: lun_task_mgmt_execute_abort_task_all_not_supported ...passed 00:06:29.066 Test: lun_task_mgmt_execute_lun_reset ...passed 00:06:29.066 Test: lun_task_mgmt_execute_target_reset ...[2024-12-06 21:28:49.508311] /home/vagrant/spdk_repo/spdk/lib/scsi/lun.c: 169:_scsi_lun_execute_mgmt_task: *ERROR*: abort task not supported 00:06:29.066 [2024-12-06 21:28:49.508622] /home/vagrant/spdk_repo/spdk/lib/scsi/lun.c: 169:_scsi_lun_execute_mgmt_task: *ERROR*: abort task set not supported 00:06:29.066 passed 00:06:29.066 Test: lun_task_mgmt_execute_invalid_case ...passed 00:06:29.066 Test: lun_append_task_null_lun_task_cdb_spc_inquiry ...passed 00:06:29.066 Test: lun_append_task_null_lun_alloc_len_lt_4096 ...passed 00:06:29.066 Test: lun_append_task_null_lun_not_supported ...passed 00:06:29.066 Test: lun_execute_scsi_task_pending ...passed 00:06:29.066 Test: 
lun_execute_scsi_task_complete ...passed 00:06:29.066 Test: lun_execute_scsi_task_resize ...passed 00:06:29.066 Test: lun_destruct_success ...passed 00:06:29.066 Test: lun_construct_null_ctx ...passed 00:06:29.066 Test: lun_construct_success ...passed 00:06:29.066 Test: lun_reset_task_wait_scsi_task_complete ...passed 00:06:29.066 Test: lun_reset_task_suspend_scsi_task ...passed 00:06:29.066 Test: lun_check_pending_tasks_only_for_specific_initiator ...[2024-12-06 21:28:49.508746] /home/vagrant/spdk_repo/spdk/lib/scsi/lun.c: 169:_scsi_lun_execute_mgmt_task: *ERROR*: unknown task not supported 00:06:29.066 [2024-12-06 21:28:49.508978] /home/vagrant/spdk_repo/spdk/lib/scsi/lun.c: 432:scsi_lun_construct: *ERROR*: bdev_name must be non-NULL 00:06:29.066 passed 00:06:29.066 Test: abort_pending_mgmt_tasks_when_lun_is_removed ...passed 00:06:29.066 00:06:29.066 Run Summary: Type Total Ran Passed Failed Inactive 00:06:29.066 suites 1 1 n/a 0 0 00:06:29.066 tests 18 18 18 0 0 00:06:29.066 asserts 153 153 153 0 n/a 00:06:29.066 00:06:29.066 Elapsed time = 0.001 seconds 00:06:29.066 21:28:49 -- unit/unittest.sh@117 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/scsi/scsi.c/scsi_ut 00:06:29.066 00:06:29.066 00:06:29.066 CUnit - A unit testing framework for C - Version 2.1-3 00:06:29.066 http://cunit.sourceforge.net/ 00:06:29.066 00:06:29.066 00:06:29.066 Suite: scsi_suite 00:06:29.066 Test: scsi_init ...passed 00:06:29.066 00:06:29.066 Run Summary: Type Total Ran Passed Failed Inactive 00:06:29.066 suites 1 1 n/a 0 0 00:06:29.066 tests 1 1 1 0 0 00:06:29.066 asserts 1 1 1 0 n/a 00:06:29.066 00:06:29.066 Elapsed time = 0.000 seconds 00:06:29.067 21:28:49 -- unit/unittest.sh@118 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/scsi/scsi_bdev.c/scsi_bdev_ut 00:06:29.325 00:06:29.325 00:06:29.325 CUnit - A unit testing framework for C - Version 2.1-3 00:06:29.325 http://cunit.sourceforge.net/ 00:06:29.325 00:06:29.325 00:06:29.325 Suite: translation_suite 00:06:29.325 Test: mode_select_6_test ...passed 00:06:29.325 Test: mode_select_6_test2 ...passed 00:06:29.325 Test: mode_sense_6_test ...passed 00:06:29.325 Test: mode_sense_10_test ...passed 00:06:29.325 Test: inquiry_evpd_test ...passed 00:06:29.325 Test: inquiry_standard_test ...passed 00:06:29.325 Test: inquiry_overflow_test ...passed 00:06:29.325 Test: task_complete_test ...passed 00:06:29.325 Test: lba_range_test ...passed 00:06:29.325 Test: xfer_len_test ...[2024-12-06 21:28:49.568474] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_bdev.c:1270:bdev_scsi_readwrite: *ERROR*: xfer_len 8193 > maximum transfer length 8192 00:06:29.325 passed 00:06:29.325 Test: xfer_test ...passed 00:06:29.325 Test: scsi_name_padding_test ...passed 00:06:29.325 Test: get_dif_ctx_test ...passed 00:06:29.325 Test: unmap_split_test ...passed 00:06:29.325 00:06:29.325 Run Summary: Type Total Ran Passed Failed Inactive 00:06:29.325 suites 1 1 n/a 0 0 00:06:29.325 tests 14 14 14 0 0 00:06:29.325 asserts 1200 1200 1200 0 n/a 00:06:29.325 00:06:29.325 Elapsed time = 0.005 seconds 00:06:29.325 21:28:49 -- unit/unittest.sh@119 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/scsi/scsi_pr.c/scsi_pr_ut 00:06:29.325 00:06:29.325 00:06:29.325 CUnit - A unit testing framework for C - Version 2.1-3 00:06:29.325 http://cunit.sourceforge.net/ 00:06:29.325 00:06:29.325 00:06:29.325 Suite: reservation_suite 00:06:29.325 Test: test_reservation_register ...[2024-12-06 21:28:49.596246] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 272:scsi_pr_out_register: *ERROR*: Reservation key 0xa1 don't 
match registrant's key 0xa 00:06:29.325 passed 00:06:29.325 Test: test_reservation_reserve ...[2024-12-06 21:28:49.596605] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 272:scsi_pr_out_register: *ERROR*: Reservation key 0xa1 don't match registrant's key 0xa 00:06:29.325 [2024-12-06 21:28:49.596676] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 209:scsi_pr_out_reserve: *ERROR*: Only 1 holder is allowed for type 1 00:06:29.325 [2024-12-06 21:28:49.596731] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 204:scsi_pr_out_reserve: *ERROR*: Reservation type doesn't match 00:06:29.325 passed 00:06:29.325 Test: test_reservation_preempt_non_all_regs ...[2024-12-06 21:28:49.596860] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 272:scsi_pr_out_register: *ERROR*: Reservation key 0xa1 don't match registrant's key 0xa 00:06:29.325 [2024-12-06 21:28:49.596923] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 458:scsi_pr_out_preempt: *ERROR*: Zeroed sa_rkey 00:06:29.325 passed 00:06:29.325 Test: test_reservation_preempt_all_regs ...[2024-12-06 21:28:49.597000] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 272:scsi_pr_out_register: *ERROR*: Reservation key 0xa1 don't match registrant's key 0xa 00:06:29.325 passed 00:06:29.325 Test: test_reservation_cmds_conflict ...[2024-12-06 21:28:49.597113] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 272:scsi_pr_out_register: *ERROR*: Reservation key 0xa1 don't match registrant's key 0xa 00:06:29.325 [2024-12-06 21:28:49.597180] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 851:scsi_pr_check: *ERROR*: CHECK: Registrants only reservation type reject command 0x2a 00:06:29.325 [2024-12-06 21:28:49.597226] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 845:scsi_pr_check: *ERROR*: CHECK: Exclusive Access reservation type rejects command 0x28 00:06:29.325 passed 00:06:29.325 Test: test_scsi2_reserve_release ...passed 00:06:29.325 Test: test_pr_with_scsi2_reserve_release ...[2024-12-06 21:28:49.597265] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 845:scsi_pr_check: *ERROR*: CHECK: Exclusive Access reservation type rejects command 0x2a 00:06:29.325 [2024-12-06 21:28:49.597305] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 845:scsi_pr_check: *ERROR*: CHECK: Exclusive Access reservation type rejects command 0x28 00:06:29.325 [2024-12-06 21:28:49.597343] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 845:scsi_pr_check: *ERROR*: CHECK: Exclusive Access reservation type rejects command 0x2a 00:06:29.325 passed 00:06:29.325 00:06:29.325 Run Summary: Type Total Ran Passed Failed Inactive 00:06:29.325 suites 1 1 n/a 0 0 00:06:29.325 tests 7 7 7 0 0 00:06:29.325 asserts 257 257 257 0 n/a 00:06:29.325 00:06:29.325 Elapsed time = 0.001 seconds 00:06:29.325 [2024-12-06 21:28:49.597415] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 272:scsi_pr_out_register: *ERROR*: Reservation key 0xa1 don't match registrant's key 0xa 00:06:29.325 00:06:29.325 real 0m0.159s 00:06:29.325 user 0m0.078s 00:06:29.325 sys 0m0.084s 00:06:29.325 21:28:49 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:29.325 21:28:49 -- common/autotest_common.sh@10 -- # set +x 00:06:29.325 ************************************ 00:06:29.325 END TEST unittest_scsi 00:06:29.325 ************************************ 00:06:29.325 21:28:49 -- unit/unittest.sh@252 -- # uname -s 00:06:29.325 21:28:49 -- unit/unittest.sh@252 -- # '[' Linux = Linux ']' 00:06:29.325 21:28:49 -- unit/unittest.sh@253 -- # run_test unittest_sock unittest_sock 00:06:29.325 21:28:49 -- common/autotest_common.sh@1087 --
# '[' 2 -le 1 ']' 00:06:29.325 21:28:49 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:29.325 21:28:49 -- common/autotest_common.sh@10 -- # set +x 00:06:29.325 ************************************ 00:06:29.325 START TEST unittest_sock 00:06:29.325 ************************************ 00:06:29.325 21:28:49 -- common/autotest_common.sh@1114 -- # unittest_sock 00:06:29.325 21:28:49 -- unit/unittest.sh@123 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/sock/sock.c/sock_ut 00:06:29.325 00:06:29.325 00:06:29.325 CUnit - A unit testing framework for C - Version 2.1-3 00:06:29.325 http://cunit.sourceforge.net/ 00:06:29.325 00:06:29.325 00:06:29.325 Suite: sock 00:06:29.325 Test: posix_sock ...passed 00:06:29.325 Test: ut_sock ...passed 00:06:29.325 Test: posix_sock_group ...passed 00:06:29.325 Test: ut_sock_group ...passed 00:06:29.325 Test: posix_sock_group_fairness ...passed 00:06:29.325 Test: _posix_sock_close ...passed 00:06:29.325 Test: sock_get_default_opts ...passed 00:06:29.325 Test: ut_sock_impl_get_set_opts ...passed 00:06:29.325 Test: posix_sock_impl_get_set_opts ...passed 00:06:29.325 Test: ut_sock_map ...passed 00:06:29.325 Test: override_impl_opts ...passed 00:06:29.325 Test: ut_sock_group_get_ctx ...passed 00:06:29.325 00:06:29.325 Run Summary: Type Total Ran Passed Failed Inactive 00:06:29.325 suites 1 1 n/a 0 0 00:06:29.325 tests 12 12 12 0 0 00:06:29.325 asserts 349 349 349 0 n/a 00:06:29.325 00:06:29.325 Elapsed time = 0.009 seconds 00:06:29.325 21:28:49 -- unit/unittest.sh@124 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/sock/posix.c/posix_ut 00:06:29.325 00:06:29.325 00:06:29.325 CUnit - A unit testing framework for C - Version 2.1-3 00:06:29.325 http://cunit.sourceforge.net/ 00:06:29.325 00:06:29.325 00:06:29.325 Suite: posix 00:06:29.325 Test: flush ...passed 00:06:29.325 00:06:29.325 Run Summary: Type Total Ran Passed Failed Inactive 00:06:29.325 suites 1 1 n/a 0 0 00:06:29.325 tests 1 1 1 0 0 00:06:29.325 asserts 28 28 28 0 n/a 00:06:29.325 00:06:29.325 Elapsed time = 0.000 seconds 00:06:29.325 21:28:49 -- unit/unittest.sh@126 -- # grep -q '#define SPDK_CONFIG_URING 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:06:29.325 00:06:29.325 real 0m0.089s 00:06:29.325 user 0m0.029s 00:06:29.325 sys 0m0.036s 00:06:29.325 21:28:49 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:29.325 21:28:49 -- common/autotest_common.sh@10 -- # set +x 00:06:29.325 ************************************ 00:06:29.325 END TEST unittest_sock 00:06:29.325 ************************************ 00:06:29.325 21:28:49 -- unit/unittest.sh@255 -- # run_test unittest_thread /home/vagrant/spdk_repo/spdk/test/unit/lib/thread/thread.c/thread_ut 00:06:29.325 21:28:49 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:29.325 21:28:49 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:29.325 21:28:49 -- common/autotest_common.sh@10 -- # set +x 00:06:29.325 ************************************ 00:06:29.325 START TEST unittest_thread 00:06:29.325 ************************************ 00:06:29.325 21:28:49 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/thread/thread.c/thread_ut 00:06:29.583 00:06:29.583 00:06:29.583 CUnit - A unit testing framework for C - Version 2.1-3 00:06:29.583 http://cunit.sourceforge.net/ 00:06:29.583 00:06:29.583 00:06:29.583 Suite: io_channel 00:06:29.583 Test: thread_alloc ...passed 00:06:29.583 Test: thread_send_msg ...passed 00:06:29.583 Test: thread_poller ...passed 00:06:29.583 Test: poller_pause 
...passed 00:06:29.583 Test: thread_for_each ...passed 00:06:29.583 Test: for_each_channel_remove ...passed 00:06:29.583 Test: for_each_channel_unreg ...[2024-12-06 21:28:49.852476] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:2165:spdk_io_device_register: *ERROR*: io_device 0x770ad0f09640 already registered (old:0x513000000200 new:0x5130000003c0) 00:06:29.583 passed 00:06:29.583 Test: thread_name ...passed 00:06:29.583 Test: channel ...[2024-12-06 21:28:49.857095] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:2299:spdk_get_io_channel: *ERROR*: could not find io_device 0x59a8e683e120 00:06:29.583 passed 00:06:29.583 Test: channel_destroy_races ...passed 00:06:29.583 Test: thread_exit_test ...[2024-12-06 21:28:49.862877] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c: 631:thread_exit: *ERROR*: thread 0x518000005c80 got timeout, and move it to the exited state forcefully 00:06:29.583 passed 00:06:29.583 Test: thread_update_stats_test ...passed 00:06:29.583 Test: nested_channel ...passed 00:06:29.583 Test: device_unregister_and_thread_exit_race ...passed 00:06:29.584 Test: cache_closest_timed_poller ...passed 00:06:29.584 Test: multi_timed_pollers_have_same_expiration ...passed 00:06:29.584 Test: io_device_lookup ...passed 00:06:29.584 Test: spdk_spin ...[2024-12-06 21:28:49.875201] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3063:spdk_spin_lock: *ERROR*: unrecoverable spinlock error 1: Not an SPDK thread (thread != ((void *)0)) 00:06:29.584 [2024-12-06 21:28:49.875258] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3019:sspin_stacks_print: *ERROR*: spinlock 0x770ad0f0a020 00:06:29.584 [2024-12-06 21:28:49.875307] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3101:spdk_spin_held: *ERROR*: unrecoverable spinlock error 1: Not an SPDK thread (thread != ((void *)0)) 00:06:29.584 [2024-12-06 21:28:49.877268] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3064:spdk_spin_lock: *ERROR*: unrecoverable spinlock error 2: Deadlock detected (thread != sspin->thread) 00:06:29.584 [2024-12-06 21:28:49.877318] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3019:sspin_stacks_print: *ERROR*: spinlock 0x770ad0f0a020 00:06:29.584 [2024-12-06 21:28:49.877360] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3084:spdk_spin_unlock: *ERROR*: unrecoverable spinlock error 3: Unlock on wrong SPDK thread (thread == sspin->thread) 00:06:29.584 [2024-12-06 21:28:49.877389] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3019:sspin_stacks_print: *ERROR*: spinlock 0x770ad0f0a020 00:06:29.584 [2024-12-06 21:28:49.877418] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3084:spdk_spin_unlock: *ERROR*: unrecoverable spinlock error 3: Unlock on wrong SPDK thread (thread == sspin->thread) 00:06:29.584 [2024-12-06 21:28:49.877464] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3019:sspin_stacks_print: *ERROR*: spinlock 0x770ad0f0a020 00:06:29.584 [2024-12-06 21:28:49.877496] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3045:spdk_spin_destroy: *ERROR*: unrecoverable spinlock error 5: Destroying a held spinlock (sspin->thread == ((void *)0)) 00:06:29.584 [2024-12-06 21:28:49.877529] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3019:sspin_stacks_print: *ERROR*: spinlock 0x770ad0f0a020 00:06:29.584 passed 00:06:29.584 Test: for_each_channel_and_thread_exit_race ...passed 00:06:29.584 Test: for_each_thread_and_thread_exit_race ...passed 00:06:29.584 00:06:29.584 Run Summary: Type Total Ran Passed Failed Inactive 00:06:29.584 suites 1 1 n/a 0 0 00:06:29.584 tests 20 20 20 0 0 00:06:29.584 asserts 409 
409 409 0 n/a 00:06:29.584 00:06:29.584 Elapsed time = 0.058 seconds 00:06:29.584 00:06:29.584 real 0m0.096s 00:06:29.584 user 0m0.065s 00:06:29.584 sys 0m0.031s 00:06:29.584 21:28:49 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:29.584 21:28:49 -- common/autotest_common.sh@10 -- # set +x 00:06:29.584 ************************************ 00:06:29.584 END TEST unittest_thread 00:06:29.584 ************************************ 00:06:29.584 21:28:49 -- unit/unittest.sh@256 -- # run_test unittest_iobuf /home/vagrant/spdk_repo/spdk/test/unit/lib/thread/iobuf.c/iobuf_ut 00:06:29.584 21:28:49 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:29.584 21:28:49 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:29.584 21:28:49 -- common/autotest_common.sh@10 -- # set +x 00:06:29.584 ************************************ 00:06:29.584 START TEST unittest_iobuf 00:06:29.584 ************************************ 00:06:29.584 21:28:49 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/thread/iobuf.c/iobuf_ut 00:06:29.584 00:06:29.584 00:06:29.584 CUnit - A unit testing framework for C - Version 2.1-3 00:06:29.584 http://cunit.sourceforge.net/ 00:06:29.584 00:06:29.584 00:06:29.584 Suite: io_channel 00:06:29.584 Test: iobuf ...passed 00:06:29.584 Test: iobuf_cache ...[2024-12-06 21:28:49.985498] /home/vagrant/spdk_repo/spdk/lib/thread/iobuf.c: 302:spdk_iobuf_channel_init: *ERROR*: Failed to populate iobuf small buffer cache. You may need to increase spdk_iobuf_opts.small_pool_count (4) 00:06:29.584 [2024-12-06 21:28:49.985710] /home/vagrant/spdk_repo/spdk/lib/thread/iobuf.c: 305:spdk_iobuf_channel_init: *ERROR*: See scripts/calc-iobuf.py for guidance on how to calculate this value. 00:06:29.584 [2024-12-06 21:28:49.985817] /home/vagrant/spdk_repo/spdk/lib/thread/iobuf.c: 314:spdk_iobuf_channel_init: *ERROR*: Failed to populate iobuf large buffer cache. You may need to increase spdk_iobuf_opts.large_pool_count (4) 00:06:29.584 [2024-12-06 21:28:49.985857] /home/vagrant/spdk_repo/spdk/lib/thread/iobuf.c: 317:spdk_iobuf_channel_init: *ERROR*: See scripts/calc-iobuf.py for guidance on how to calculate this value. 00:06:29.584 [2024-12-06 21:28:49.985942] /home/vagrant/spdk_repo/spdk/lib/thread/iobuf.c: 302:spdk_iobuf_channel_init: *ERROR*: Failed to populate iobuf small buffer cache. You may need to increase spdk_iobuf_opts.small_pool_count (4) 00:06:29.584 [2024-12-06 21:28:49.985989] /home/vagrant/spdk_repo/spdk/lib/thread/iobuf.c: 305:spdk_iobuf_channel_init: *ERROR*: See scripts/calc-iobuf.py for guidance on how to calculate this value. 
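
The iobuf errors above are a sizing constraint rather than a bug: each channel tries to pre-populate its per-channel cache from a global pool, and with small_pool_count/large_pool_count of only 4 the caches cannot all be filled. A toy model of the constraint the message points at (names are illustrative; the real accounting in iobuf.c is more involved):

    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    /* Toy model: can every channel fill its cache from the shared pool? */
    static bool iobuf_cache_fits(uint64_t pool_count, uint64_t cache_size,
                                 uint64_t num_channels)
    {
        return cache_size * num_channels <= pool_count;
    }

    int main(void)
    {
        /* With a pool of 4 buffers, two channels caching 4 buffers each
         * cannot be populated -- the situation these tests exercise. */
        printf("fits: %d\n", iobuf_cache_fits(4, 4, 2));   /* 0: increase pool */
        printf("fits: %d\n", iobuf_cache_fits(128, 4, 2)); /* 1: ok */
        return 0;
    }
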
00:06:29.584 passed 00:06:29.584 00:06:29.584 Run Summary: Type Total Ran Passed Failed Inactive 00:06:29.584 suites 1 1 n/a 0 0 00:06:29.584 tests 2 2 2 0 0 00:06:29.584 asserts 107 107 107 0 n/a 00:06:29.584 00:06:29.584 Elapsed time = 0.007 seconds 00:06:29.584 00:06:29.584 real 0m0.044s 00:06:29.584 user 0m0.025s 00:06:29.584 sys 0m0.020s 00:06:29.584 21:28:50 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:29.584 ************************************ 00:06:29.584 21:28:50 -- common/autotest_common.sh@10 -- # set +x 00:06:29.584 END TEST unittest_iobuf 00:06:29.584 ************************************ 00:06:29.584 21:28:50 -- unit/unittest.sh@257 -- # run_test unittest_util unittest_util 00:06:29.584 21:28:50 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:29.584 21:28:50 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:29.584 21:28:50 -- common/autotest_common.sh@10 -- # set +x 00:06:29.584 ************************************ 00:06:29.584 START TEST unittest_util 00:06:29.584 ************************************ 00:06:29.584 21:28:50 -- common/autotest_common.sh@1114 -- # unittest_util 00:06:29.584 21:28:50 -- unit/unittest.sh@132 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/base64.c/base64_ut 00:06:29.584 00:06:29.584 00:06:29.584 CUnit - A unit testing framework for C - Version 2.1-3 00:06:29.584 http://cunit.sourceforge.net/ 00:06:29.584 00:06:29.584 00:06:29.584 Suite: base64 00:06:29.584 Test: test_base64_get_encoded_strlen ...passed 00:06:29.584 Test: test_base64_get_decoded_len ...passed 00:06:29.584 Test: test_base64_encode ...passed 00:06:29.584 Test: test_base64_decode ...passed 00:06:29.584 Test: test_base64_urlsafe_encode ...passed 00:06:29.584 Test: test_base64_urlsafe_decode ...passed 00:06:29.584 00:06:29.584 Run Summary: Type Total Ran Passed Failed Inactive 00:06:29.584 suites 1 1 n/a 0 0 00:06:29.584 tests 6 6 6 0 0 00:06:29.584 asserts 112 112 112 0 n/a 00:06:29.584 00:06:29.584 Elapsed time = 0.000 seconds 00:06:29.843 21:28:50 -- unit/unittest.sh@133 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/bit_array.c/bit_array_ut 00:06:29.843 00:06:29.843 00:06:29.843 CUnit - A unit testing framework for C - Version 2.1-3 00:06:29.843 http://cunit.sourceforge.net/ 00:06:29.843 00:06:29.843 00:06:29.843 Suite: bit_array 00:06:29.843 Test: test_1bit ...passed 00:06:29.843 Test: test_64bit ...passed 00:06:29.843 Test: test_find ...passed 00:06:29.843 Test: test_resize ...passed 00:06:29.843 Test: test_errors ...passed 00:06:29.843 Test: test_count ...passed 00:06:29.843 Test: test_mask_store_load ...passed 00:06:29.843 Test: test_mask_clear ...passed 00:06:29.843 00:06:29.843 Run Summary: Type Total Ran Passed Failed Inactive 00:06:29.843 suites 1 1 n/a 0 0 00:06:29.843 tests 8 8 8 0 0 00:06:29.843 asserts 5075 5075 5075 0 n/a 00:06:29.843 00:06:29.843 Elapsed time = 0.002 seconds 00:06:29.843 21:28:50 -- unit/unittest.sh@134 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/cpuset.c/cpuset_ut 00:06:29.843 00:06:29.843 00:06:29.843 CUnit - A unit testing framework for C - Version 2.1-3 00:06:29.843 http://cunit.sourceforge.net/ 00:06:29.843 00:06:29.843 00:06:29.843 Suite: cpuset 00:06:29.843 Test: test_cpuset ...passed 00:06:29.843 Test: test_cpuset_parse ...[2024-12-06 21:28:50.136106] /home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 239:parse_list: *ERROR*: Unexpected end of core list '[' 00:06:29.843 [2024-12-06 21:28:50.136330] /home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 241:parse_list: *ERROR*: Parsing of core list 
'[]' failed on character ']' 00:06:29.843 [2024-12-06 21:28:50.136373] /home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 241:parse_list: *ERROR*: Parsing of core list '[10--11]' failed on character '-' 00:06:29.843 [2024-12-06 21:28:50.136415] /home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 219:parse_list: *ERROR*: Invalid range of CPUs (11 > 10) 00:06:29.843 [2024-12-06 21:28:50.136769] /home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 241:parse_list: *ERROR*: Parsing of core list '[10-11,]' failed on character ',' 00:06:29.843 [2024-12-06 21:28:50.136814] /home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 241:parse_list: *ERROR*: Parsing of core list '[,10-11]' failed on character ',' 00:06:29.843 [2024-12-06 21:28:50.136844] /home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 203:parse_list: *ERROR*: Core number 1025 is out of range in '[1025]' 00:06:29.843 [2024-12-06 21:28:50.136883] /home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 198:parse_list: *ERROR*: Conversion of core mask in '[184467440737095516150]' failed 00:06:29.843 passed 00:06:29.843 Test: test_cpuset_fmt ...passed 00:06:29.843 00:06:29.843 Run Summary: Type Total Ran Passed Failed Inactive 00:06:29.843 suites 1 1 n/a 0 0 00:06:29.843 tests 3 3 3 0 0 00:06:29.843 asserts 65 65 65 0 n/a 00:06:29.843 00:06:29.843 Elapsed time = 0.004 seconds 00:06:29.843 21:28:50 -- unit/unittest.sh@135 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/crc16.c/crc16_ut 00:06:29.843 00:06:29.843 00:06:29.843 CUnit - A unit testing framework for C - Version 2.1-3 00:06:29.843 http://cunit.sourceforge.net/ 00:06:29.843 00:06:29.843 00:06:29.843 Suite: crc16 00:06:29.843 Test: test_crc16_t10dif ...passed 00:06:29.843 Test: test_crc16_t10dif_seed ...passed 00:06:29.843 Test: test_crc16_t10dif_copy ...passed 00:06:29.843 00:06:29.843 Run Summary: Type Total Ran Passed Failed Inactive 00:06:29.843 suites 1 1 n/a 0 0 00:06:29.843 tests 3 3 3 0 0 00:06:29.843 asserts 5 5 5 0 n/a 00:06:29.843 00:06:29.843 Elapsed time = 0.000 seconds 00:06:29.843 21:28:50 -- unit/unittest.sh@136 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/crc32_ieee.c/crc32_ieee_ut 00:06:29.843 00:06:29.843 00:06:29.843 CUnit - A unit testing framework for C - Version 2.1-3 00:06:29.843 http://cunit.sourceforge.net/ 00:06:29.843 00:06:29.843 00:06:29.843 Suite: crc32_ieee 00:06:29.843 Test: test_crc32_ieee ...passed 00:06:29.843 00:06:29.843 Run Summary: Type Total Ran Passed Failed Inactive 00:06:29.843 suites 1 1 n/a 0 0 00:06:29.843 tests 1 1 1 0 0 00:06:29.843 asserts 1 1 1 0 n/a 00:06:29.843 00:06:29.843 Elapsed time = 0.000 seconds 00:06:29.843 21:28:50 -- unit/unittest.sh@137 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/crc32c.c/crc32c_ut 00:06:29.843 00:06:29.843 00:06:29.843 CUnit - A unit testing framework for C - Version 2.1-3 00:06:29.843 http://cunit.sourceforge.net/ 00:06:29.843 00:06:29.843 00:06:29.843 Suite: crc32c 00:06:29.843 Test: test_crc32c ...passed 00:06:29.843 Test: test_crc32c_nvme ...passed 00:06:29.843 00:06:29.843 Run Summary: Type Total Ran Passed Failed Inactive 00:06:29.843 suites 1 1 n/a 0 0 00:06:29.843 tests 2 2 2 0 0 00:06:29.843 asserts 16 16 16 0 n/a 00:06:29.843 00:06:29.843 Elapsed time = 0.000 seconds 00:06:29.843 21:28:50 -- unit/unittest.sh@138 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/crc64.c/crc64_ut 00:06:29.843 00:06:29.843 00:06:29.843 CUnit - A unit testing framework for C - Version 2.1-3 00:06:29.843 http://cunit.sourceforge.net/ 00:06:29.843 00:06:29.843 00:06:29.843 Suite: crc64 00:06:29.843 Test: test_crc64_nvme 
...passed 00:06:29.843 00:06:29.843 Run Summary: Type Total Ran Passed Failed Inactive 00:06:29.843 suites 1 1 n/a 0 0 00:06:29.843 tests 1 1 1 0 0 00:06:29.843 asserts 4 4 4 0 n/a 00:06:29.843 00:06:29.843 Elapsed time = 0.001 seconds 00:06:29.843 21:28:50 -- unit/unittest.sh@139 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/string.c/string_ut 00:06:29.843 00:06:29.843 00:06:29.843 CUnit - A unit testing framework for C - Version 2.1-3 00:06:29.844 http://cunit.sourceforge.net/ 00:06:29.844 00:06:29.844 00:06:29.844 Suite: string 00:06:29.844 Test: test_parse_ip_addr ...passed 00:06:29.844 Test: test_str_chomp ...passed 00:06:29.844 Test: test_parse_capacity ...passed 00:06:29.844 Test: test_sprintf_append_realloc ...passed 00:06:29.844 Test: test_strtol ...passed 00:06:29.844 Test: test_strtoll ...passed 00:06:29.844 Test: test_strarray ...passed 00:06:29.844 Test: test_strcpy_replace ...passed 00:06:29.844 00:06:29.844 Run Summary: Type Total Ran Passed Failed Inactive 00:06:29.844 suites 1 1 n/a 0 0 00:06:29.844 tests 8 8 8 0 0 00:06:29.844 asserts 161 161 161 0 n/a 00:06:29.844 00:06:29.844 Elapsed time = 0.001 seconds 00:06:29.844 21:28:50 -- unit/unittest.sh@140 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/dif.c/dif_ut 00:06:29.844 00:06:29.844 00:06:29.844 CUnit - A unit testing framework for C - Version 2.1-3 00:06:29.844 http://cunit.sourceforge.net/ 00:06:29.844 00:06:29.844 00:06:29.844 Suite: dif 00:06:29.844 Test: dif_generate_and_verify_test ...[2024-12-06 21:28:50.305538] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=23, Expected=17, Actual=16 00:06:29.844 [2024-12-06 21:28:50.305915] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=23, Expected=17, Actual=16 00:06:29.844 [2024-12-06 21:28:50.306207] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=23, Expected=17, Actual=16 00:06:29.844 [2024-12-06 21:28:50.306511] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=22, Expected=23, Actual=22 00:06:29.844 [2024-12-06 21:28:50.306794] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=22, Expected=23, Actual=22 00:06:29.844 [2024-12-06 21:28:50.307078] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=22, Expected=23, Actual=22 00:06:29.844 passed 00:06:29.844 Test: dif_disable_check_test ...[2024-12-06 21:28:50.308115] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=22, Expected=22, Actual=ffff 00:06:29.844 [2024-12-06 21:28:50.308407] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=22, Expected=22, Actual=ffff 00:06:29.844 [2024-12-06 21:28:50.308704] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=22, Expected=22, Actual=ffff 00:06:29.844 passed 00:06:29.844 Test: dif_generate_and_verify_different_pi_formats_test ...[2024-12-06 21:28:50.309762] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=12, Expected=b0a80000, Actual=b9848de 00:06:29.844 [2024-12-06 21:28:50.310095] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=12, Expected=b98, Actual=b0a8 00:06:29.844 [2024-12-06 
21:28:50.310401] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=12, Expected=b0a8000000000000, Actual=81039fcf5685d8d4 00:06:29.844 [2024-12-06 21:28:50.310767] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=12, Expected=b9848de00000000, Actual=81039fcf5685d8d4 00:06:29.844 [2024-12-06 21:28:50.311065] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=12, Expected=17, Actual=0 00:06:29.844 [2024-12-06 21:28:50.311371] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=12, Expected=17, Actual=0 00:06:29.844 [2024-12-06 21:28:50.311672] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=12, Expected=17, Actual=0 00:06:29.844 [2024-12-06 21:28:50.311965] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=12, Expected=17, Actual=0 00:06:29.844 [2024-12-06 21:28:50.312278] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=12, Expected=c, Actual=0 00:06:29.844 [2024-12-06 21:28:50.312634] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=12, Expected=c, Actual=0 00:06:29.844 [2024-12-06 21:28:50.312940] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=12, Expected=c, Actual=0 00:06:29.844 passed 00:06:29.844 Test: dif_apptag_mask_test ...[2024-12-06 21:28:50.313242] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=12, Expected=1256, Actual=1234 00:06:29.844 [2024-12-06 21:28:50.313545] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=12, Expected=1256, Actual=1234 00:06:29.844 passed 00:06:29.844 Test: dif_sec_512_md_0_error_test ...[2024-12-06 21:28:50.313740] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 479:spdk_dif_ctx_init: *ERROR*: Metadata size is smaller than DIF size. 00:06:29.844 passed 00:06:29.844 Test: dif_sec_4096_md_0_error_test ...passed 00:06:29.844 Test: dif_sec_4100_md_128_error_test ...[2024-12-06 21:28:50.313783] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 479:spdk_dif_ctx_init: *ERROR*: Metadata size is smaller than DIF size. 00:06:29.844 [2024-12-06 21:28:50.313810] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 479:spdk_dif_ctx_init: *ERROR*: Metadata size is smaller than DIF size. 
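
The spdk_dif_ctx_init rejections logged around this point are geometry checks: the metadata region must be large enough to hold the 8-byte T10 DIF tuple (2-byte guard CRC, 2-byte application tag, 4-byte reference tag), and the block size must be non-zero and, for these cases, a multiple of 4kB. A hedged sketch of those checks (helper name and exact policy are illustrative, not the dif.c internals):

    #include <errno.h>
    #include <stdint.h>
    #include <stdio.h>

    #define DIF_FIELD_SIZE 8u  /* guard(2) + app tag(2) + ref tag(4) */

    /* Hypothetical mirror of the geometry checks behind the messages above. */
    static int dif_ctx_check(uint32_t block_size, uint32_t md_size)
    {
        if (md_size < DIF_FIELD_SIZE) {
            fprintf(stderr, "Metadata size is smaller than DIF size.\n");
            return -EINVAL;
        }
        if (block_size == 0 || block_size % 4096 != 0) {
            fprintf(stderr, "Zero block size is not allowed and should be a multiple of 4kB\n");
            return -EINVAL;
        }
        return 0;
    }

    int main(void)
    {
        dif_ctx_check(4096, 0);                 /* rejected: metadata too small */
        dif_ctx_check(0, 8);                    /* rejected: zero block size */
        return dif_ctx_check(4096, 8) ? 1 : 0;  /* accepted */
    }
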
00:06:29.844 [2024-12-06 21:28:50.313848] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 497:spdk_dif_ctx_init: *ERROR*: Zero block size is not allowed and should be a multiple of 4kB 00:06:29.844 [2024-12-06 21:28:50.313875] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 497:spdk_dif_ctx_init: *ERROR*: Zero block size is not allowed and should be a multiple of 4kB 00:06:29.844 passed 00:06:29.844 Test: dif_guard_seed_test ...passed 00:06:29.844 Test: dif_guard_value_test ...passed 00:06:29.844 Test: dif_disable_sec_512_md_8_single_iov_test ...passed 00:06:29.844 Test: dif_sec_512_md_8_prchk_0_single_iov_test ...passed 00:06:29.844 Test: dif_sec_4096_md_128_prchk_0_single_iov_test ...passed 00:06:29.844 Test: dif_sec_512_md_8_prchk_0_1_2_4_multi_iovs_test ...passed 00:06:29.844 Test: dif_sec_4096_md_128_prchk_0_1_2_4_multi_iovs_test ...passed 00:06:30.106 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_test ...passed 00:06:30.106 Test: dif_sec_512_md_8_prchk_7_multi_iovs_split_data_and_md_test ...passed 00:06:30.106 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_split_data_and_md_test ...passed 00:06:30.106 Test: dif_sec_512_md_8_prchk_7_multi_iovs_split_data_test ...passed 00:06:30.106 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_split_data_test ...passed 00:06:30.106 Test: dif_sec_512_md_8_prchk_7_multi_iovs_split_guard_test ...passed 00:06:30.106 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_split_guard_test ...passed 00:06:30.106 Test: dif_sec_512_md_8_prchk_7_multi_iovs_split_apptag_test ...passed 00:06:30.106 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_split_apptag_test ...passed 00:06:30.106 Test: dif_sec_512_md_8_prchk_7_multi_iovs_split_reftag_test ...passed 00:06:30.106 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_split_reftag_test ...passed 00:06:30.106 Test: dif_sec_512_md_8_prchk_7_multi_iovs_complex_splits_test ...passed 00:06:30.106 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_complex_splits_test ...passed 00:06:30.106 Test: dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_test ...[2024-12-06 21:28:50.358252] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=94, Expected=f94c, Actual=fd4c 00:06:30.106 [2024-12-06 21:28:50.360746] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=94, Expected=fa21, Actual=fe21 00:06:30.106 [2024-12-06 21:28:50.363203] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=94, Expected=88, Actual=488 00:06:30.106 [2024-12-06 21:28:50.365668] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=94, Expected=88, Actual=488 00:06:30.106 [2024-12-06 21:28:50.368110] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=94, Expected=5e, Actual=45e 00:06:30.106 [2024-12-06 21:28:50.370577] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=94, Expected=5e, Actual=45e 00:06:30.106 [2024-12-06 21:28:50.373014] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=94, Expected=fd4c, Actual=79b2 00:06:30.106 [2024-12-06 21:28:50.374977] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=94, Expected=fe21, Actual=f761 00:06:30.106 [2024-12-06 21:28:50.376962] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=94, Expected=1ab757ed, 
Actual=1ab753ed 00:06:30.106 [2024-12-06 21:28:50.379385] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=94, Expected=38574260, Actual=38574660 00:06:30.106 [2024-12-06 21:28:50.381843] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=94, Expected=88, Actual=488 00:06:30.106 [2024-12-06 21:28:50.384291] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=94, Expected=88, Actual=488 00:06:30.106 [2024-12-06 21:28:50.386732] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=94, Expected=5e, Actual=4000000005e 00:06:30.106 [2024-12-06 21:28:50.389208] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=94, Expected=5e, Actual=4000000005e 00:06:30.106 [2024-12-06 21:28:50.391644] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=94, Expected=1ab753ed, Actual=58ff1bc8 00:06:30.106 [2024-12-06 21:28:50.393611] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=94, Expected=38574660, Actual=1e3718ab 00:06:30.106 [2024-12-06 21:28:50.395565] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=94, Expected=a576a3728ecc20d3, Actual=a576a7728ecc20d3 00:06:30.106 [2024-12-06 21:28:50.398019] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=94, Expected=88010e2d4837a266, Actual=88010a2d4837a266 00:06:30.106 [2024-12-06 21:28:50.400476] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=94, Expected=88, Actual=488 00:06:30.106 [2024-12-06 21:28:50.402941] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=94, Expected=88, Actual=488 00:06:30.106 [2024-12-06 21:28:50.405376] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=94, Expected=5e, Actual=400005e 00:06:30.106 [2024-12-06 21:28:50.407842] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=94, Expected=5e, Actual=400005e 00:06:30.106 [2024-12-06 21:28:50.410286] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=94, Expected=a576a7728ecc20d3, Actual=8d473c6287da6b6 00:06:30.106 [2024-12-06 21:28:50.412247] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=94, Expected=88010a2d4837a266, Actual=3bd51b6163e8c363 00:06:30.106 passed 00:06:30.106 Test: dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_data_and_md_test ...[2024-12-06 21:28:50.413294] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=f94c, Actual=fd4c 00:06:30.106 [2024-12-06 21:28:50.413606] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fa21, Actual=fe21 00:06:30.106 [2024-12-06 21:28:50.413880] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=488 00:06:30.106 [2024-12-06 21:28:50.414175] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=488 00:06:30.106 
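
Every "Failed to compare Guard" line in this suite is a guard-tag mismatch: the verifier recomputes CRC-16/T10-DIF (polynomial 0x8BB7, initial value 0, no reflection) over the block's data and compares it with the guard stored in the DIF field. A self-contained sketch of that check, with a bit-at-a-time CRC for clarity (SPDK's real path is table-driven or hardware-accelerated, and the report format below only imitates the log):

    #include <inttypes.h>
    #include <stddef.h>
    #include <stdint.h>
    #include <stdio.h>

    /* CRC-16/T10-DIF: poly 0x8BB7, init 0, no reflection. */
    static uint16_t crc16_t10dif(const uint8_t *buf, size_t len)
    {
        uint16_t crc = 0;

        for (size_t i = 0; i < len; i++) {
            crc ^= (uint16_t)buf[i] << 8;
            for (int b = 0; b < 8; b++)
                crc = (crc & 0x8000) ? (uint16_t)((crc << 1) ^ 0x8BB7)
                                     : (uint16_t)(crc << 1);
        }
        return crc;
    }

    /* Verify one block's guard tag, reporting in the style of the log above. */
    static int verify_guard(const uint8_t *block, size_t len,
                            uint16_t expected, uint64_t lba)
    {
        uint16_t actual = crc16_t10dif(block, len);

        if (actual != expected) {
            fprintf(stderr, "Failed to compare Guard: LBA=%" PRIu64
                    ", Expected=%x, Actual=%x\n", lba, expected, actual);
            return -1;
        }
        return 0;
    }

    int main(void)
    {
        uint8_t block[512] = {0};
        uint16_t good = crc16_t10dif(block, sizeof(block));

        verify_guard(block, sizeof(block), good, 88);                    /* silent pass */
        return verify_guard(block, sizeof(block), good ^ 1, 88) ? 1 : 0; /* logs a mismatch */
    }
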
[2024-12-06 21:28:50.414474] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=458 00:06:30.106 [2024-12-06 21:28:50.414775] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=458 00:06:30.106 [2024-12-06 21:28:50.415079] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd4c, Actual=79b2 00:06:30.106 [2024-12-06 21:28:50.415289] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fe21, Actual=f761 00:06:30.106 [2024-12-06 21:28:50.415496] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab757ed, Actual=1ab753ed 00:06:30.106 [2024-12-06 21:28:50.415771] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38574260, Actual=38574660 00:06:30.106 [2024-12-06 21:28:50.416062] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=488 00:06:30.106 [2024-12-06 21:28:50.416368] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=488 00:06:30.106 [2024-12-06 21:28:50.416679] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=40000000058 00:06:30.106 [2024-12-06 21:28:50.416959] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=40000000058 00:06:30.106 [2024-12-06 21:28:50.417242] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab753ed, Actual=58ff1bc8 00:06:30.106 [2024-12-06 21:28:50.417471] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38574660, Actual=1e3718ab 00:06:30.106 [2024-12-06 21:28:50.417681] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a3728ecc20d3, Actual=a576a7728ecc20d3 00:06:30.106 [2024-12-06 21:28:50.417955] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88010e2d4837a266, Actual=88010a2d4837a266 00:06:30.107 [2024-12-06 21:28:50.418237] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=488 00:06:30.107 [2024-12-06 21:28:50.418520] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=488 00:06:30.107 [2024-12-06 21:28:50.418792] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=4000058 00:06:30.107 [2024-12-06 21:28:50.419137] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=4000058 00:06:30.107 [2024-12-06 21:28:50.419427] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728ecc20d3, Actual=8d473c6287da6b6 00:06:30.107 [2024-12-06 21:28:50.419644] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, 
Expected=88010a2d4837a266, Actual=3bd51b6163e8c363 00:06:30.107 passed 00:06:30.107 Test: dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_data_test ...[2024-12-06 21:28:50.419878] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=f94c, Actual=fd4c 00:06:30.107 [2024-12-06 21:28:50.420182] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fa21, Actual=fe21 00:06:30.107 [2024-12-06 21:28:50.420491] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=488 00:06:30.107 [2024-12-06 21:28:50.420783] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=488 00:06:30.107 [2024-12-06 21:28:50.421069] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=458 00:06:30.107 [2024-12-06 21:28:50.421348] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=458 00:06:30.107 [2024-12-06 21:28:50.421656] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd4c, Actual=79b2 00:06:30.107 [2024-12-06 21:28:50.421875] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fe21, Actual=f761 00:06:30.107 [2024-12-06 21:28:50.422082] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab757ed, Actual=1ab753ed 00:06:30.107 [2024-12-06 21:28:50.422374] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38574260, Actual=38574660 00:06:30.107 [2024-12-06 21:28:50.422676] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=488 00:06:30.107 [2024-12-06 21:28:50.422973] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=488 00:06:30.107 [2024-12-06 21:28:50.423260] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=40000000058 00:06:30.107 [2024-12-06 21:28:50.423574] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=40000000058 00:06:30.107 [2024-12-06 21:28:50.423857] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab753ed, Actual=58ff1bc8 00:06:30.107 [2024-12-06 21:28:50.424057] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38574660, Actual=1e3718ab 00:06:30.107 [2024-12-06 21:28:50.424273] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a3728ecc20d3, Actual=a576a7728ecc20d3 00:06:30.107 [2024-12-06 21:28:50.424582] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88010e2d4837a266, Actual=88010a2d4837a266 00:06:30.107 [2024-12-06 21:28:50.424887] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=488 00:06:30.107 [2024-12-06 
21:28:50.425180] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=488 00:06:30.107 [2024-12-06 21:28:50.425492] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=4000058 00:06:30.107 [2024-12-06 21:28:50.425777] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=4000058 00:06:30.107 [2024-12-06 21:28:50.426065] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728ecc20d3, Actual=8d473c6287da6b6 00:06:30.107 [2024-12-06 21:28:50.426271] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88010a2d4837a266, Actual=3bd51b6163e8c363 00:06:30.107 passed 00:06:30.107 Test: dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_guard_test ...[2024-12-06 21:28:50.426515] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=f94c, Actual=fd4c 00:06:30.107 [2024-12-06 21:28:50.426809] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fa21, Actual=fe21 00:06:30.107 [2024-12-06 21:28:50.427095] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=488 00:06:30.107 [2024-12-06 21:28:50.427397] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=488 00:06:30.107 [2024-12-06 21:28:50.427702] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=458 00:06:30.107 [2024-12-06 21:28:50.428006] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=458 00:06:30.107 [2024-12-06 21:28:50.428289] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd4c, Actual=79b2 00:06:30.107 [2024-12-06 21:28:50.428525] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fe21, Actual=f761 00:06:30.107 [2024-12-06 21:28:50.428735] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab757ed, Actual=1ab753ed 00:06:30.107 [2024-12-06 21:28:50.429034] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38574260, Actual=38574660 00:06:30.107 [2024-12-06 21:28:50.429330] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=488 00:06:30.107 [2024-12-06 21:28:50.429632] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=488 00:06:30.107 [2024-12-06 21:28:50.429927] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=40000000058 00:06:30.107 [2024-12-06 21:28:50.430213] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=40000000058 00:06:30.107 [2024-12-06 21:28:50.430510] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to 
compare Guard: LBA=88, Expected=1ab753ed, Actual=58ff1bc8 00:06:30.107 [2024-12-06 21:28:50.430723] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38574660, Actual=1e3718ab 00:06:30.107 [2024-12-06 21:28:50.430930] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a3728ecc20d3, Actual=a576a7728ecc20d3 00:06:30.107 [2024-12-06 21:28:50.431225] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88010e2d4837a266, Actual=88010a2d4837a266 00:06:30.107 [2024-12-06 21:28:50.431514] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=488 00:06:30.107 [2024-12-06 21:28:50.431804] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=488 00:06:30.107 [2024-12-06 21:28:50.432100] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=4000058 00:06:30.107 [2024-12-06 21:28:50.432402] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=4000058 00:06:30.107 [2024-12-06 21:28:50.432699] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728ecc20d3, Actual=8d473c6287da6b6 00:06:30.107 [2024-12-06 21:28:50.432894] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88010a2d4837a266, Actual=3bd51b6163e8c363 00:06:30.107 passed 00:06:30.107 Test: dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_apptag_pi_16_test ...[2024-12-06 21:28:50.433122] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=f94c, Actual=fd4c 00:06:30.107 [2024-12-06 21:28:50.433414] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fa21, Actual=fe21 00:06:30.107 [2024-12-06 21:28:50.433732] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=488 00:06:30.107 [2024-12-06 21:28:50.434033] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=488 00:06:30.107 [2024-12-06 21:28:50.434325] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=458 00:06:30.107 [2024-12-06 21:28:50.434631] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=458 00:06:30.107 [2024-12-06 21:28:50.434924] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd4c, Actual=79b2 00:06:30.107 [2024-12-06 21:28:50.435145] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fe21, Actual=f761 00:06:30.107 passed 00:06:30.107 Test: dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_apptag_test ...[2024-12-06 21:28:50.435385] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab757ed, Actual=1ab753ed 00:06:30.107 [2024-12-06 21:28:50.435682] 
/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38574260, Actual=38574660 00:06:30.107 [2024-12-06 21:28:50.435975] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=488 00:06:30.107 [2024-12-06 21:28:50.436277] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=488 00:06:30.107 [2024-12-06 21:28:50.436586] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=40000000058 00:06:30.107 [2024-12-06 21:28:50.436891] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=40000000058 00:06:30.107 [2024-12-06 21:28:50.437167] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab753ed, Actual=58ff1bc8 00:06:30.107 [2024-12-06 21:28:50.437372] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38574660, Actual=1e3718ab 00:06:30.107 [2024-12-06 21:28:50.437610] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a3728ecc20d3, Actual=a576a7728ecc20d3 00:06:30.107 [2024-12-06 21:28:50.437907] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88010e2d4837a266, Actual=88010a2d4837a266 00:06:30.107 [2024-12-06 21:28:50.438205] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=488 00:06:30.107 [2024-12-06 21:28:50.438517] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=488 00:06:30.107 [2024-12-06 21:28:50.438810] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=4000058 00:06:30.107 [2024-12-06 21:28:50.439109] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=4000058 00:06:30.108 [2024-12-06 21:28:50.439385] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728ecc20d3, Actual=8d473c6287da6b6 00:06:30.108 [2024-12-06 21:28:50.439609] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88010a2d4837a266, Actual=3bd51b6163e8c363 00:06:30.108 passed 00:06:30.108 Test: dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_reftag_pi_16_test ...[2024-12-06 21:28:50.439844] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=f94c, Actual=fd4c 00:06:30.108 [2024-12-06 21:28:50.440149] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fa21, Actual=fe21 00:06:30.108 [2024-12-06 21:28:50.440426] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=488 00:06:30.108 [2024-12-06 21:28:50.440726] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=488 00:06:30.108 [2024-12-06 21:28:50.441018] 
/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=458 00:06:30.108 [2024-12-06 21:28:50.441317] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=458 00:06:30.108 [2024-12-06 21:28:50.441611] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd4c, Actual=79b2 00:06:30.108 [2024-12-06 21:28:50.441818] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fe21, Actual=f761 00:06:30.108 passed 00:06:30.108 Test: dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_reftag_test ...[2024-12-06 21:28:50.442061] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab757ed, Actual=1ab753ed 00:06:30.108 [2024-12-06 21:28:50.442356] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38574260, Actual=38574660 00:06:30.108 [2024-12-06 21:28:50.442664] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=488 00:06:30.108 [2024-12-06 21:28:50.442966] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=488 00:06:30.108 [2024-12-06 21:28:50.443253] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=40000000058 00:06:30.108 [2024-12-06 21:28:50.443562] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=40000000058 00:06:30.108 [2024-12-06 21:28:50.443858] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab753ed, Actual=58ff1bc8 00:06:30.108 [2024-12-06 21:28:50.444068] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38574660, Actual=1e3718ab 00:06:30.108 [2024-12-06 21:28:50.444315] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a3728ecc20d3, Actual=a576a7728ecc20d3 00:06:30.108 [2024-12-06 21:28:50.444634] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88010e2d4837a266, Actual=88010a2d4837a266 00:06:30.108 [2024-12-06 21:28:50.444940] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=488 00:06:30.108 [2024-12-06 21:28:50.445237] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=488 00:06:30.108 [2024-12-06 21:28:50.445544] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=4000058 00:06:30.108 [2024-12-06 21:28:50.445857] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=4000058 00:06:30.108 [2024-12-06 21:28:50.446133] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728ecc20d3, Actual=8d473c6287da6b6 00:06:30.108 [2024-12-06 21:28:50.446339] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 
777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88010a2d4837a266, Actual=3bd51b6163e8c363 00:06:30.108 passed 00:06:30.108 Test: dif_copy_sec_512_md_8_prchk_0_single_iov ...passed 00:06:30.108 Test: dif_copy_sec_4096_md_128_prchk_0_single_iov_test ...passed 00:06:30.108 Test: dif_copy_sec_512_md_8_prchk_0_1_2_4_multi_iovs ...passed 00:06:30.108 Test: dif_copy_sec_4096_md_128_prchk_0_1_2_4_multi_iovs_test ...passed 00:06:30.108 Test: dif_copy_sec_4096_md_128_prchk_7_multi_iovs ...passed 00:06:30.108 Test: dif_copy_sec_512_md_8_prchk_7_multi_iovs_split_data ...passed 00:06:30.108 Test: dif_copy_sec_4096_md_128_prchk_7_multi_iovs_split_data_test ...passed 00:06:30.108 Test: dif_copy_sec_512_md_8_prchk_7_multi_iovs_complex_splits ...passed 00:06:30.108 Test: dif_copy_sec_4096_md_128_prchk_7_multi_iovs_complex_splits_test ...passed 00:06:30.108 Test: dif_copy_sec_4096_md_128_inject_1_2_4_8_multi_iovs_test ...[2024-12-06 21:28:50.490378] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=94, Expected=f94c, Actual=fd4c 00:06:30.108 [2024-12-06 21:28:50.491556] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=94, Expected=5a9f, Actual=5e9f 00:06:30.108 [2024-12-06 21:28:50.492687] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=94, Expected=88, Actual=488 00:06:30.108 [2024-12-06 21:28:50.493838] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=94, Expected=88, Actual=488 00:06:30.108 [2024-12-06 21:28:50.494967] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=94, Expected=5e, Actual=45e 00:06:30.108 [2024-12-06 21:28:50.496147] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=94, Expected=5e, Actual=45e 00:06:30.108 [2024-12-06 21:28:50.497267] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=94, Expected=fd4c, Actual=79b2 00:06:30.108 [2024-12-06 21:28:50.498434] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=94, Expected=ba8c, Actual=b3cc 00:06:30.108 [2024-12-06 21:28:50.499635] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=94, Expected=1ab757ed, Actual=1ab753ed 00:06:30.108 [2024-12-06 21:28:50.500781] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=94, Expected=91ea24ee, Actual=91ea20ee 00:06:30.108 [2024-12-06 21:28:50.501958] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=94, Expected=88, Actual=488 00:06:30.108 [2024-12-06 21:28:50.503095] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=94, Expected=88, Actual=488 00:06:30.108 [2024-12-06 21:28:50.504252] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=94, Expected=5e, Actual=4000000005e 00:06:30.108 [2024-12-06 21:28:50.505386] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=94, Expected=5e, Actual=4000000005e 00:06:30.108 [2024-12-06 21:28:50.506587] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=94, Expected=1ab753ed, Actual=58ff1bc8 
00:06:30.108 [2024-12-06 21:28:50.507723] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=94, Expected=c91054db, Actual=ef700a10 00:06:30.108 [2024-12-06 21:28:50.508850] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=94, Expected=a576a3728ecc20d3, Actual=a576a7728ecc20d3 00:06:30.108 [2024-12-06 21:28:50.510015] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=94, Expected=2467efb7ee1d927a, Actual=2467ebb7ee1d927a 00:06:30.108 [2024-12-06 21:28:50.511114] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=94, Expected=88, Actual=488 00:06:30.108 [2024-12-06 21:28:50.512267] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=94, Expected=88, Actual=488 00:06:30.108 [2024-12-06 21:28:50.513390] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=94, Expected=5e, Actual=400005e 00:06:30.108 [2024-12-06 21:28:50.514540] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=94, Expected=5e, Actual=400005e 00:06:30.108 [2024-12-06 21:28:50.515670] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=94, Expected=a576a7728ecc20d3, Actual=8d473c6287da6b6 00:06:30.108 passed 00:06:30.108 Test: dif_copy_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_test ...[2024-12-06 21:28:50.516806] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=94, Expected=2d19b1684f09bf67, Actual=9ecda02464d6de62 00:06:30.108 [2024-12-06 21:28:50.517192] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=f94c, Actual=fd4c 00:06:30.108 [2024-12-06 21:28:50.517476] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=cd05, Actual=c905 00:06:30.108 [2024-12-06 21:28:50.517729] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=488 00:06:30.108 [2024-12-06 21:28:50.517978] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=488 00:06:30.108 [2024-12-06 21:28:50.518236] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=458 00:06:30.108 [2024-12-06 21:28:50.518533] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=458 00:06:30.108 [2024-12-06 21:28:50.518783] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd4c, Actual=79b2 00:06:30.108 [2024-12-06 21:28:50.519056] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=2d16, Actual=2456 00:06:30.108 [2024-12-06 21:28:50.519328] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab757ed, Actual=1ab753ed 00:06:30.108 [2024-12-06 21:28:50.519603] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=b25c3099, Actual=b25c3499 00:06:30.108 [2024-12-06 21:28:50.519855] 
/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=488 00:06:30.108 [2024-12-06 21:28:50.520134] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=488 00:06:30.108 [2024-12-06 21:28:50.520386] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=40000000058 00:06:30.108 [2024-12-06 21:28:50.520664] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=40000000058 00:06:30.108 [2024-12-06 21:28:50.520943] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab753ed, Actual=58ff1bc8 00:06:30.108 [2024-12-06 21:28:50.521216] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=eaa640ac, Actual=ccc61e67 00:06:30.108 [2024-12-06 21:28:50.521486] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a3728ecc20d3, Actual=a576a7728ecc20d3 00:06:30.108 [2024-12-06 21:28:50.521751] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=311874c4f7ce1dbf, Actual=311870c4f7ce1dbf 00:06:30.108 [2024-12-06 21:28:50.522005] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=488 00:06:30.108 [2024-12-06 21:28:50.522266] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=488 00:06:30.108 [2024-12-06 21:28:50.522542] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=4000058 00:06:30.109 [2024-12-06 21:28:50.522807] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=4000058 00:06:30.109 [2024-12-06 21:28:50.523061] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728ecc20d3, Actual=8d473c6287da6b6 00:06:30.109 [2024-12-06 21:28:50.523317] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38662a1b56da30a2, Actual=8bb23b577d0551a7 00:06:30.109 passed 00:06:30.109 Test: dix_sec_512_md_0_error ...passed 00:06:30.109 Test: dix_sec_512_md_8_prchk_0_single_iov ...[2024-12-06 21:28:50.523385] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 479:spdk_dif_ctx_init: *ERROR*: Metadata size is smaller than DIF size. 
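The *ERROR* floods above are expected: each dif_ut case deliberately corrupts one Protection Information field at a time (Guard, App Tag, or Ref Tag) and asserts that the verifier rejects it, which is why every error burst is followed by "passed". A minimal sketch of that comparison, assuming a T10-style 8-byte PI layout — struct and function names here are illustrative, not SPDK's internal `_dif_verify` API:

```c
/* Illustrative sketch, not SPDK code: one PI field is corrupted and the
 * verifier is expected to catch it, mirroring the dif_ut fault injection. */
#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

/* Assumed 8-byte T10 Protection Information field per data block */
struct pi {
    uint16_t guard;    /* CRC over the data block */
    uint16_t app_tag;  /* opaque application-defined tag */
    uint32_t ref_tag;  /* typically the low bits of the LBA */
};

static int pi_verify(uint64_t lba, struct pi exp, struct pi act)
{
    if (exp.guard != act.guard) {
        fprintf(stderr, "Failed to compare Guard: LBA=%" PRIu64
                ", Expected=%x, Actual=%x\n", lba, exp.guard, act.guard);
        return -1;
    }
    if (exp.app_tag != act.app_tag) {
        fprintf(stderr, "Failed to compare App Tag: LBA=%" PRIu64
                ", Expected=%x, Actual=%x\n", lba, exp.app_tag, act.app_tag);
        return -1;
    }
    if (exp.ref_tag != act.ref_tag) {
        fprintf(stderr, "Failed to compare Ref Tag: LBA=%" PRIu64
                ", Expected=%" PRIx32 ", Actual=%" PRIx32 "\n",
                lba, exp.ref_tag, act.ref_tag);
        return -1;
    }
    return 0;
}

int main(void)
{
    struct pi expected  = { 0xfd4c, 0x88, 0x58 };  /* values echo the log */
    struct pi corrupted = expected;

    corrupted.ref_tag = 0x458;  /* inject a ref-tag error, as the test does */

    /* "success" here means the corruption was detected */
    return pi_verify(88, expected, corrupted) == -1 ? 0 : 1;
}
```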
00:06:30.109 passed 00:06:30.109 Test: dix_sec_4096_md_128_prchk_0_single_iov_test ...passed 00:06:30.109 Test: dix_sec_512_md_8_prchk_0_1_2_4_multi_iovs ...passed 00:06:30.109 Test: dix_sec_4096_md_128_prchk_0_1_2_4_multi_iovs_test ...passed 00:06:30.109 Test: dix_sec_4096_md_128_prchk_7_multi_iovs ...passed 00:06:30.109 Test: dix_sec_512_md_8_prchk_7_multi_iovs_split_data ...passed 00:06:30.109 Test: dix_sec_4096_md_128_prchk_7_multi_iovs_split_data_test ...passed 00:06:30.109 Test: dix_sec_512_md_8_prchk_7_multi_iovs_complex_splits ...passed 00:06:30.109 Test: dix_sec_4096_md_128_prchk_7_multi_iovs_complex_splits_test ...passed 00:06:30.109 Test: dix_sec_4096_md_128_inject_1_2_4_8_multi_iovs_test ...[2024-12-06 21:28:50.567154] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=94, Expected=f94c, Actual=fd4c 00:06:30.109 [2024-12-06 21:28:50.568357] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=94, Expected=5a9f, Actual=5e9f 00:06:30.109 [2024-12-06 21:28:50.569539] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=94, Expected=88, Actual=488 00:06:30.109 [2024-12-06 21:28:50.570641] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=94, Expected=88, Actual=488 00:06:30.109 [2024-12-06 21:28:50.571769] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=94, Expected=5e, Actual=45e 00:06:30.109 [2024-12-06 21:28:50.572917] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=94, Expected=5e, Actual=45e 00:06:30.109 [2024-12-06 21:28:50.574036] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=94, Expected=fd4c, Actual=79b2 00:06:30.109 [2024-12-06 21:28:50.575127] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=94, Expected=ba8c, Actual=b3cc 00:06:30.109 [2024-12-06 21:28:50.576256] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=94, Expected=1ab757ed, Actual=1ab753ed 00:06:30.109 [2024-12-06 21:28:50.577347] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=94, Expected=91ea24ee, Actual=91ea20ee 00:06:30.109 [2024-12-06 21:28:50.578477] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=94, Expected=88, Actual=488 00:06:30.109 [2024-12-06 21:28:50.579592] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=94, Expected=88, Actual=488 00:06:30.109 [2024-12-06 21:28:50.580726] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=94, Expected=5e, Actual=4000000005e 00:06:30.109 [2024-12-06 21:28:50.581835] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=94, Expected=5e, Actual=4000000005e 00:06:30.109 [2024-12-06 21:28:50.582959] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=94, Expected=1ab753ed, Actual=58ff1bc8 00:06:30.109 [2024-12-06 21:28:50.584054] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=94, Expected=c91054db, Actual=ef700a10 00:06:30.109 [2024-12-06 21:28:50.585149] 
/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=94, Expected=a576a3728ecc20d3, Actual=a576a7728ecc20d3 00:06:30.109 [2024-12-06 21:28:50.586298] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=94, Expected=2467efb7ee1d927a, Actual=2467ebb7ee1d927a 00:06:30.109 [2024-12-06 21:28:50.587419] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=94, Expected=88, Actual=488 00:06:30.109 [2024-12-06 21:28:50.588559] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=94, Expected=88, Actual=488 00:06:30.109 [2024-12-06 21:28:50.589675] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=94, Expected=5e, Actual=400005e 00:06:30.109 [2024-12-06 21:28:50.590785] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=94, Expected=5e, Actual=400005e 00:06:30.109 [2024-12-06 21:28:50.591891] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=94, Expected=a576a7728ecc20d3, Actual=8d473c6287da6b6 00:06:30.109 passed 00:06:30.109 Test: dix_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_test ...[2024-12-06 21:28:50.592988] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=94, Expected=2d19b1684f09bf67, Actual=9ecda02464d6de62 00:06:30.109 [2024-12-06 21:28:50.593367] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=f94c, Actual=fd4c 00:06:30.109 [2024-12-06 21:28:50.593642] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=cd05, Actual=c905 00:06:30.109 [2024-12-06 21:28:50.593918] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=488 00:06:30.109 [2024-12-06 21:28:50.594179] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=488 00:06:30.109 [2024-12-06 21:28:50.594459] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=458 00:06:30.109 [2024-12-06 21:28:50.594719] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=458 00:06:30.109 [2024-12-06 21:28:50.594965] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd4c, Actual=79b2 00:06:30.109 [2024-12-06 21:28:50.595214] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=2d16, Actual=2456 00:06:30.109 [2024-12-06 21:28:50.595482] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab757ed, Actual=1ab753ed 00:06:30.109 [2024-12-06 21:28:50.595738] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=b25c3099, Actual=b25c3499 00:06:30.109 [2024-12-06 21:28:50.595990] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=488 00:06:30.109 [2024-12-06 21:28:50.596240] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: 
Failed to compare App Tag: LBA=88, Expected=88, Actual=488 00:06:30.109 [2024-12-06 21:28:50.596512] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=40000000058 00:06:30.109 [2024-12-06 21:28:50.596772] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=40000000058 00:06:30.109 [2024-12-06 21:28:50.597036] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab753ed, Actual=58ff1bc8 00:06:30.109 [2024-12-06 21:28:50.597277] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=eaa640ac, Actual=ccc61e67 00:06:30.109 [2024-12-06 21:28:50.597548] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a3728ecc20d3, Actual=a576a7728ecc20d3 00:06:30.109 [2024-12-06 21:28:50.597790] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=311874c4f7ce1dbf, Actual=311870c4f7ce1dbf 00:06:30.109 [2024-12-06 21:28:50.598047] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=488 00:06:30.109 [2024-12-06 21:28:50.598291] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=488 00:06:30.109 [2024-12-06 21:28:50.598551] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=4000058 00:06:30.109 [2024-12-06 21:28:50.598794] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=4000058 00:06:30.109 [2024-12-06 21:28:50.599061] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728ecc20d3, Actual=8d473c6287da6b6 00:06:30.109 [2024-12-06 21:28:50.599301] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38662a1b56da30a2, Actual=8bb23b577d0551a7 00:06:30.109 passed 00:06:30.367 Test: set_md_interleave_iovs_test ...passed 00:06:30.367 Test: set_md_interleave_iovs_split_test ...passed 00:06:30.367 Test: dif_generate_stream_pi_16_test ...passed 00:06:30.367 Test: dif_generate_stream_test ...passed 00:06:30.367 Test: set_md_interleave_iovs_alignment_test ...[2024-12-06 21:28:50.607063] /home/vagrant/spdk_repo/spdk/lib/util/dif.c:1799:spdk_dif_set_md_interleave_iovs: *ERROR*: Buffer overflow will occur. 
00:06:30.367 passed 00:06:30.367 Test: dif_generate_split_test ...passed 00:06:30.367 Test: set_md_interleave_iovs_multi_segments_test ...passed 00:06:30.367 Test: dif_verify_split_test ...passed 00:06:30.367 Test: dif_verify_stream_multi_segments_test ...passed 00:06:30.367 Test: update_crc32c_pi_16_test ...passed 00:06:30.367 Test: update_crc32c_test ...passed 00:06:30.367 Test: dif_update_crc32c_split_test ...passed 00:06:30.367 Test: dif_update_crc32c_stream_multi_segments_test ...passed 00:06:30.367 Test: get_range_with_md_test ...passed 00:06:30.367 Test: dif_sec_512_md_8_prchk_7_multi_iovs_remap_pi_16_test ...passed 00:06:30.367 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_remap_test ...passed 00:06:30.367 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_complex_splits_remap_test ...passed 00:06:30.367 Test: dix_sec_4096_md_128_prchk_7_multi_iovs_remap ...passed 00:06:30.368 Test: dix_sec_512_md_8_prchk_7_multi_iovs_complex_splits_remap_pi_16_test ...passed 00:06:30.368 Test: dix_sec_4096_md_128_prchk_7_multi_iovs_complex_splits_remap_test ...passed 00:06:30.368 Test: dif_generate_and_verify_unmap_test ...passed 00:06:30.368 00:06:30.368 Run Summary: Type Total Ran Passed Failed Inactive 00:06:30.368 suites 1 1 n/a 0 0 00:06:30.368 tests 79 79 79 0 0 00:06:30.368 asserts 3584 3584 3584 0 n/a 00:06:30.368 00:06:30.368 Elapsed time = 0.338 seconds 00:06:30.368 21:28:50 -- unit/unittest.sh@141 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/iov.c/iov_ut 00:06:30.368 00:06:30.368 00:06:30.368 CUnit - A unit testing framework for C - Version 2.1-3 00:06:30.368 http://cunit.sourceforge.net/ 00:06:30.368 00:06:30.368 00:06:30.368 Suite: iov 00:06:30.368 Test: test_single_iov ...passed 00:06:30.368 Test: test_simple_iov ...passed 00:06:30.368 Test: test_complex_iov ...passed 00:06:30.368 Test: test_iovs_to_buf ...passed 00:06:30.368 Test: test_buf_to_iovs ...passed 00:06:30.368 Test: test_memset ...passed 00:06:30.368 Test: test_iov_one ...passed 00:06:30.368 Test: test_iov_xfer ...passed 00:06:30.368 00:06:30.368 Run Summary: Type Total Ran Passed Failed Inactive 00:06:30.368 suites 1 1 n/a 0 0 00:06:30.368 tests 8 8 8 0 0 00:06:30.368 asserts 156 156 156 0 n/a 00:06:30.368 00:06:30.368 Elapsed time = 0.000 seconds 00:06:30.368 21:28:50 -- unit/unittest.sh@142 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/math.c/math_ut 00:06:30.368 00:06:30.368 00:06:30.368 CUnit - A unit testing framework for C - Version 2.1-3 00:06:30.368 http://cunit.sourceforge.net/ 00:06:30.368 00:06:30.368 00:06:30.368 Suite: math 00:06:30.368 Test: test_serial_number_arithmetic ...passed 00:06:30.368 Suite: erase 00:06:30.368 Test: test_memset_s ...passed 00:06:30.368 00:06:30.368 Run Summary: Type Total Ran Passed Failed Inactive 00:06:30.368 suites 2 2 n/a 0 0 00:06:30.368 tests 2 2 2 0 0 00:06:30.368 asserts 18 18 18 0 n/a 00:06:30.368 00:06:30.368 Elapsed time = 0.000 seconds 00:06:30.368 21:28:50 -- unit/unittest.sh@143 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/pipe.c/pipe_ut 00:06:30.368 00:06:30.368 00:06:30.368 CUnit - A unit testing framework for C - Version 2.1-3 00:06:30.368 http://cunit.sourceforge.net/ 00:06:30.368 00:06:30.368 00:06:30.368 Suite: pipe 00:06:30.368 Test: test_create_destroy ...passed 00:06:30.368 Test: test_write_get_buffer ...passed 00:06:30.368 Test: test_write_advance ...passed 00:06:30.368 Test: test_read_get_buffer ...passed 00:06:30.368 Test: test_read_advance ...passed 00:06:30.368 Test: test_data ...passed 00:06:30.368 00:06:30.368 Run Summary: Type Total Ran 
Passed Failed Inactive 00:06:30.368 suites 1 1 n/a 0 0 00:06:30.368 tests 6 6 6 0 0 00:06:30.368 asserts 250 250 250 0 n/a 00:06:30.368 00:06:30.368 Elapsed time = 0.000 seconds 00:06:30.368 21:28:50 -- unit/unittest.sh@144 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/xor.c/xor_ut 00:06:30.368 00:06:30.368 00:06:30.368 CUnit - A unit testing framework for C - Version 2.1-3 00:06:30.368 http://cunit.sourceforge.net/ 00:06:30.368 00:06:30.368 00:06:30.368 Suite: xor 00:06:30.368 Test: test_xor_gen ...passed 00:06:30.368 00:06:30.368 Run Summary: Type Total Ran Passed Failed Inactive 00:06:30.368 suites 1 1 n/a 0 0 00:06:30.368 tests 1 1 1 0 0 00:06:30.368 asserts 17 17 17 0 n/a 00:06:30.368 00:06:30.368 Elapsed time = 0.006 seconds 00:06:30.368 00:06:30.368 real 0m0.712s 00:06:30.368 user 0m0.518s 00:06:30.368 sys 0m0.195s 00:06:30.368 21:28:50 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:30.368 ************************************ 00:06:30.368 END TEST unittest_util 00:06:30.368 ************************************ 00:06:30.368 21:28:50 -- common/autotest_common.sh@10 -- # set +x 00:06:30.368 21:28:50 -- unit/unittest.sh@258 -- # grep -q '#define SPDK_CONFIG_VHOST 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:06:30.368 21:28:50 -- unit/unittest.sh@259 -- # run_test unittest_vhost /home/vagrant/spdk_repo/spdk/test/unit/lib/vhost/vhost.c/vhost_ut 00:06:30.368 21:28:50 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:30.368 21:28:50 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:30.368 21:28:50 -- common/autotest_common.sh@10 -- # set +x 00:06:30.368 ************************************ 00:06:30.368 START TEST unittest_vhost 00:06:30.368 ************************************ 00:06:30.368 21:28:50 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/vhost/vhost.c/vhost_ut 00:06:30.368 00:06:30.368 00:06:30.368 CUnit - A unit testing framework for C - Version 2.1-3 00:06:30.368 http://cunit.sourceforge.net/ 00:06:30.368 00:06:30.368 00:06:30.368 Suite: vhost_suite 00:06:30.368 Test: desc_to_iov_test ...[2024-12-06 21:28:50.845484] /home/vagrant/spdk_repo/spdk/lib/vhost/rte_vhost_user.c: 647:vhost_vring_desc_payload_to_iov: *ERROR*: SPDK_VHOST_IOVS_MAX(129) reached 00:06:30.368 passed 00:06:30.368 Test: create_controller_test ...[2024-12-06 21:28:50.850906] /home/vagrant/spdk_repo/spdk/lib/vhost/vhost.c: 80:vhost_parse_core_mask: *ERROR*: one of selected cpu is outside of core mask(=f) 00:06:30.368 [2024-12-06 21:28:50.851048] /home/vagrant/spdk_repo/spdk/lib/vhost/vhost.c: 126:vhost_dev_register: *ERROR*: cpumask 0xf0 is invalid (core mask is 0xf) 00:06:30.368 [2024-12-06 21:28:50.851207] /home/vagrant/spdk_repo/spdk/lib/vhost/vhost.c: 80:vhost_parse_core_mask: *ERROR*: one of selected cpu is outside of core mask(=f) 00:06:30.368 [2024-12-06 21:28:50.851296] /home/vagrant/spdk_repo/spdk/lib/vhost/vhost.c: 126:vhost_dev_register: *ERROR*: cpumask 0xff is invalid (core mask is 0xf) 00:06:30.368 [2024-12-06 21:28:50.851357] /home/vagrant/spdk_repo/spdk/lib/vhost/vhost.c: 121:vhost_dev_register: *ERROR*: Can't register controller with no name 00:06:30.368 [2024-12-06 21:28:50.851461] /home/vagrant/spdk_repo/spdk/lib/vhost/rte_vhost_user.c:1798:vhost_user_dev_init: *ERROR*: Resulting socket path for controller 
xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx[2024-12-06 21:28:50.852770] /home/vagrant/spdk_repo/spdk/lib/vhost/vhost.c: 133:vhost_dev_register: *ERROR*: vhost controller vdev_name_0 already exists. 00:06:30.368 passed 00:06:30.368 Test: session_find_by_vid_test ...passed 00:06:30.368 Test: remove_controller_test ...[2024-12-06 21:28:50.855331] /home/vagrant/spdk_repo/spdk/lib/vhost/rte_vhost_user.c:1883:vhost_user_dev_unregister: *ERROR*: Controller vdev_name_0 has still valid connection. 00:06:30.368 passed 00:06:30.368 Test: vq_avail_ring_get_test ...passed 00:06:30.368 Test: vq_packed_ring_test ...passed 00:06:30.368 Test: vhost_blk_construct_test ...passed 00:06:30.368 00:06:30.368 Run Summary: Type Total Ran Passed Failed Inactive 00:06:30.368 suites 1 1 n/a 0 0 00:06:30.368 tests 7 7 7 0 0 00:06:30.368 asserts 145 145 145 0 n/a 00:06:30.368 00:06:30.368 Elapsed time = 0.015 seconds 00:06:30.626 00:06:30.626 real 0m0.057s 00:06:30.626 user 0m0.034s 00:06:30.626 sys 0m0.023s 00:06:30.626 21:28:50 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:30.626 21:28:50 -- common/autotest_common.sh@10 -- # set +x 00:06:30.626 ************************************ 00:06:30.626 END TEST unittest_vhost 00:06:30.626 ************************************ 00:06:30.627 21:28:50 -- unit/unittest.sh@261 -- # run_test unittest_dma /home/vagrant/spdk_repo/spdk/test/unit/lib/dma/dma.c/dma_ut 00:06:30.627 21:28:50 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:30.627 21:28:50 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:30.627 21:28:50 -- common/autotest_common.sh@10 -- # set +x 00:06:30.627 ************************************ 00:06:30.627 START TEST unittest_dma 00:06:30.627 ************************************ 00:06:30.627 21:28:50 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/dma/dma.c/dma_ut 00:06:30.627 00:06:30.627 00:06:30.627 CUnit - A unit testing framework for C - Version 2.1-3 00:06:30.627 http://cunit.sourceforge.net/ 00:06:30.627 00:06:30.627 00:06:30.627 Suite: dma_suite 00:06:30.627 Test: test_dma ...[2024-12-06 21:28:50.949450] /home/vagrant/spdk_repo/spdk/lib/dma/dma.c: 37:spdk_memory_domain_create: *ERROR*: Context size can't be 0 00:06:30.627 passed 00:06:30.627 00:06:30.627 Run Summary: Type Total Ran Passed Failed Inactive 00:06:30.627 suites 1 1 n/a 0 0 00:06:30.627 tests 1 1 1 0 0 00:06:30.627 asserts 50 50 50 0 n/a 00:06:30.627 00:06:30.627 Elapsed time = 0.001 seconds 00:06:30.627 00:06:30.627 real 0m0.029s 00:06:30.627 user 0m0.016s 00:06:30.627 sys 0m0.013s 00:06:30.627 21:28:50 -- 
common/autotest_common.sh@1115 -- # xtrace_disable 00:06:30.627 21:28:50 -- common/autotest_common.sh@10 -- # set +x 00:06:30.627 ************************************ 00:06:30.627 END TEST unittest_dma 00:06:30.627 ************************************ 00:06:30.627 21:28:50 -- unit/unittest.sh@263 -- # run_test unittest_init unittest_init 00:06:30.627 21:28:50 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:30.627 21:28:50 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:30.627 21:28:50 -- common/autotest_common.sh@10 -- # set +x 00:06:30.627 ************************************ 00:06:30.627 START TEST unittest_init 00:06:30.627 ************************************ 00:06:30.627 21:28:51 -- common/autotest_common.sh@1114 -- # unittest_init 00:06:30.627 21:28:51 -- unit/unittest.sh@148 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/init/subsystem.c/subsystem_ut 00:06:30.627 00:06:30.627 00:06:30.627 CUnit - A unit testing framework for C - Version 2.1-3 00:06:30.627 http://cunit.sourceforge.net/ 00:06:30.627 00:06:30.627 00:06:30.627 Suite: subsystem_suite 00:06:30.627 Test: subsystem_sort_test_depends_on_single ...passed 00:06:30.627 Test: subsystem_sort_test_depends_on_multiple ...passed 00:06:30.627 Test: subsystem_sort_test_missing_dependency ...[2024-12-06 21:28:51.029561] /home/vagrant/spdk_repo/spdk/lib/init/subsystem.c: 190:spdk_subsystem_init: *ERROR*: subsystem A dependency B is missing 00:06:30.627 passed 00:06:30.627 00:06:30.627 Run Summary: Type Total Ran Passed Failed Inactive 00:06:30.627 suites 1 1 n/a 0 0 00:06:30.627 tests 3 3 3 0 0 00:06:30.627 asserts 20 20 20 0 n/a 00:06:30.627 00:06:30.627 Elapsed time = 0.000 seconds 00:06:30.627 [2024-12-06 21:28:51.029786] /home/vagrant/spdk_repo/spdk/lib/init/subsystem.c: 185:spdk_subsystem_init: *ERROR*: subsystem C is missing 00:06:30.627 00:06:30.627 real 0m0.036s 00:06:30.627 user 0m0.016s 00:06:30.627 sys 0m0.020s 00:06:30.627 21:28:51 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:30.627 21:28:51 -- common/autotest_common.sh@10 -- # set +x 00:06:30.627 ************************************ 00:06:30.627 END TEST unittest_init 00:06:30.627 ************************************ 00:06:30.627 21:28:51 -- unit/unittest.sh@265 -- # [[ y == y ]] 00:06:30.627 21:28:51 -- unit/unittest.sh@266 -- # hostname 00:06:30.627 21:28:51 -- unit/unittest.sh@266 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -d . -c --no-external -t ubuntu2404-cloud-1720510786-2314 -o /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_test.info 00:06:30.885 geninfo: WARNING: invalid characters removed from testname! 
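The subsystem_suite failures exercised just above ("subsystem A dependency B is missing") come from sorting registered subsystems into a valid init order and rejecting a declared dependency that was never registered. A standalone sketch of that check — this is not SPDK's registration macro API, only an assumed simplification of the missing-dependency detection:

```c
/* Illustrative sketch, not SPDK code: detect a declared dependency on a
 * subsystem that was never registered, as subsystem_sort_test_missing_dependency
 * expects spdk_subsystem_init to do. */
#include <stddef.h>
#include <stdio.h>
#include <string.h>

struct dep { const char *subsystem, *depends_on; };

static const char *registered[] = { "A", "C" };     /* "B" never registered */
static const struct dep deps[]  = { { "A", "B" } }; /* A declares it needs B */

static int is_registered(const char *name)
{
    for (size_t i = 0; i < sizeof(registered) / sizeof(*registered); i++)
        if (strcmp(registered[i], name) == 0)
            return 1;
    return 0;
}

int main(void)
{
    for (size_t i = 0; i < sizeof(deps) / sizeof(*deps); i++)
        if (!is_registered(deps[i].depends_on))
            fprintf(stderr, "subsystem %s dependency %s is missing\n",
                    deps[i].subsystem, deps[i].depends_on);
    return 0;
}
```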
00:07:03.066 21:29:20 -- unit/unittest.sh@267 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_total.info 00:07:04.999 21:29:25 -- unit/unittest.sh@268 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_total.info -o /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info 00:07:07.533 21:29:27 -- unit/unittest.sh@269 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info '/home/vagrant/spdk_repo/spdk/app/*' -o /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info 00:07:10.069 21:29:30 -- unit/unittest.sh@270 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info '/home/vagrant/spdk_repo/spdk/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info 00:07:13.361 21:29:33 -- unit/unittest.sh@271 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info '/home/vagrant/spdk_repo/spdk/examples/*' -o /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info 00:07:15.892 21:29:35 -- unit/unittest.sh@272 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info '/home/vagrant/spdk_repo/spdk/test/*' -o /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info 00:07:17.795 21:29:37 -- unit/unittest.sh@273 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_base.info /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_test.info 00:07:17.795 21:29:37 -- unit/unittest.sh@274 -- # genhtml /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info --output-directory /home/vagrant/spdk_repo/spdk/../output/ut_coverage 00:07:18.363 Reading data file /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info 00:07:18.363 Found 313 entries. 00:07:18.363 Found common filename prefix "/home/vagrant/spdk_repo/spdk" 00:07:18.363 Writing .css and .png files. 00:07:18.363 Generating output. 
00:07:18.363 Processing file include/linux/virtio_ring.h 00:07:18.622 Processing file include/spdk/nvmf_transport.h 00:07:18.623 Processing file include/spdk/nvme_spec.h 00:07:18.623 Processing file include/spdk/endian.h 00:07:18.623 Processing file include/spdk/base64.h 00:07:18.623 Processing file include/spdk/nvme.h 00:07:18.623 Processing file include/spdk/thread.h 00:07:18.623 Processing file include/spdk/mmio.h 00:07:18.623 Processing file include/spdk/histogram_data.h 00:07:18.623 Processing file include/spdk/trace.h 00:07:18.623 Processing file include/spdk/util.h 00:07:18.623 Processing file include/spdk/bdev_module.h 00:07:18.623 Processing file include/spdk_internal/virtio.h 00:07:18.623 Processing file include/spdk_internal/sgl.h 00:07:18.623 Processing file include/spdk_internal/sock.h 00:07:18.623 Processing file include/spdk_internal/utf.h 00:07:18.623 Processing file include/spdk_internal/nvme_tcp.h 00:07:18.623 Processing file include/spdk_internal/rdma.h 00:07:18.882 Processing file lib/accel/accel_rpc.c 00:07:18.882 Processing file lib/accel/accel_sw.c 00:07:18.882 Processing file lib/accel/accel.c 00:07:19.141 Processing file lib/bdev/bdev_zone.c 00:07:19.141 Processing file lib/bdev/part.c 00:07:19.141 Processing file lib/bdev/scsi_nvme.c 00:07:19.141 Processing file lib/bdev/bdev.c 00:07:19.141 Processing file lib/bdev/bdev_rpc.c 00:07:19.399 Processing file lib/blob/blobstore.h 00:07:19.399 Processing file lib/blob/blob_bs_dev.c 00:07:19.399 Processing file lib/blob/blobstore.c 00:07:19.399 Processing file lib/blob/request.c 00:07:19.399 Processing file lib/blob/zeroes.c 00:07:19.657 Processing file lib/blobfs/tree.c 00:07:19.657 Processing file lib/blobfs/blobfs.c 00:07:19.657 Processing file lib/conf/conf.c 00:07:19.657 Processing file lib/dma/dma.c 00:07:19.916 Processing file lib/env_dpdk/pci_dpdk_2207.c 00:07:19.916 Processing file lib/env_dpdk/pci_dpdk.c 00:07:19.916 Processing file lib/env_dpdk/pci_idxd.c 00:07:19.916 Processing file lib/env_dpdk/threads.c 00:07:19.916 Processing file lib/env_dpdk/env.c 00:07:19.916 Processing file lib/env_dpdk/memory.c 00:07:19.916 Processing file lib/env_dpdk/pci.c 00:07:19.916 Processing file lib/env_dpdk/pci_virtio.c 00:07:19.916 Processing file lib/env_dpdk/pci_dpdk_2211.c 00:07:19.916 Processing file lib/env_dpdk/pci_ioat.c 00:07:19.916 Processing file lib/env_dpdk/init.c 00:07:19.916 Processing file lib/env_dpdk/sigbus_handler.c 00:07:19.916 Processing file lib/env_dpdk/pci_vmd.c 00:07:19.916 Processing file lib/env_dpdk/pci_event.c 00:07:20.176 Processing file lib/event/app.c 00:07:20.176 Processing file lib/event/reactor.c 00:07:20.176 Processing file lib/event/log_rpc.c 00:07:20.176 Processing file lib/event/scheduler_static.c 00:07:20.176 Processing file lib/event/app_rpc.c 00:07:20.744 Processing file lib/ftl/ftl_l2p.c 00:07:20.744 Processing file lib/ftl/ftl_io.h 00:07:20.744 Processing file lib/ftl/ftl_trace.c 00:07:20.744 Processing file lib/ftl/ftl_layout.c 00:07:20.744 Processing file lib/ftl/ftl_io.c 00:07:20.744 Processing file lib/ftl/ftl_band_ops.c 00:07:20.744 Processing file lib/ftl/ftl_core.h 00:07:20.744 Processing file lib/ftl/ftl_debug.c 00:07:20.744 Processing file lib/ftl/ftl_rq.c 00:07:20.744 Processing file lib/ftl/ftl_sb.c 00:07:20.744 Processing file lib/ftl/ftl_nv_cache_io.h 00:07:20.744 Processing file lib/ftl/ftl_l2p_cache.c 00:07:20.744 Processing file lib/ftl/ftl_debug.h 00:07:20.744 Processing file lib/ftl/ftl_core.c 00:07:20.744 Processing file lib/ftl/ftl_writer.h 00:07:20.744 
Processing file lib/ftl/ftl_writer.c 00:07:20.744 Processing file lib/ftl/ftl_nv_cache.h 00:07:20.744 Processing file lib/ftl/ftl_init.c 00:07:20.744 Processing file lib/ftl/ftl_reloc.c 00:07:20.744 Processing file lib/ftl/ftl_band.h 00:07:20.744 Processing file lib/ftl/ftl_nv_cache.c 00:07:20.744 Processing file lib/ftl/ftl_l2p_flat.c 00:07:20.744 Processing file lib/ftl/ftl_band.c 00:07:20.744 Processing file lib/ftl/ftl_p2l.c 00:07:20.744 Processing file lib/ftl/base/ftl_base_dev.c 00:07:20.744 Processing file lib/ftl/base/ftl_base_bdev.c 00:07:21.003 Processing file lib/ftl/mngt/ftl_mngt_self_test.c 00:07:21.003 Processing file lib/ftl/mngt/ftl_mngt_md.c 00:07:21.003 Processing file lib/ftl/mngt/ftl_mngt_band.c 00:07:21.003 Processing file lib/ftl/mngt/ftl_mngt_p2l.c 00:07:21.003 Processing file lib/ftl/mngt/ftl_mngt_misc.c 00:07:21.003 Processing file lib/ftl/mngt/ftl_mngt_recovery.c 00:07:21.003 Processing file lib/ftl/mngt/ftl_mngt_startup.c 00:07:21.003 Processing file lib/ftl/mngt/ftl_mngt_upgrade.c 00:07:21.003 Processing file lib/ftl/mngt/ftl_mngt_l2p.c 00:07:21.003 Processing file lib/ftl/mngt/ftl_mngt_bdev.c 00:07:21.003 Processing file lib/ftl/mngt/ftl_mngt_shutdown.c 00:07:21.003 Processing file lib/ftl/mngt/ftl_mngt.c 00:07:21.003 Processing file lib/ftl/mngt/ftl_mngt_ioch.c 00:07:21.003 Processing file lib/ftl/nvc/ftl_nvc_dev.c 00:07:21.003 Processing file lib/ftl/nvc/ftl_nvc_bdev_vss.c 00:07:21.003 Processing file lib/ftl/upgrade/ftl_sb_v3.c 00:07:21.003 Processing file lib/ftl/upgrade/ftl_sb_v5.c 00:07:21.003 Processing file lib/ftl/upgrade/ftl_sb_upgrade.c 00:07:21.003 Processing file lib/ftl/upgrade/ftl_layout_upgrade.c 00:07:21.261 Processing file lib/ftl/utils/ftl_property.c 00:07:21.261 Processing file lib/ftl/utils/ftl_bitmap.c 00:07:21.261 Processing file lib/ftl/utils/ftl_addr_utils.h 00:07:21.261 Processing file lib/ftl/utils/ftl_property.h 00:07:21.261 Processing file lib/ftl/utils/ftl_conf.c 00:07:21.261 Processing file lib/ftl/utils/ftl_md.c 00:07:21.261 Processing file lib/ftl/utils/ftl_mempool.c 00:07:21.261 Processing file lib/ftl/utils/ftl_layout_tracker_bdev.c 00:07:21.261 Processing file lib/ftl/utils/ftl_df.h 00:07:21.529 Processing file lib/idxd/idxd_internal.h 00:07:21.529 Processing file lib/idxd/idxd_user.c 00:07:21.529 Processing file lib/idxd/idxd.c 00:07:21.529 Processing file lib/idxd/idxd_kernel.c 00:07:21.529 Processing file lib/init/subsystem_rpc.c 00:07:21.529 Processing file lib/init/subsystem.c 00:07:21.529 Processing file lib/init/rpc.c 00:07:21.529 Processing file lib/init/json_config.c 00:07:21.529 Processing file lib/ioat/ioat_internal.h 00:07:21.529 Processing file lib/ioat/ioat.c 00:07:22.107 Processing file lib/iscsi/init_grp.c 00:07:22.107 Processing file lib/iscsi/portal_grp.c 00:07:22.107 Processing file lib/iscsi/iscsi.h 00:07:22.107 Processing file lib/iscsi/iscsi.c 00:07:22.107 Processing file lib/iscsi/iscsi_rpc.c 00:07:22.107 Processing file lib/iscsi/task.h 00:07:22.107 Processing file lib/iscsi/task.c 00:07:22.107 Processing file lib/iscsi/conn.c 00:07:22.107 Processing file lib/iscsi/md5.c 00:07:22.107 Processing file lib/iscsi/iscsi_subsystem.c 00:07:22.107 Processing file lib/iscsi/param.c 00:07:22.107 Processing file lib/iscsi/tgt_node.c 00:07:22.107 Processing file lib/json/json_parse.c 00:07:22.107 Processing file lib/json/json_write.c 00:07:22.107 Processing file lib/json/json_util.c 00:07:22.107 Processing file lib/jsonrpc/jsonrpc_server.c 00:07:22.107 Processing file lib/jsonrpc/jsonrpc_client_tcp.c 00:07:22.107 
Processing file lib/jsonrpc/jsonrpc_client.c 00:07:22.107 Processing file lib/jsonrpc/jsonrpc_server_tcp.c 00:07:22.107 Processing file lib/log/log.c 00:07:22.107 Processing file lib/log/log_flags.c 00:07:22.107 Processing file lib/log/log_deprecated.c 00:07:22.365 Processing file lib/lvol/lvol.c 00:07:22.365 Processing file lib/nbd/nbd_rpc.c 00:07:22.365 Processing file lib/nbd/nbd.c 00:07:22.365 Processing file lib/notify/notify.c 00:07:22.365 Processing file lib/notify/notify_rpc.c 00:07:23.331 Processing file lib/nvme/nvme_pcie.c 00:07:23.331 Processing file lib/nvme/nvme_tcp.c 00:07:23.331 Processing file lib/nvme/nvme_poll_group.c 00:07:23.331 Processing file lib/nvme/nvme_ctrlr.c 00:07:23.331 Processing file lib/nvme/nvme_cuse.c 00:07:23.331 Processing file lib/nvme/nvme_qpair.c 00:07:23.331 Processing file lib/nvme/nvme_vfio_user.c 00:07:23.331 Processing file lib/nvme/nvme_ns_ocssd_cmd.c 00:07:23.331 Processing file lib/nvme/nvme_ns.c 00:07:23.331 Processing file lib/nvme/nvme_quirks.c 00:07:23.331 Processing file lib/nvme/nvme_internal.h 00:07:23.331 Processing file lib/nvme/nvme_ctrlr_ocssd_cmd.c 00:07:23.331 Processing file lib/nvme/nvme_rdma.c 00:07:23.331 Processing file lib/nvme/nvme_fabric.c 00:07:23.331 Processing file lib/nvme/nvme_io_msg.c 00:07:23.331 Processing file lib/nvme/nvme_zns.c 00:07:23.331 Processing file lib/nvme/nvme_pcie_internal.h 00:07:23.331 Processing file lib/nvme/nvme_opal.c 00:07:23.331 Processing file lib/nvme/nvme_transport.c 00:07:23.331 Processing file lib/nvme/nvme_pcie_common.c 00:07:23.331 Processing file lib/nvme/nvme.c 00:07:23.331 Processing file lib/nvme/nvme_ns_cmd.c 00:07:23.331 Processing file lib/nvme/nvme_discovery.c 00:07:23.331 Processing file lib/nvme/nvme_ctrlr_cmd.c 00:07:23.589 Processing file lib/nvmf/ctrlr.c 00:07:23.589 Processing file lib/nvmf/nvmf_internal.h 00:07:23.589 Processing file lib/nvmf/transport.c 00:07:23.589 Processing file lib/nvmf/subsystem.c 00:07:23.589 Processing file lib/nvmf/rdma.c 00:07:23.589 Processing file lib/nvmf/nvmf.c 00:07:23.589 Processing file lib/nvmf/nvmf_rpc.c 00:07:23.589 Processing file lib/nvmf/tcp.c 00:07:23.589 Processing file lib/nvmf/ctrlr_discovery.c 00:07:23.589 Processing file lib/nvmf/ctrlr_bdev.c 00:07:23.847 Processing file lib/rdma/rdma_verbs.c 00:07:23.847 Processing file lib/rdma/common.c 00:07:23.847 Processing file lib/rpc/rpc.c 00:07:24.105 Processing file lib/scsi/port.c 00:07:24.106 Processing file lib/scsi/lun.c 00:07:24.106 Processing file lib/scsi/dev.c 00:07:24.106 Processing file lib/scsi/scsi_rpc.c 00:07:24.106 Processing file lib/scsi/task.c 00:07:24.106 Processing file lib/scsi/scsi_bdev.c 00:07:24.106 Processing file lib/scsi/scsi.c 00:07:24.106 Processing file lib/scsi/scsi_pr.c 00:07:24.106 Processing file lib/sock/sock_rpc.c 00:07:24.106 Processing file lib/sock/sock.c 00:07:24.106 Processing file lib/thread/thread.c 00:07:24.106 Processing file lib/thread/iobuf.c 00:07:24.364 Processing file lib/trace/trace.c 00:07:24.364 Processing file lib/trace/trace_flags.c 00:07:24.364 Processing file lib/trace/trace_rpc.c 00:07:24.364 Processing file lib/trace_parser/trace.cpp 00:07:24.364 Processing file lib/ublk/ublk_rpc.c 00:07:24.364 Processing file lib/ublk/ublk.c 00:07:24.364 Processing file lib/ut/ut.c 00:07:24.623 Processing file lib/ut_mock/mock.c 00:07:24.881 Processing file lib/util/uuid.c 00:07:24.881 Processing file lib/util/fd.c 00:07:24.881 Processing file lib/util/crc32c.c 00:07:24.881 Processing file lib/util/crc32.c 00:07:24.881 Processing file 
lib/util/bit_array.c 00:07:24.881 Processing file lib/util/crc32_ieee.c 00:07:24.881 Processing file lib/util/fd_group.c 00:07:24.881 Processing file lib/util/base64.c 00:07:24.881 Processing file lib/util/hexlify.c 00:07:24.881 Processing file lib/util/crc16.c 00:07:24.881 Processing file lib/util/file.c 00:07:24.881 Processing file lib/util/zipf.c 00:07:24.881 Processing file lib/util/dif.c 00:07:24.881 Processing file lib/util/strerror_tls.c 00:07:24.881 Processing file lib/util/iov.c 00:07:24.881 Processing file lib/util/pipe.c 00:07:24.881 Processing file lib/util/cpuset.c 00:07:24.881 Processing file lib/util/math.c 00:07:24.881 Processing file lib/util/string.c 00:07:24.881 Processing file lib/util/crc64.c 00:07:24.881 Processing file lib/util/xor.c 00:07:24.881 Processing file lib/vfio_user/host/vfio_user.c 00:07:24.881 Processing file lib/vfio_user/host/vfio_user_pci.c 00:07:25.139 Processing file lib/vhost/vhost_blk.c 00:07:25.139 Processing file lib/vhost/vhost.c 00:07:25.139 Processing file lib/vhost/vhost_internal.h 00:07:25.139 Processing file lib/vhost/vhost_rpc.c 00:07:25.139 Processing file lib/vhost/vhost_scsi.c 00:07:25.139 Processing file lib/vhost/rte_vhost_user.c 00:07:25.398 Processing file lib/virtio/virtio_vhost_user.c 00:07:25.398 Processing file lib/virtio/virtio_vfio_user.c 00:07:25.398 Processing file lib/virtio/virtio.c 00:07:25.398 Processing file lib/virtio/virtio_pci.c 00:07:25.398 Processing file lib/vmd/vmd.c 00:07:25.398 Processing file lib/vmd/led.c 00:07:25.398 Processing file module/accel/dsa/accel_dsa.c 00:07:25.398 Processing file module/accel/dsa/accel_dsa_rpc.c 00:07:25.657 Processing file module/accel/error/accel_error.c 00:07:25.657 Processing file module/accel/error/accel_error_rpc.c 00:07:25.657 Processing file module/accel/iaa/accel_iaa.c 00:07:25.657 Processing file module/accel/iaa/accel_iaa_rpc.c 00:07:25.657 Processing file module/accel/ioat/accel_ioat_rpc.c 00:07:25.657 Processing file module/accel/ioat/accel_ioat.c 00:07:25.915 Processing file module/bdev/aio/bdev_aio_rpc.c 00:07:25.915 Processing file module/bdev/aio/bdev_aio.c 00:07:25.915 Processing file module/bdev/delay/vbdev_delay.c 00:07:25.915 Processing file module/bdev/delay/vbdev_delay_rpc.c 00:07:25.915 Processing file module/bdev/error/vbdev_error.c 00:07:25.915 Processing file module/bdev/error/vbdev_error_rpc.c 00:07:26.174 Processing file module/bdev/ftl/bdev_ftl.c 00:07:26.174 Processing file module/bdev/ftl/bdev_ftl_rpc.c 00:07:26.174 Processing file module/bdev/gpt/gpt.c 00:07:26.174 Processing file module/bdev/gpt/gpt.h 00:07:26.174 Processing file module/bdev/gpt/vbdev_gpt.c 00:07:26.174 Processing file module/bdev/iscsi/bdev_iscsi_rpc.c 00:07:26.174 Processing file module/bdev/iscsi/bdev_iscsi.c 00:07:26.433 Processing file module/bdev/lvol/vbdev_lvol.c 00:07:26.433 Processing file module/bdev/lvol/vbdev_lvol_rpc.c 00:07:26.433 Processing file module/bdev/malloc/bdev_malloc.c 00:07:26.433 Processing file module/bdev/malloc/bdev_malloc_rpc.c 00:07:26.433 Processing file module/bdev/null/bdev_null.c 00:07:26.433 Processing file module/bdev/null/bdev_null_rpc.c 00:07:26.694 Processing file module/bdev/nvme/nvme_rpc.c 00:07:26.694 Processing file module/bdev/nvme/vbdev_opal.c 00:07:26.694 Processing file module/bdev/nvme/bdev_nvme_rpc.c 00:07:26.694 Processing file module/bdev/nvme/vbdev_opal_rpc.c 00:07:26.694 Processing file module/bdev/nvme/bdev_mdns_client.c 00:07:26.694 Processing file module/bdev/nvme/bdev_nvme_cuse_rpc.c 00:07:26.694 Processing file 
module/bdev/nvme/bdev_nvme.c 00:07:26.953 Processing file module/bdev/passthru/vbdev_passthru.c 00:07:26.953 Processing file module/bdev/passthru/vbdev_passthru_rpc.c 00:07:27.212 Processing file module/bdev/raid/bdev_raid.c 00:07:27.212 Processing file module/bdev/raid/raid1.c 00:07:27.212 Processing file module/bdev/raid/bdev_raid_sb.c 00:07:27.212 Processing file module/bdev/raid/concat.c 00:07:27.212 Processing file module/bdev/raid/raid0.c 00:07:27.212 Processing file module/bdev/raid/bdev_raid_rpc.c 00:07:27.212 Processing file module/bdev/raid/bdev_raid.h 00:07:27.212 Processing file module/bdev/raid/raid5f.c 00:07:27.212 Processing file module/bdev/split/vbdev_split_rpc.c 00:07:27.212 Processing file module/bdev/split/vbdev_split.c 00:07:27.212 Processing file module/bdev/virtio/bdev_virtio_rpc.c 00:07:27.212 Processing file module/bdev/virtio/bdev_virtio_blk.c 00:07:27.212 Processing file module/bdev/virtio/bdev_virtio_scsi.c 00:07:27.472 Processing file module/bdev/zone_block/vbdev_zone_block.c 00:07:27.472 Processing file module/bdev/zone_block/vbdev_zone_block_rpc.c 00:07:27.472 Processing file module/blob/bdev/blob_bdev.c 00:07:27.472 Processing file module/blobfs/bdev/blobfs_bdev.c 00:07:27.472 Processing file module/blobfs/bdev/blobfs_bdev_rpc.c 00:07:27.472 Processing file module/env_dpdk/env_dpdk_rpc.c 00:07:27.732 Processing file module/event/subsystems/accel/accel.c 00:07:27.732 Processing file module/event/subsystems/bdev/bdev.c 00:07:27.732 Processing file module/event/subsystems/iobuf/iobuf.c 00:07:27.732 Processing file module/event/subsystems/iobuf/iobuf_rpc.c 00:07:27.732 Processing file module/event/subsystems/iscsi/iscsi.c 00:07:27.992 Processing file module/event/subsystems/nbd/nbd.c 00:07:27.992 Processing file module/event/subsystems/nvmf/nvmf_tgt.c 00:07:27.992 Processing file module/event/subsystems/nvmf/nvmf_rpc.c 00:07:27.992 Processing file module/event/subsystems/scheduler/scheduler.c 00:07:27.992 Processing file module/event/subsystems/scsi/scsi.c 00:07:27.992 Processing file module/event/subsystems/sock/sock.c 00:07:28.251 Processing file module/event/subsystems/ublk/ublk.c 00:07:28.251 Processing file module/event/subsystems/vhost_blk/vhost_blk.c 00:07:28.251 Processing file module/event/subsystems/vhost_scsi/vhost_scsi.c 00:07:28.251 Processing file module/event/subsystems/vmd/vmd.c 00:07:28.251 Processing file module/event/subsystems/vmd/vmd_rpc.c 00:07:28.251 Processing file module/scheduler/dpdk_governor/dpdk_governor.c 00:07:28.510 Processing file module/scheduler/dynamic/scheduler_dynamic.c 00:07:28.510 Processing file module/scheduler/gscheduler/gscheduler.c 00:07:28.510 Processing file module/sock/sock_kernel.h 00:07:28.510 Processing file module/sock/posix/posix.c 00:07:28.510 Writing directory view page. 00:07:28.510 Overall coverage rate: 00:07:28.510 lines......: 38.6% (39266 of 101740 lines) 00:07:28.510 functions..: 42.2% (3587 of 8494 functions) 00:07:28.510 00:07:28.510 00:07:28.510 ===================== 00:07:28.510 All unit tests passed 00:07:28.510 21:29:48 -- unit/unittest.sh@277 -- # set +x 00:07:28.510 ===================== 00:07:28.510 WARN: lcov not installed or SPDK built without coverage! 
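[editor's note] The "Processing file ..." stream above, followed by "Writing directory view page." and the "Overall coverage rate" lines, is genhtml output from the autotest coverage step. As a rough sketch of how such a report can be regenerated outside the CI harness (the output file names and the filter pattern are illustrative assumptions, not the exact autotest invocation):

```bash
# Hedged sketch: rebuild an lcov/genhtml coverage report by hand.
# Assumes SPDK was configured with --enable-coverage so gcov counters exist;
# cov.info and coverage_html are illustrative names, not autotest defaults.
cd /home/vagrant/spdk_repo/spdk

# Capture the counters; the --rc flags mirror the LCOV_OPTS that the test
# scripts export elsewhere in this log.
lcov --capture --directory . --output-file cov.info \
    --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1

# Filter out system headers so only SPDK sources are reported.
lcov --remove cov.info '/usr/*' --output-file cov.info

# genhtml prints one "Processing file <path>" line per source file, then
# "Writing directory view page." and the overall line/function rates,
# matching the output captured above.
genhtml cov.info --output-directory coverage_html
```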
00:07:28.510 00:07:28.510 00:07:28.510 00:07:28.510 real 3m3.600s 00:07:28.510 user 2m39.257s 00:07:28.510 sys 0m14.780s 00:07:28.510 21:29:49 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:28.510 21:29:49 -- common/autotest_common.sh@10 -- # set +x 00:07:28.510 ************************************ 00:07:28.510 END TEST unittest 00:07:28.510 ************************************ 00:07:28.769 21:29:49 -- spdk/autotest.sh@152 -- # '[' 1 -eq 1 ']' 00:07:28.769 21:29:49 -- spdk/autotest.sh@153 -- # [[ 0 -eq 1 ]] 00:07:28.769 21:29:49 -- spdk/autotest.sh@153 -- # [[ 0 -eq 1 ]] 00:07:28.769 21:29:49 -- spdk/autotest.sh@160 -- # timing_enter lib 00:07:28.769 21:29:49 -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:28.769 21:29:49 -- common/autotest_common.sh@10 -- # set +x 00:07:28.769 21:29:49 -- spdk/autotest.sh@162 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:07:28.769 21:29:49 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:28.769 21:29:49 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:28.769 21:29:49 -- common/autotest_common.sh@10 -- # set +x 00:07:28.769 ************************************ 00:07:28.769 START TEST env 00:07:28.769 ************************************ 00:07:28.769 21:29:49 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:07:28.769 * Looking for test storage... 00:07:28.769 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:07:28.769 21:29:49 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:07:28.769 21:29:49 -- common/autotest_common.sh@1690 -- # lcov --version 00:07:28.769 21:29:49 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:07:28.769 21:29:49 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:07:28.769 21:29:49 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:07:28.769 21:29:49 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:07:28.769 21:29:49 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:07:28.769 21:29:49 -- scripts/common.sh@335 -- # IFS=.-: 00:07:28.769 21:29:49 -- scripts/common.sh@335 -- # read -ra ver1 00:07:28.769 21:29:49 -- scripts/common.sh@336 -- # IFS=.-: 00:07:28.769 21:29:49 -- scripts/common.sh@336 -- # read -ra ver2 00:07:28.769 21:29:49 -- scripts/common.sh@337 -- # local 'op=<' 00:07:28.769 21:29:49 -- scripts/common.sh@339 -- # ver1_l=2 00:07:28.769 21:29:49 -- scripts/common.sh@340 -- # ver2_l=1 00:07:28.769 21:29:49 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:07:28.769 21:29:49 -- scripts/common.sh@343 -- # case "$op" in 00:07:28.769 21:29:49 -- scripts/common.sh@344 -- # : 1 00:07:28.769 21:29:49 -- scripts/common.sh@363 -- # (( v = 0 )) 00:07:28.769 21:29:49 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:28.769 21:29:49 -- scripts/common.sh@364 -- # decimal 1 00:07:28.769 21:29:49 -- scripts/common.sh@352 -- # local d=1 00:07:28.769 21:29:49 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:28.769 21:29:49 -- scripts/common.sh@354 -- # echo 1 00:07:28.769 21:29:49 -- scripts/common.sh@364 -- # ver1[v]=1 00:07:28.769 21:29:49 -- scripts/common.sh@365 -- # decimal 2 00:07:28.769 21:29:49 -- scripts/common.sh@352 -- # local d=2 00:07:28.769 21:29:49 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:28.769 21:29:49 -- scripts/common.sh@354 -- # echo 2 00:07:28.769 21:29:49 -- scripts/common.sh@365 -- # ver2[v]=2 00:07:28.769 21:29:49 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:07:28.769 21:29:49 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:07:28.769 21:29:49 -- scripts/common.sh@367 -- # return 0 00:07:28.769 21:29:49 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:28.769 21:29:49 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:07:28.769 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:28.769 --rc genhtml_branch_coverage=1 00:07:28.769 --rc genhtml_function_coverage=1 00:07:28.769 --rc genhtml_legend=1 00:07:28.769 --rc geninfo_all_blocks=1 00:07:28.769 --rc geninfo_unexecuted_blocks=1 00:07:28.769 00:07:28.769 ' 00:07:28.769 21:29:49 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:07:28.769 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:28.769 --rc genhtml_branch_coverage=1 00:07:28.769 --rc genhtml_function_coverage=1 00:07:28.769 --rc genhtml_legend=1 00:07:28.769 --rc geninfo_all_blocks=1 00:07:28.769 --rc geninfo_unexecuted_blocks=1 00:07:28.769 00:07:28.769 ' 00:07:28.769 21:29:49 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:07:28.769 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:28.769 --rc genhtml_branch_coverage=1 00:07:28.769 --rc genhtml_function_coverage=1 00:07:28.769 --rc genhtml_legend=1 00:07:28.769 --rc geninfo_all_blocks=1 00:07:28.769 --rc geninfo_unexecuted_blocks=1 00:07:28.769 00:07:28.769 ' 00:07:28.770 21:29:49 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:07:28.770 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:28.770 --rc genhtml_branch_coverage=1 00:07:28.770 --rc genhtml_function_coverage=1 00:07:28.770 --rc genhtml_legend=1 00:07:28.770 --rc geninfo_all_blocks=1 00:07:28.770 --rc geninfo_unexecuted_blocks=1 00:07:28.770 00:07:28.770 ' 00:07:28.770 21:29:49 -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:07:28.770 21:29:49 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:28.770 21:29:49 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:28.770 21:29:49 -- common/autotest_common.sh@10 -- # set +x 00:07:28.770 ************************************ 00:07:28.770 START TEST env_memory 00:07:28.770 ************************************ 00:07:28.770 21:29:49 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:07:29.028 00:07:29.028 00:07:29.028 CUnit - A unit testing framework for C - Version 2.1-3 00:07:29.028 http://cunit.sourceforge.net/ 00:07:29.028 00:07:29.028 00:07:29.028 Suite: memory 00:07:29.028 Test: alloc and free memory map ...[2024-12-06 21:29:49.325572] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:07:29.028 passed 00:07:29.028 Test: mem 
map translation ...[2024-12-06 21:29:49.389256] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:07:29.028 [2024-12-06 21:29:49.389332] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:07:29.028 [2024-12-06 21:29:49.389459] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:07:29.028 [2024-12-06 21:29:49.389518] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:07:29.028 passed 00:07:29.028 Test: mem map registration ...[2024-12-06 21:29:49.489690] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:07:29.028 [2024-12-06 21:29:49.489760] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:07:29.288 passed 00:07:29.288 Test: mem map adjacent registrations ...passed 00:07:29.288 00:07:29.288 Run Summary: Type Total Ran Passed Failed Inactive 00:07:29.288 suites 1 1 n/a 0 0 00:07:29.288 tests 4 4 4 0 0 00:07:29.288 asserts 152 152 152 0 n/a 00:07:29.288 00:07:29.288 Elapsed time = 0.354 seconds 00:07:29.288 00:07:29.288 real 0m0.385s 00:07:29.288 user 0m0.368s 00:07:29.288 sys 0m0.017s 00:07:29.288 21:29:49 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:29.288 21:29:49 -- common/autotest_common.sh@10 -- # set +x 00:07:29.288 ************************************ 00:07:29.288 END TEST env_memory 00:07:29.288 ************************************ 00:07:29.288 21:29:49 -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:07:29.288 21:29:49 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:29.288 21:29:49 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:29.288 21:29:49 -- common/autotest_common.sh@10 -- # set +x 00:07:29.288 ************************************ 00:07:29.288 START TEST env_vtophys 00:07:29.288 ************************************ 00:07:29.288 21:29:49 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:07:29.288 EAL: lib.eal log level changed from notice to debug 00:07:29.288 EAL: Detected lcore 0 as core 0 on socket 0 00:07:29.288 EAL: Detected lcore 1 as core 0 on socket 0 00:07:29.288 EAL: Detected lcore 2 as core 0 on socket 0 00:07:29.288 EAL: Detected lcore 3 as core 0 on socket 0 00:07:29.288 EAL: Detected lcore 4 as core 0 on socket 0 00:07:29.288 EAL: Detected lcore 5 as core 0 on socket 0 00:07:29.288 EAL: Detected lcore 6 as core 0 on socket 0 00:07:29.288 EAL: Detected lcore 7 as core 0 on socket 0 00:07:29.288 EAL: Detected lcore 8 as core 0 on socket 0 00:07:29.288 EAL: Detected lcore 9 as core 0 on socket 0 00:07:29.288 EAL: Maximum logical cores by configuration: 128 00:07:29.288 EAL: Detected CPU lcores: 10 00:07:29.288 EAL: Detected NUMA nodes: 1 00:07:29.288 EAL: Checking presence of .so 'librte_eal.so.24.0' 00:07:29.288 EAL: Checking presence of .so 'librte_eal.so.24' 00:07:29.288 EAL: Checking presence of .so 'librte_eal.so' 00:07:29.288 EAL: Detected static linkage of DPDK 00:07:29.288 EAL: No shared files mode enabled, IPC will be 
disabled 00:07:29.288 EAL: Selected IOVA mode 'PA' 00:07:29.547 EAL: Probing VFIO support... 00:07:29.547 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:07:29.547 EAL: VFIO modules not loaded, skipping VFIO support... 00:07:29.547 EAL: Ask a virtual area of 0x2e000 bytes 00:07:29.547 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:07:29.547 EAL: Setting up physically contiguous memory... 00:07:29.547 EAL: Setting maximum number of open files to 1048576 00:07:29.547 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:07:29.547 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:07:29.547 EAL: Ask a virtual area of 0x61000 bytes 00:07:29.547 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:07:29.547 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:07:29.547 EAL: Ask a virtual area of 0x400000000 bytes 00:07:29.547 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:07:29.547 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:07:29.547 EAL: Ask a virtual area of 0x61000 bytes 00:07:29.547 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:07:29.547 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:07:29.547 EAL: Ask a virtual area of 0x400000000 bytes 00:07:29.547 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:07:29.547 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:07:29.547 EAL: Ask a virtual area of 0x61000 bytes 00:07:29.547 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:07:29.547 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:07:29.547 EAL: Ask a virtual area of 0x400000000 bytes 00:07:29.547 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:07:29.547 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:07:29.547 EAL: Ask a virtual area of 0x61000 bytes 00:07:29.547 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:07:29.547 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:07:29.547 EAL: Ask a virtual area of 0x400000000 bytes 00:07:29.547 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:07:29.547 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:07:29.547 EAL: Hugepages will be freed exactly as allocated. 00:07:29.547 EAL: No shared files mode enabled, IPC is disabled 00:07:29.547 EAL: No shared files mode enabled, IPC is disabled 00:07:29.547 EAL: TSC frequency is ~2200000 KHz 00:07:29.547 EAL: Main lcore 0 is ready (tid=7608948dba80;cpuset=[0]) 00:07:29.547 EAL: Trying to obtain current memory policy. 00:07:29.547 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:29.547 EAL: Restoring previous memory policy: 0 00:07:29.547 EAL: request: mp_malloc_sync 00:07:29.547 EAL: No shared files mode enabled, IPC is disabled 00:07:29.547 EAL: Heap on socket 0 was expanded by 2MB 00:07:29.547 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:07:29.547 EAL: Mem event callback 'spdk:(nil)' registered 00:07:29.547 EAL: Module /sys/module/vfio_pci not found! error 2 (No such file or directory) 00:07:29.547 00:07:29.547 00:07:29.547 CUnit - A unit testing framework for C - Version 2.1-3 00:07:29.547 http://cunit.sourceforge.net/ 00:07:29.547 00:07:29.547 00:07:29.547 Suite: components_suite 00:07:29.547 Test: vtophys_malloc_test ...passed 00:07:29.547 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 
00:07:29.547 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:29.547 EAL: Restoring previous memory policy: 4 00:07:29.547 EAL: Calling mem event callback 'spdk:(nil)' 00:07:29.547 EAL: request: mp_malloc_sync 00:07:29.547 EAL: No shared files mode enabled, IPC is disabled 00:07:29.547 EAL: Heap on socket 0 was expanded by 4MB 00:07:29.547 EAL: Calling mem event callback 'spdk:(nil)' 00:07:29.547 EAL: request: mp_malloc_sync 00:07:29.547 EAL: No shared files mode enabled, IPC is disabled 00:07:29.547 EAL: Heap on socket 0 was shrunk by 4MB 00:07:29.547 EAL: Trying to obtain current memory policy. 00:07:29.547 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:29.547 EAL: Restoring previous memory policy: 4 00:07:29.547 EAL: Calling mem event callback 'spdk:(nil)' 00:07:29.547 EAL: request: mp_malloc_sync 00:07:29.547 EAL: No shared files mode enabled, IPC is disabled 00:07:29.547 EAL: Heap on socket 0 was expanded by 6MB 00:07:29.547 EAL: Calling mem event callback 'spdk:(nil)' 00:07:29.547 EAL: request: mp_malloc_sync 00:07:29.547 EAL: No shared files mode enabled, IPC is disabled 00:07:29.547 EAL: Heap on socket 0 was shrunk by 6MB 00:07:29.547 EAL: Trying to obtain current memory policy. 00:07:29.547 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:29.547 EAL: Restoring previous memory policy: 4 00:07:29.547 EAL: Calling mem event callback 'spdk:(nil)' 00:07:29.547 EAL: request: mp_malloc_sync 00:07:29.547 EAL: No shared files mode enabled, IPC is disabled 00:07:29.547 EAL: Heap on socket 0 was expanded by 10MB 00:07:29.547 EAL: Calling mem event callback 'spdk:(nil)' 00:07:29.547 EAL: request: mp_malloc_sync 00:07:29.547 EAL: No shared files mode enabled, IPC is disabled 00:07:29.547 EAL: Heap on socket 0 was shrunk by 10MB 00:07:29.547 EAL: Trying to obtain current memory policy. 00:07:29.547 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:29.547 EAL: Restoring previous memory policy: 4 00:07:29.547 EAL: Calling mem event callback 'spdk:(nil)' 00:07:29.547 EAL: request: mp_malloc_sync 00:07:29.547 EAL: No shared files mode enabled, IPC is disabled 00:07:29.547 EAL: Heap on socket 0 was expanded by 18MB 00:07:29.807 EAL: Calling mem event callback 'spdk:(nil)' 00:07:29.807 EAL: request: mp_malloc_sync 00:07:29.807 EAL: No shared files mode enabled, IPC is disabled 00:07:29.807 EAL: Heap on socket 0 was shrunk by 18MB 00:07:29.807 EAL: Trying to obtain current memory policy. 00:07:29.807 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:29.807 EAL: Restoring previous memory policy: 4 00:07:29.807 EAL: Calling mem event callback 'spdk:(nil)' 00:07:29.807 EAL: request: mp_malloc_sync 00:07:29.807 EAL: No shared files mode enabled, IPC is disabled 00:07:29.807 EAL: Heap on socket 0 was expanded by 34MB 00:07:29.807 EAL: Calling mem event callback 'spdk:(nil)' 00:07:29.807 EAL: request: mp_malloc_sync 00:07:29.807 EAL: No shared files mode enabled, IPC is disabled 00:07:29.807 EAL: Heap on socket 0 was shrunk by 34MB 00:07:29.807 EAL: Trying to obtain current memory policy. 
00:07:29.807 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:29.807 EAL: Restoring previous memory policy: 4 00:07:29.807 EAL: Calling mem event callback 'spdk:(nil)' 00:07:29.807 EAL: request: mp_malloc_sync 00:07:29.807 EAL: No shared files mode enabled, IPC is disabled 00:07:29.807 EAL: Heap on socket 0 was expanded by 66MB 00:07:29.807 EAL: Calling mem event callback 'spdk:(nil)' 00:07:29.807 EAL: request: mp_malloc_sync 00:07:29.807 EAL: No shared files mode enabled, IPC is disabled 00:07:29.807 EAL: Heap on socket 0 was shrunk by 66MB 00:07:30.068 EAL: Trying to obtain current memory policy. 00:07:30.068 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:30.068 EAL: Restoring previous memory policy: 4 00:07:30.068 EAL: Calling mem event callback 'spdk:(nil)' 00:07:30.068 EAL: request: mp_malloc_sync 00:07:30.068 EAL: No shared files mode enabled, IPC is disabled 00:07:30.068 EAL: Heap on socket 0 was expanded by 130MB 00:07:30.326 EAL: Calling mem event callback 'spdk:(nil)' 00:07:30.326 EAL: request: mp_malloc_sync 00:07:30.326 EAL: No shared files mode enabled, IPC is disabled 00:07:30.326 EAL: Heap on socket 0 was shrunk by 130MB 00:07:30.326 EAL: Trying to obtain current memory policy. 00:07:30.326 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:30.326 EAL: Restoring previous memory policy: 4 00:07:30.326 EAL: Calling mem event callback 'spdk:(nil)' 00:07:30.326 EAL: request: mp_malloc_sync 00:07:30.326 EAL: No shared files mode enabled, IPC is disabled 00:07:30.326 EAL: Heap on socket 0 was expanded by 258MB 00:07:30.893 EAL: Calling mem event callback 'spdk:(nil)' 00:07:30.893 EAL: request: mp_malloc_sync 00:07:30.893 EAL: No shared files mode enabled, IPC is disabled 00:07:30.893 EAL: Heap on socket 0 was shrunk by 258MB 00:07:31.152 EAL: Trying to obtain current memory policy. 00:07:31.152 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:31.152 EAL: Restoring previous memory policy: 4 00:07:31.152 EAL: Calling mem event callback 'spdk:(nil)' 00:07:31.152 EAL: request: mp_malloc_sync 00:07:31.152 EAL: No shared files mode enabled, IPC is disabled 00:07:31.152 EAL: Heap on socket 0 was expanded by 514MB 00:07:31.721 EAL: Calling mem event callback 'spdk:(nil)' 00:07:31.980 EAL: request: mp_malloc_sync 00:07:31.980 EAL: No shared files mode enabled, IPC is disabled 00:07:31.980 EAL: Heap on socket 0 was shrunk by 514MB 00:07:32.549 EAL: Trying to obtain current memory policy. 
00:07:32.549 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:32.549 EAL: Restoring previous memory policy: 4 00:07:32.549 EAL: Calling mem event callback 'spdk:(nil)' 00:07:32.549 EAL: request: mp_malloc_sync 00:07:32.549 EAL: No shared files mode enabled, IPC is disabled 00:07:32.549 EAL: Heap on socket 0 was expanded by 1026MB 00:07:33.927 EAL: Calling mem event callback 'spdk:(nil)' 00:07:34.186 EAL: request: mp_malloc_sync 00:07:34.186 EAL: No shared files mode enabled, IPC is disabled 00:07:34.186 EAL: Heap on socket 0 was shrunk by 1026MB 00:07:35.562 passed 00:07:35.562 00:07:35.562 Run Summary: Type Total Ran Passed Failed Inactive 00:07:35.562 suites 1 1 n/a 0 0 00:07:35.562 tests 2 2 2 0 0 00:07:35.562 asserts 5474 5474 5474 0 n/a 00:07:35.562 00:07:35.562 Elapsed time = 5.826 seconds 00:07:35.562 EAL: Calling mem event callback 'spdk:(nil)' 00:07:35.562 EAL: request: mp_malloc_sync 00:07:35.562 EAL: No shared files mode enabled, IPC is disabled 00:07:35.562 EAL: Heap on socket 0 was shrunk by 2MB 00:07:35.562 EAL: No shared files mode enabled, IPC is disabled 00:07:35.562 EAL: No shared files mode enabled, IPC is disabled 00:07:35.562 EAL: No shared files mode enabled, IPC is disabled 00:07:35.562 00:07:35.562 real 0m6.109s 00:07:35.562 user 0m5.323s 00:07:35.562 sys 0m0.654s 00:07:35.562 21:29:55 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:35.562 21:29:55 -- common/autotest_common.sh@10 -- # set +x 00:07:35.562 ************************************ 00:07:35.562 END TEST env_vtophys 00:07:35.562 ************************************ 00:07:35.562 21:29:55 -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:07:35.562 21:29:55 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:35.562 21:29:55 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:35.562 21:29:55 -- common/autotest_common.sh@10 -- # set +x 00:07:35.562 ************************************ 00:07:35.562 START TEST env_pci 00:07:35.562 ************************************ 00:07:35.562 21:29:55 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:07:35.562 00:07:35.562 00:07:35.562 CUnit - A unit testing framework for C - Version 2.1-3 00:07:35.562 http://cunit.sourceforge.net/ 00:07:35.562 00:07:35.562 00:07:35.562 Suite: pci 00:07:35.562 Test: pci_hook ...[2024-12-06 21:29:55.902104] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 60305 has claimed it 00:07:35.562 passed 00:07:35.562 00:07:35.562 EAL: Cannot find device (10000:00:01.0) 00:07:35.562 EAL: Failed to attach device on primary process 00:07:35.563 Run Summary: Type Total Ran Passed Failed Inactive 00:07:35.563 suites 1 1 n/a 0 0 00:07:35.563 tests 1 1 1 0 0 00:07:35.563 asserts 25 25 25 0 n/a 00:07:35.563 00:07:35.563 Elapsed time = 0.008 seconds 00:07:35.563 00:07:35.563 real 0m0.087s 00:07:35.563 user 0m0.047s 00:07:35.563 sys 0m0.040s 00:07:35.563 21:29:55 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:35.563 21:29:55 -- common/autotest_common.sh@10 -- # set +x 00:07:35.563 ************************************ 00:07:35.563 END TEST env_pci 00:07:35.563 ************************************ 00:07:35.563 21:29:55 -- env/env.sh@14 -- # argv='-c 0x1 ' 00:07:35.563 21:29:55 -- env/env.sh@15 -- # uname 00:07:35.563 21:29:55 -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:07:35.563 21:29:56 -- env/env.sh@22 -- # 
argv+=--base-virtaddr=0x200000000000 00:07:35.563 21:29:56 -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:07:35.563 21:29:56 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:07:35.563 21:29:56 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:35.563 21:29:56 -- common/autotest_common.sh@10 -- # set +x 00:07:35.563 ************************************ 00:07:35.563 START TEST env_dpdk_post_init 00:07:35.563 ************************************ 00:07:35.563 21:29:56 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:07:35.822 EAL: Detected CPU lcores: 10 00:07:35.822 EAL: Detected NUMA nodes: 1 00:07:35.822 EAL: Detected static linkage of DPDK 00:07:35.822 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:07:35.822 EAL: Selected IOVA mode 'PA' 00:07:35.822 TELEMETRY: No legacy callbacks, legacy socket not created 00:07:35.822 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:06.0 (socket -1) 00:07:35.822 Starting DPDK initialization... 00:07:35.822 Starting SPDK post initialization... 00:07:35.822 SPDK NVMe probe 00:07:35.822 Attaching to 0000:00:06.0 00:07:35.822 Attached to 0000:00:06.0 00:07:35.822 Cleaning up... 00:07:35.822 00:07:35.822 real 0m0.254s 00:07:35.822 user 0m0.071s 00:07:35.822 sys 0m0.084s 00:07:35.822 21:29:56 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:35.822 21:29:56 -- common/autotest_common.sh@10 -- # set +x 00:07:35.822 ************************************ 00:07:35.822 END TEST env_dpdk_post_init 00:07:35.822 ************************************ 00:07:35.822 21:29:56 -- env/env.sh@26 -- # uname 00:07:35.822 21:29:56 -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:07:35.822 21:29:56 -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:07:35.822 21:29:56 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:35.822 21:29:56 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:35.822 21:29:56 -- common/autotest_common.sh@10 -- # set +x 00:07:36.081 ************************************ 00:07:36.081 START TEST env_mem_callbacks 00:07:36.081 ************************************ 00:07:36.081 21:29:56 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:07:36.081 EAL: Detected CPU lcores: 10 00:07:36.081 EAL: Detected NUMA nodes: 1 00:07:36.081 EAL: Detected static linkage of DPDK 00:07:36.081 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:07:36.081 EAL: Selected IOVA mode 'PA' 00:07:36.081 TELEMETRY: No legacy callbacks, legacy socket not created 00:07:36.081 00:07:36.081 00:07:36.081 CUnit - A unit testing framework for C - Version 2.1-3 00:07:36.081 http://cunit.sourceforge.net/ 00:07:36.081 00:07:36.081 00:07:36.081 Suite: memory 00:07:36.081 Test: test ... 
00:07:36.081 register 0x200000200000 2097152 00:07:36.081 malloc 3145728 00:07:36.081 register 0x200000400000 4194304 00:07:36.081 buf 0x2000004fffc0 len 3145728 PASSED 00:07:36.081 malloc 64 00:07:36.081 buf 0x2000004ffec0 len 64 PASSED 00:07:36.081 malloc 4194304 00:07:36.081 register 0x200000800000 6291456 00:07:36.081 buf 0x2000009fffc0 len 4194304 PASSED 00:07:36.081 free 0x2000004fffc0 3145728 00:07:36.081 free 0x2000004ffec0 64 00:07:36.081 unregister 0x200000400000 4194304 PASSED 00:07:36.081 free 0x2000009fffc0 4194304 00:07:36.081 unregister 0x200000800000 6291456 PASSED 00:07:36.081 malloc 8388608 00:07:36.081 register 0x200000400000 10485760 00:07:36.081 buf 0x2000005fffc0 len 8388608 PASSED 00:07:36.081 free 0x2000005fffc0 8388608 00:07:36.081 unregister 0x200000400000 10485760 PASSED 00:07:36.081 passed 00:07:36.081 00:07:36.081 Run Summary: Type Total Ran Passed Failed Inactive 00:07:36.081 suites 1 1 n/a 0 0 00:07:36.081 tests 1 1 1 0 0 00:07:36.081 asserts 15 15 15 0 n/a 00:07:36.081 00:07:36.082 Elapsed time = 0.057 seconds 00:07:36.341 00:07:36.341 real 0m0.259s 00:07:36.341 user 0m0.095s 00:07:36.341 sys 0m0.065s 00:07:36.341 21:29:56 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:36.341 21:29:56 -- common/autotest_common.sh@10 -- # set +x 00:07:36.341 ************************************ 00:07:36.341 END TEST env_mem_callbacks 00:07:36.341 ************************************ 00:07:36.341 00:07:36.341 real 0m7.559s 00:07:36.341 user 0m6.086s 00:07:36.341 sys 0m1.149s 00:07:36.341 21:29:56 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:36.341 21:29:56 -- common/autotest_common.sh@10 -- # set +x 00:07:36.341 ************************************ 00:07:36.341 END TEST env 00:07:36.341 ************************************ 00:07:36.341 21:29:56 -- spdk/autotest.sh@163 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:07:36.341 21:29:56 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:36.341 21:29:56 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:36.341 21:29:56 -- common/autotest_common.sh@10 -- # set +x 00:07:36.341 ************************************ 00:07:36.341 START TEST rpc 00:07:36.341 ************************************ 00:07:36.341 21:29:56 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:07:36.341 * Looking for test storage... 
00:07:36.341 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:07:36.341 21:29:56 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:07:36.341 21:29:56 -- common/autotest_common.sh@1690 -- # lcov --version 00:07:36.341 21:29:56 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:07:36.656 21:29:56 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:07:36.656 21:29:56 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:07:36.656 21:29:56 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:07:36.656 21:29:56 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:07:36.656 21:29:56 -- scripts/common.sh@335 -- # IFS=.-: 00:07:36.656 21:29:56 -- scripts/common.sh@335 -- # read -ra ver1 00:07:36.656 21:29:56 -- scripts/common.sh@336 -- # IFS=.-: 00:07:36.656 21:29:56 -- scripts/common.sh@336 -- # read -ra ver2 00:07:36.656 21:29:56 -- scripts/common.sh@337 -- # local 'op=<' 00:07:36.656 21:29:56 -- scripts/common.sh@339 -- # ver1_l=2 00:07:36.656 21:29:56 -- scripts/common.sh@340 -- # ver2_l=1 00:07:36.656 21:29:56 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:07:36.656 21:29:56 -- scripts/common.sh@343 -- # case "$op" in 00:07:36.656 21:29:56 -- scripts/common.sh@344 -- # : 1 00:07:36.656 21:29:56 -- scripts/common.sh@363 -- # (( v = 0 )) 00:07:36.656 21:29:56 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:36.656 21:29:56 -- scripts/common.sh@364 -- # decimal 1 00:07:36.656 21:29:56 -- scripts/common.sh@352 -- # local d=1 00:07:36.656 21:29:56 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:36.656 21:29:56 -- scripts/common.sh@354 -- # echo 1 00:07:36.656 21:29:56 -- scripts/common.sh@364 -- # ver1[v]=1 00:07:36.656 21:29:56 -- scripts/common.sh@365 -- # decimal 2 00:07:36.656 21:29:56 -- scripts/common.sh@352 -- # local d=2 00:07:36.656 21:29:56 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:36.656 21:29:56 -- scripts/common.sh@354 -- # echo 2 00:07:36.656 21:29:56 -- scripts/common.sh@365 -- # ver2[v]=2 00:07:36.656 21:29:56 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:07:36.656 21:29:56 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:07:36.656 21:29:56 -- scripts/common.sh@367 -- # return 0 00:07:36.656 21:29:56 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:36.656 21:29:56 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:07:36.656 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:36.656 --rc genhtml_branch_coverage=1 00:07:36.657 --rc genhtml_function_coverage=1 00:07:36.657 --rc genhtml_legend=1 00:07:36.657 --rc geninfo_all_blocks=1 00:07:36.657 --rc geninfo_unexecuted_blocks=1 00:07:36.657 00:07:36.657 ' 00:07:36.657 21:29:56 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:07:36.657 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:36.657 --rc genhtml_branch_coverage=1 00:07:36.657 --rc genhtml_function_coverage=1 00:07:36.657 --rc genhtml_legend=1 00:07:36.657 --rc geninfo_all_blocks=1 00:07:36.657 --rc geninfo_unexecuted_blocks=1 00:07:36.657 00:07:36.657 ' 00:07:36.657 21:29:56 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:07:36.657 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:36.657 --rc genhtml_branch_coverage=1 00:07:36.657 --rc genhtml_function_coverage=1 00:07:36.657 --rc genhtml_legend=1 00:07:36.657 --rc geninfo_all_blocks=1 00:07:36.657 --rc geninfo_unexecuted_blocks=1 00:07:36.657 00:07:36.657 ' 00:07:36.657 21:29:56 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:07:36.657 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:36.657 --rc genhtml_branch_coverage=1 00:07:36.657 --rc genhtml_function_coverage=1 00:07:36.657 --rc genhtml_legend=1 00:07:36.657 --rc geninfo_all_blocks=1 00:07:36.657 --rc geninfo_unexecuted_blocks=1 00:07:36.657 00:07:36.657 ' 00:07:36.657 21:29:56 -- rpc/rpc.sh@65 -- # spdk_pid=60431 00:07:36.657 21:29:56 -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:07:36.657 21:29:56 -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:07:36.657 21:29:56 -- rpc/rpc.sh@67 -- # waitforlisten 60431 00:07:36.657 21:29:56 -- common/autotest_common.sh@829 -- # '[' -z 60431 ']' 00:07:36.657 21:29:56 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:36.657 21:29:56 -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:36.657 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:36.657 21:29:56 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:36.657 21:29:56 -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:36.657 21:29:56 -- common/autotest_common.sh@10 -- # set +x 00:07:36.657 [2024-12-06 21:29:56.941177] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:36.657 [2024-12-06 21:29:56.941343] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60431 ] 00:07:36.931 [2024-12-06 21:29:57.108655] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:36.931 [2024-12-06 21:29:57.271446] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:36.931 [2024-12-06 21:29:57.271687] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:07:36.931 [2024-12-06 21:29:57.271709] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 60431' to capture a snapshot of events at runtime. 00:07:36.931 [2024-12-06 21:29:57.271720] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid60431 for offline analysis/debug. 
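[editor's note] The app_setup_trace notices above mean spdk_tgt was started with the bdev tracepoint group enabled (the `-e bdev` flag on the rpc.sh@64 launch line; the same mask shows up later in trace_get_info as "tpoint_group_mask": "0x8"). A minimal sketch of acting on the log's own suggestion, assuming the target is still running (PID 60431 is specific to this run):

```bash
# Hedged sketch: the two capture paths that app_setup_trace advertises.
# Adjust the PID for your own target process.

# 1) Attach to the live target's trace shared memory:
spdk_trace -s spdk_tgt -p 60431

# 2) Or snapshot the shm file for offline decoding. Feeding the copy back
#    with -f follows current SPDK docs; treat the flag as an assumption
#    for the exact build used in this run.
cp /dev/shm/spdk_tgt_trace.pid60431 /tmp/spdk_tgt_trace.pid60431
spdk_trace -f /tmp/spdk_tgt_trace.pid60431
```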
00:07:36.931 [2024-12-06 21:29:57.271763] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:38.307 21:29:58 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:38.307 21:29:58 -- common/autotest_common.sh@862 -- # return 0 00:07:38.307 21:29:58 -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:07:38.307 21:29:58 -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:07:38.307 21:29:58 -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:07:38.307 21:29:58 -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:07:38.307 21:29:58 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:38.307 21:29:58 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:38.307 21:29:58 -- common/autotest_common.sh@10 -- # set +x 00:07:38.307 ************************************ 00:07:38.307 START TEST rpc_integrity 00:07:38.307 ************************************ 00:07:38.307 21:29:58 -- common/autotest_common.sh@1114 -- # rpc_integrity 00:07:38.308 21:29:58 -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:07:38.308 21:29:58 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:38.308 21:29:58 -- common/autotest_common.sh@10 -- # set +x 00:07:38.308 21:29:58 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:38.308 21:29:58 -- rpc/rpc.sh@12 -- # bdevs='[]' 00:07:38.308 21:29:58 -- rpc/rpc.sh@13 -- # jq length 00:07:38.308 21:29:58 -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:07:38.308 21:29:58 -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:07:38.308 21:29:58 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:38.308 21:29:58 -- common/autotest_common.sh@10 -- # set +x 00:07:38.308 21:29:58 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:38.308 21:29:58 -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:07:38.308 21:29:58 -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:07:38.308 21:29:58 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:38.308 21:29:58 -- common/autotest_common.sh@10 -- # set +x 00:07:38.308 21:29:58 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:38.308 21:29:58 -- rpc/rpc.sh@16 -- # bdevs='[ 00:07:38.308 { 00:07:38.308 "name": "Malloc0", 00:07:38.308 "aliases": [ 00:07:38.308 "95e6db3e-1bbc-4fbe-87f2-ab1325f6f5da" 00:07:38.308 ], 00:07:38.308 "product_name": "Malloc disk", 00:07:38.308 "block_size": 512, 00:07:38.308 "num_blocks": 16384, 00:07:38.308 "uuid": "95e6db3e-1bbc-4fbe-87f2-ab1325f6f5da", 00:07:38.308 "assigned_rate_limits": { 00:07:38.308 "rw_ios_per_sec": 0, 00:07:38.308 "rw_mbytes_per_sec": 0, 00:07:38.308 "r_mbytes_per_sec": 0, 00:07:38.308 "w_mbytes_per_sec": 0 00:07:38.308 }, 00:07:38.308 "claimed": false, 00:07:38.308 "zoned": false, 00:07:38.308 "supported_io_types": { 00:07:38.308 "read": true, 00:07:38.308 "write": true, 00:07:38.308 "unmap": true, 00:07:38.308 "write_zeroes": true, 00:07:38.308 "flush": true, 00:07:38.308 "reset": true, 00:07:38.308 "compare": false, 00:07:38.308 "compare_and_write": false, 00:07:38.308 "abort": true, 00:07:38.308 "nvme_admin": false, 00:07:38.308 "nvme_io": false 00:07:38.308 }, 00:07:38.308 "memory_domains": [ 00:07:38.308 { 00:07:38.308 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:38.308 
"dma_device_type": 2 00:07:38.308 } 00:07:38.308 ], 00:07:38.308 "driver_specific": {} 00:07:38.308 } 00:07:38.308 ]' 00:07:38.308 21:29:58 -- rpc/rpc.sh@17 -- # jq length 00:07:38.308 21:29:58 -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:07:38.308 21:29:58 -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:07:38.308 21:29:58 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:38.308 21:29:58 -- common/autotest_common.sh@10 -- # set +x 00:07:38.308 [2024-12-06 21:29:58.596744] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:07:38.308 [2024-12-06 21:29:58.596883] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:38.308 [2024-12-06 21:29:58.596910] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000006f80 00:07:38.308 [2024-12-06 21:29:58.596930] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:38.308 [2024-12-06 21:29:58.599516] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:38.308 [2024-12-06 21:29:58.599561] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:07:38.308 Passthru0 00:07:38.308 21:29:58 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:38.308 21:29:58 -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:07:38.308 21:29:58 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:38.308 21:29:58 -- common/autotest_common.sh@10 -- # set +x 00:07:38.308 21:29:58 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:38.308 21:29:58 -- rpc/rpc.sh@20 -- # bdevs='[ 00:07:38.308 { 00:07:38.308 "name": "Malloc0", 00:07:38.308 "aliases": [ 00:07:38.308 "95e6db3e-1bbc-4fbe-87f2-ab1325f6f5da" 00:07:38.308 ], 00:07:38.308 "product_name": "Malloc disk", 00:07:38.308 "block_size": 512, 00:07:38.308 "num_blocks": 16384, 00:07:38.308 "uuid": "95e6db3e-1bbc-4fbe-87f2-ab1325f6f5da", 00:07:38.308 "assigned_rate_limits": { 00:07:38.308 "rw_ios_per_sec": 0, 00:07:38.308 "rw_mbytes_per_sec": 0, 00:07:38.308 "r_mbytes_per_sec": 0, 00:07:38.308 "w_mbytes_per_sec": 0 00:07:38.308 }, 00:07:38.308 "claimed": true, 00:07:38.308 "claim_type": "exclusive_write", 00:07:38.308 "zoned": false, 00:07:38.308 "supported_io_types": { 00:07:38.308 "read": true, 00:07:38.308 "write": true, 00:07:38.308 "unmap": true, 00:07:38.308 "write_zeroes": true, 00:07:38.308 "flush": true, 00:07:38.308 "reset": true, 00:07:38.308 "compare": false, 00:07:38.308 "compare_and_write": false, 00:07:38.308 "abort": true, 00:07:38.308 "nvme_admin": false, 00:07:38.308 "nvme_io": false 00:07:38.308 }, 00:07:38.308 "memory_domains": [ 00:07:38.308 { 00:07:38.308 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:38.308 "dma_device_type": 2 00:07:38.308 } 00:07:38.308 ], 00:07:38.308 "driver_specific": {} 00:07:38.308 }, 00:07:38.308 { 00:07:38.308 "name": "Passthru0", 00:07:38.308 "aliases": [ 00:07:38.308 "a80da8fb-df51-5b9f-9cac-a60cee552aa2" 00:07:38.308 ], 00:07:38.308 "product_name": "passthru", 00:07:38.308 "block_size": 512, 00:07:38.308 "num_blocks": 16384, 00:07:38.308 "uuid": "a80da8fb-df51-5b9f-9cac-a60cee552aa2", 00:07:38.308 "assigned_rate_limits": { 00:07:38.308 "rw_ios_per_sec": 0, 00:07:38.308 "rw_mbytes_per_sec": 0, 00:07:38.308 "r_mbytes_per_sec": 0, 00:07:38.308 "w_mbytes_per_sec": 0 00:07:38.308 }, 00:07:38.308 "claimed": false, 00:07:38.308 "zoned": false, 00:07:38.308 "supported_io_types": { 00:07:38.308 "read": true, 00:07:38.308 "write": true, 00:07:38.308 "unmap": true, 00:07:38.308 
"write_zeroes": true, 00:07:38.308 "flush": true, 00:07:38.308 "reset": true, 00:07:38.308 "compare": false, 00:07:38.308 "compare_and_write": false, 00:07:38.308 "abort": true, 00:07:38.308 "nvme_admin": false, 00:07:38.308 "nvme_io": false 00:07:38.308 }, 00:07:38.308 "memory_domains": [ 00:07:38.308 { 00:07:38.308 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:38.308 "dma_device_type": 2 00:07:38.308 } 00:07:38.308 ], 00:07:38.308 "driver_specific": { 00:07:38.308 "passthru": { 00:07:38.308 "name": "Passthru0", 00:07:38.308 "base_bdev_name": "Malloc0" 00:07:38.308 } 00:07:38.308 } 00:07:38.308 } 00:07:38.308 ]' 00:07:38.308 21:29:58 -- rpc/rpc.sh@21 -- # jq length 00:07:38.308 21:29:58 -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:07:38.308 21:29:58 -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:07:38.308 21:29:58 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:38.308 21:29:58 -- common/autotest_common.sh@10 -- # set +x 00:07:38.308 21:29:58 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:38.308 21:29:58 -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:07:38.308 21:29:58 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:38.308 21:29:58 -- common/autotest_common.sh@10 -- # set +x 00:07:38.308 21:29:58 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:38.308 21:29:58 -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:07:38.308 21:29:58 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:38.308 21:29:58 -- common/autotest_common.sh@10 -- # set +x 00:07:38.308 21:29:58 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:38.308 21:29:58 -- rpc/rpc.sh@25 -- # bdevs='[]' 00:07:38.308 21:29:58 -- rpc/rpc.sh@26 -- # jq length 00:07:38.308 21:29:58 -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:07:38.308 00:07:38.308 real 0m0.165s 00:07:38.308 user 0m0.047s 00:07:38.308 sys 0m0.038s 00:07:38.308 21:29:58 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:38.308 ************************************ 00:07:38.308 END TEST rpc_integrity 00:07:38.308 21:29:58 -- common/autotest_common.sh@10 -- # set +x 00:07:38.308 ************************************ 00:07:38.308 21:29:58 -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:07:38.308 21:29:58 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:38.308 21:29:58 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:38.308 21:29:58 -- common/autotest_common.sh@10 -- # set +x 00:07:38.308 ************************************ 00:07:38.308 START TEST rpc_plugins 00:07:38.308 ************************************ 00:07:38.308 21:29:58 -- common/autotest_common.sh@1114 -- # rpc_plugins 00:07:38.308 21:29:58 -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:07:38.308 21:29:58 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:38.308 21:29:58 -- common/autotest_common.sh@10 -- # set +x 00:07:38.308 21:29:58 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:38.308 21:29:58 -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:07:38.308 21:29:58 -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:07:38.308 21:29:58 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:38.308 21:29:58 -- common/autotest_common.sh@10 -- # set +x 00:07:38.308 21:29:58 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:38.308 21:29:58 -- rpc/rpc.sh@31 -- # bdevs='[ 00:07:38.308 { 00:07:38.308 "name": "Malloc1", 00:07:38.308 "aliases": [ 00:07:38.308 "212b4156-4489-485f-ac25-fbd80c976e37" 00:07:38.308 ], 00:07:38.308 "product_name": "Malloc disk", 00:07:38.308 
"block_size": 4096, 00:07:38.308 "num_blocks": 256, 00:07:38.308 "uuid": "212b4156-4489-485f-ac25-fbd80c976e37", 00:07:38.308 "assigned_rate_limits": { 00:07:38.308 "rw_ios_per_sec": 0, 00:07:38.308 "rw_mbytes_per_sec": 0, 00:07:38.308 "r_mbytes_per_sec": 0, 00:07:38.308 "w_mbytes_per_sec": 0 00:07:38.308 }, 00:07:38.308 "claimed": false, 00:07:38.308 "zoned": false, 00:07:38.308 "supported_io_types": { 00:07:38.308 "read": true, 00:07:38.308 "write": true, 00:07:38.308 "unmap": true, 00:07:38.308 "write_zeroes": true, 00:07:38.308 "flush": true, 00:07:38.308 "reset": true, 00:07:38.308 "compare": false, 00:07:38.309 "compare_and_write": false, 00:07:38.309 "abort": true, 00:07:38.309 "nvme_admin": false, 00:07:38.309 "nvme_io": false 00:07:38.309 }, 00:07:38.309 "memory_domains": [ 00:07:38.309 { 00:07:38.309 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:38.309 "dma_device_type": 2 00:07:38.309 } 00:07:38.309 ], 00:07:38.309 "driver_specific": {} 00:07:38.309 } 00:07:38.309 ]' 00:07:38.309 21:29:58 -- rpc/rpc.sh@32 -- # jq length 00:07:38.309 21:29:58 -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:07:38.309 21:29:58 -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:07:38.309 21:29:58 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:38.309 21:29:58 -- common/autotest_common.sh@10 -- # set +x 00:07:38.309 21:29:58 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:38.309 21:29:58 -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:07:38.309 21:29:58 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:38.309 21:29:58 -- common/autotest_common.sh@10 -- # set +x 00:07:38.568 21:29:58 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:38.568 21:29:58 -- rpc/rpc.sh@35 -- # bdevs='[]' 00:07:38.568 21:29:58 -- rpc/rpc.sh@36 -- # jq length 00:07:38.568 21:29:58 -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:07:38.568 00:07:38.568 real 0m0.074s 00:07:38.568 user 0m0.026s 00:07:38.568 sys 0m0.015s 00:07:38.568 21:29:58 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:38.568 ************************************ 00:07:38.568 21:29:58 -- common/autotest_common.sh@10 -- # set +x 00:07:38.568 END TEST rpc_plugins 00:07:38.568 ************************************ 00:07:38.568 21:29:58 -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:07:38.568 21:29:58 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:38.568 21:29:58 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:38.568 21:29:58 -- common/autotest_common.sh@10 -- # set +x 00:07:38.568 ************************************ 00:07:38.568 START TEST rpc_trace_cmd_test 00:07:38.568 ************************************ 00:07:38.568 21:29:58 -- common/autotest_common.sh@1114 -- # rpc_trace_cmd_test 00:07:38.568 21:29:58 -- rpc/rpc.sh@40 -- # local info 00:07:38.568 21:29:58 -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:07:38.568 21:29:58 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:38.568 21:29:58 -- common/autotest_common.sh@10 -- # set +x 00:07:38.568 21:29:58 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:38.568 21:29:58 -- rpc/rpc.sh@42 -- # info='{ 00:07:38.568 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid60431", 00:07:38.568 "tpoint_group_mask": "0x8", 00:07:38.568 "iscsi_conn": { 00:07:38.568 "mask": "0x2", 00:07:38.568 "tpoint_mask": "0x0" 00:07:38.568 }, 00:07:38.568 "scsi": { 00:07:38.568 "mask": "0x4", 00:07:38.568 "tpoint_mask": "0x0" 00:07:38.568 }, 00:07:38.568 "bdev": { 00:07:38.568 "mask": "0x8", 00:07:38.568 "tpoint_mask": 
"0xffffffffffffffff" 00:07:38.568 }, 00:07:38.568 "nvmf_rdma": { 00:07:38.568 "mask": "0x10", 00:07:38.568 "tpoint_mask": "0x0" 00:07:38.568 }, 00:07:38.568 "nvmf_tcp": { 00:07:38.568 "mask": "0x20", 00:07:38.568 "tpoint_mask": "0x0" 00:07:38.568 }, 00:07:38.568 "ftl": { 00:07:38.568 "mask": "0x40", 00:07:38.568 "tpoint_mask": "0x0" 00:07:38.568 }, 00:07:38.568 "blobfs": { 00:07:38.568 "mask": "0x80", 00:07:38.568 "tpoint_mask": "0x0" 00:07:38.568 }, 00:07:38.568 "dsa": { 00:07:38.568 "mask": "0x200", 00:07:38.568 "tpoint_mask": "0x0" 00:07:38.568 }, 00:07:38.568 "thread": { 00:07:38.568 "mask": "0x400", 00:07:38.568 "tpoint_mask": "0x0" 00:07:38.568 }, 00:07:38.568 "nvme_pcie": { 00:07:38.568 "mask": "0x800", 00:07:38.568 "tpoint_mask": "0x0" 00:07:38.568 }, 00:07:38.568 "iaa": { 00:07:38.568 "mask": "0x1000", 00:07:38.568 "tpoint_mask": "0x0" 00:07:38.568 }, 00:07:38.568 "nvme_tcp": { 00:07:38.568 "mask": "0x2000", 00:07:38.568 "tpoint_mask": "0x0" 00:07:38.568 }, 00:07:38.568 "bdev_nvme": { 00:07:38.568 "mask": "0x4000", 00:07:38.568 "tpoint_mask": "0x0" 00:07:38.568 } 00:07:38.568 }' 00:07:38.568 21:29:58 -- rpc/rpc.sh@43 -- # jq length 00:07:38.568 21:29:58 -- rpc/rpc.sh@43 -- # '[' 15 -gt 2 ']' 00:07:38.568 21:29:58 -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:07:38.568 21:29:58 -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:07:38.568 21:29:58 -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:07:38.568 21:29:58 -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:07:38.568 21:29:58 -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:07:38.568 21:29:58 -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:07:38.568 21:29:58 -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:07:38.568 21:29:58 -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:07:38.568 00:07:38.568 real 0m0.060s 00:07:38.568 user 0m0.022s 00:07:38.568 sys 0m0.031s 00:07:38.569 21:29:58 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:38.569 ************************************ 00:07:38.569 END TEST rpc_trace_cmd_test 00:07:38.569 21:29:58 -- common/autotest_common.sh@10 -- # set +x 00:07:38.569 ************************************ 00:07:38.569 21:29:58 -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:07:38.569 21:29:58 -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:07:38.569 21:29:58 -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:07:38.569 21:29:58 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:38.569 21:29:58 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:38.569 21:29:58 -- common/autotest_common.sh@10 -- # set +x 00:07:38.569 ************************************ 00:07:38.569 START TEST rpc_daemon_integrity 00:07:38.569 ************************************ 00:07:38.569 21:29:58 -- common/autotest_common.sh@1114 -- # rpc_integrity 00:07:38.569 21:29:58 -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:07:38.569 21:29:58 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:38.569 21:29:58 -- common/autotest_common.sh@10 -- # set +x 00:07:38.569 21:29:58 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:38.569 21:29:58 -- rpc/rpc.sh@12 -- # bdevs='[]' 00:07:38.569 21:29:58 -- rpc/rpc.sh@13 -- # jq length 00:07:38.569 21:29:58 -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:07:38.569 21:29:58 -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:07:38.569 21:29:58 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:38.569 21:29:58 -- common/autotest_common.sh@10 -- # set +x 00:07:38.569 21:29:59 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:38.569 21:29:59 -- 
rpc/rpc.sh@15 -- # malloc=Malloc2 00:07:38.569 21:29:59 -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:07:38.569 21:29:59 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:38.569 21:29:59 -- common/autotest_common.sh@10 -- # set +x 00:07:38.569 21:29:59 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:38.569 21:29:59 -- rpc/rpc.sh@16 -- # bdevs='[ 00:07:38.569 { 00:07:38.569 "name": "Malloc2", 00:07:38.569 "aliases": [ 00:07:38.569 "81a4b945-ca36-4aaf-bb07-e76bb916d7c3" 00:07:38.569 ], 00:07:38.569 "product_name": "Malloc disk", 00:07:38.569 "block_size": 512, 00:07:38.569 "num_blocks": 16384, 00:07:38.569 "uuid": "81a4b945-ca36-4aaf-bb07-e76bb916d7c3", 00:07:38.569 "assigned_rate_limits": { 00:07:38.569 "rw_ios_per_sec": 0, 00:07:38.569 "rw_mbytes_per_sec": 0, 00:07:38.569 "r_mbytes_per_sec": 0, 00:07:38.569 "w_mbytes_per_sec": 0 00:07:38.569 }, 00:07:38.569 "claimed": false, 00:07:38.569 "zoned": false, 00:07:38.569 "supported_io_types": { 00:07:38.569 "read": true, 00:07:38.569 "write": true, 00:07:38.569 "unmap": true, 00:07:38.569 "write_zeroes": true, 00:07:38.569 "flush": true, 00:07:38.569 "reset": true, 00:07:38.569 "compare": false, 00:07:38.569 "compare_and_write": false, 00:07:38.569 "abort": true, 00:07:38.569 "nvme_admin": false, 00:07:38.569 "nvme_io": false 00:07:38.569 }, 00:07:38.569 "memory_domains": [ 00:07:38.569 { 00:07:38.569 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:38.569 "dma_device_type": 2 00:07:38.569 } 00:07:38.569 ], 00:07:38.569 "driver_specific": {} 00:07:38.569 } 00:07:38.569 ]' 00:07:38.569 21:29:59 -- rpc/rpc.sh@17 -- # jq length 00:07:38.569 21:29:59 -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:07:38.569 21:29:59 -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:07:38.569 21:29:59 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:38.569 21:29:59 -- common/autotest_common.sh@10 -- # set +x 00:07:38.569 [2024-12-06 21:29:59.037758] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:07:38.569 [2024-12-06 21:29:59.037895] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:38.569 [2024-12-06 21:29:59.037918] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000008180 00:07:38.569 [2024-12-06 21:29:59.037934] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:38.569 [2024-12-06 21:29:59.040565] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:38.569 [2024-12-06 21:29:59.040611] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:07:38.569 Passthru0 00:07:38.569 21:29:59 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:38.569 21:29:59 -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:07:38.569 21:29:59 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:38.569 21:29:59 -- common/autotest_common.sh@10 -- # set +x 00:07:38.828 21:29:59 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:38.828 21:29:59 -- rpc/rpc.sh@20 -- # bdevs='[ 00:07:38.828 { 00:07:38.828 "name": "Malloc2", 00:07:38.828 "aliases": [ 00:07:38.828 "81a4b945-ca36-4aaf-bb07-e76bb916d7c3" 00:07:38.828 ], 00:07:38.828 "product_name": "Malloc disk", 00:07:38.828 "block_size": 512, 00:07:38.828 "num_blocks": 16384, 00:07:38.828 "uuid": "81a4b945-ca36-4aaf-bb07-e76bb916d7c3", 00:07:38.828 "assigned_rate_limits": { 00:07:38.828 "rw_ios_per_sec": 0, 00:07:38.828 "rw_mbytes_per_sec": 0, 00:07:38.828 "r_mbytes_per_sec": 0, 00:07:38.828 
"w_mbytes_per_sec": 0 00:07:38.828 }, 00:07:38.828 "claimed": true, 00:07:38.828 "claim_type": "exclusive_write", 00:07:38.828 "zoned": false, 00:07:38.828 "supported_io_types": { 00:07:38.828 "read": true, 00:07:38.828 "write": true, 00:07:38.828 "unmap": true, 00:07:38.828 "write_zeroes": true, 00:07:38.828 "flush": true, 00:07:38.828 "reset": true, 00:07:38.828 "compare": false, 00:07:38.828 "compare_and_write": false, 00:07:38.828 "abort": true, 00:07:38.828 "nvme_admin": false, 00:07:38.828 "nvme_io": false 00:07:38.828 }, 00:07:38.828 "memory_domains": [ 00:07:38.828 { 00:07:38.828 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:38.828 "dma_device_type": 2 00:07:38.828 } 00:07:38.828 ], 00:07:38.828 "driver_specific": {} 00:07:38.828 }, 00:07:38.828 { 00:07:38.828 "name": "Passthru0", 00:07:38.828 "aliases": [ 00:07:38.828 "75304fe9-8e30-5c8d-8084-c7a5d2df03b8" 00:07:38.828 ], 00:07:38.828 "product_name": "passthru", 00:07:38.828 "block_size": 512, 00:07:38.828 "num_blocks": 16384, 00:07:38.828 "uuid": "75304fe9-8e30-5c8d-8084-c7a5d2df03b8", 00:07:38.828 "assigned_rate_limits": { 00:07:38.828 "rw_ios_per_sec": 0, 00:07:38.828 "rw_mbytes_per_sec": 0, 00:07:38.828 "r_mbytes_per_sec": 0, 00:07:38.828 "w_mbytes_per_sec": 0 00:07:38.828 }, 00:07:38.828 "claimed": false, 00:07:38.828 "zoned": false, 00:07:38.828 "supported_io_types": { 00:07:38.828 "read": true, 00:07:38.828 "write": true, 00:07:38.828 "unmap": true, 00:07:38.828 "write_zeroes": true, 00:07:38.828 "flush": true, 00:07:38.828 "reset": true, 00:07:38.828 "compare": false, 00:07:38.828 "compare_and_write": false, 00:07:38.828 "abort": true, 00:07:38.828 "nvme_admin": false, 00:07:38.828 "nvme_io": false 00:07:38.828 }, 00:07:38.828 "memory_domains": [ 00:07:38.828 { 00:07:38.828 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:38.828 "dma_device_type": 2 00:07:38.828 } 00:07:38.828 ], 00:07:38.828 "driver_specific": { 00:07:38.828 "passthru": { 00:07:38.828 "name": "Passthru0", 00:07:38.828 "base_bdev_name": "Malloc2" 00:07:38.828 } 00:07:38.828 } 00:07:38.828 } 00:07:38.828 ]' 00:07:38.828 21:29:59 -- rpc/rpc.sh@21 -- # jq length 00:07:38.828 21:29:59 -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:07:38.828 21:29:59 -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:07:38.829 21:29:59 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:38.829 21:29:59 -- common/autotest_common.sh@10 -- # set +x 00:07:38.829 21:29:59 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:38.829 21:29:59 -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:07:38.829 21:29:59 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:38.829 21:29:59 -- common/autotest_common.sh@10 -- # set +x 00:07:38.829 21:29:59 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:38.829 21:29:59 -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:07:38.829 21:29:59 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:38.829 21:29:59 -- common/autotest_common.sh@10 -- # set +x 00:07:38.829 21:29:59 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:38.829 21:29:59 -- rpc/rpc.sh@25 -- # bdevs='[]' 00:07:38.829 21:29:59 -- rpc/rpc.sh@26 -- # jq length 00:07:38.829 21:29:59 -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:07:38.829 00:07:38.829 real 0m0.169s 00:07:38.829 user 0m0.055s 00:07:38.829 sys 0m0.030s 00:07:38.829 21:29:59 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:38.829 ************************************ 00:07:38.829 END TEST rpc_daemon_integrity 00:07:38.829 21:29:59 -- common/autotest_common.sh@10 -- # set 
+x 00:07:38.829 ************************************ 00:07:38.829 21:29:59 -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:07:38.829 21:29:59 -- rpc/rpc.sh@84 -- # killprocess 60431 00:07:38.829 21:29:59 -- common/autotest_common.sh@936 -- # '[' -z 60431 ']' 00:07:38.829 21:29:59 -- common/autotest_common.sh@940 -- # kill -0 60431 00:07:38.829 21:29:59 -- common/autotest_common.sh@941 -- # uname 00:07:38.829 21:29:59 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:07:38.829 21:29:59 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 60431 00:07:38.829 21:29:59 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:07:38.829 21:29:59 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:07:38.829 killing process with pid 60431 00:07:38.829 21:29:59 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 60431' 00:07:38.829 21:29:59 -- common/autotest_common.sh@955 -- # kill 60431 00:07:38.829 21:29:59 -- common/autotest_common.sh@960 -- # wait 60431 00:07:40.732 00:07:40.732 real 0m4.474s 00:07:40.732 user 0m4.718s 00:07:40.732 sys 0m0.787s 00:07:40.732 21:30:01 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:40.732 21:30:01 -- common/autotest_common.sh@10 -- # set +x 00:07:40.732 ************************************ 00:07:40.732 END TEST rpc 00:07:40.732 ************************************ 00:07:40.732 21:30:01 -- spdk/autotest.sh@164 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:07:40.732 21:30:01 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:40.732 21:30:01 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:40.732 21:30:01 -- common/autotest_common.sh@10 -- # set +x 00:07:40.732 ************************************ 00:07:40.732 START TEST rpc_client 00:07:40.732 ************************************ 00:07:40.732 21:30:01 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:07:40.992 * Looking for test storage... 00:07:40.992 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:07:40.992 21:30:01 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:07:40.992 21:30:01 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:07:40.992 21:30:01 -- common/autotest_common.sh@1690 -- # lcov --version 00:07:40.992 21:30:01 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:07:40.992 21:30:01 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:07:40.992 21:30:01 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:07:40.992 21:30:01 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:07:40.992 21:30:01 -- scripts/common.sh@335 -- # IFS=.-: 00:07:40.992 21:30:01 -- scripts/common.sh@335 -- # read -ra ver1 00:07:40.992 21:30:01 -- scripts/common.sh@336 -- # IFS=.-: 00:07:40.992 21:30:01 -- scripts/common.sh@336 -- # read -ra ver2 00:07:40.992 21:30:01 -- scripts/common.sh@337 -- # local 'op=<' 00:07:40.992 21:30:01 -- scripts/common.sh@339 -- # ver1_l=2 00:07:40.992 21:30:01 -- scripts/common.sh@340 -- # ver2_l=1 00:07:40.992 21:30:01 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:07:40.992 21:30:01 -- scripts/common.sh@343 -- # case "$op" in 00:07:40.992 21:30:01 -- scripts/common.sh@344 -- # : 1 00:07:40.992 21:30:01 -- scripts/common.sh@363 -- # (( v = 0 )) 00:07:40.992 21:30:01 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:40.992 21:30:01 -- scripts/common.sh@364 -- # decimal 1 00:07:40.992 21:30:01 -- scripts/common.sh@352 -- # local d=1 00:07:40.992 21:30:01 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:40.992 21:30:01 -- scripts/common.sh@354 -- # echo 1 00:07:40.992 21:30:01 -- scripts/common.sh@364 -- # ver1[v]=1 00:07:40.992 21:30:01 -- scripts/common.sh@365 -- # decimal 2 00:07:40.992 21:30:01 -- scripts/common.sh@352 -- # local d=2 00:07:40.992 21:30:01 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:40.992 21:30:01 -- scripts/common.sh@354 -- # echo 2 00:07:40.992 21:30:01 -- scripts/common.sh@365 -- # ver2[v]=2 00:07:40.992 21:30:01 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:07:40.992 21:30:01 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:07:40.992 21:30:01 -- scripts/common.sh@367 -- # return 0 00:07:40.992 21:30:01 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:40.992 21:30:01 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:07:40.992 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:40.992 --rc genhtml_branch_coverage=1 00:07:40.992 --rc genhtml_function_coverage=1 00:07:40.992 --rc genhtml_legend=1 00:07:40.992 --rc geninfo_all_blocks=1 00:07:40.992 --rc geninfo_unexecuted_blocks=1 00:07:40.992 00:07:40.992 ' 00:07:40.992 21:30:01 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:07:40.992 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:40.992 --rc genhtml_branch_coverage=1 00:07:40.992 --rc genhtml_function_coverage=1 00:07:40.992 --rc genhtml_legend=1 00:07:40.992 --rc geninfo_all_blocks=1 00:07:40.992 --rc geninfo_unexecuted_blocks=1 00:07:40.992 00:07:40.992 ' 00:07:40.992 21:30:01 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:07:40.992 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:40.992 --rc genhtml_branch_coverage=1 00:07:40.992 --rc genhtml_function_coverage=1 00:07:40.992 --rc genhtml_legend=1 00:07:40.992 --rc geninfo_all_blocks=1 00:07:40.992 --rc geninfo_unexecuted_blocks=1 00:07:40.992 00:07:40.992 ' 00:07:40.992 21:30:01 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:07:40.992 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:40.992 --rc genhtml_branch_coverage=1 00:07:40.992 --rc genhtml_function_coverage=1 00:07:40.992 --rc genhtml_legend=1 00:07:40.992 --rc geninfo_all_blocks=1 00:07:40.992 --rc geninfo_unexecuted_blocks=1 00:07:40.992 00:07:40.992 ' 00:07:40.992 21:30:01 -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:07:40.992 OK 00:07:40.992 21:30:01 -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:07:40.992 00:07:40.992 real 0m0.249s 00:07:40.992 user 0m0.139s 00:07:40.992 sys 0m0.124s 00:07:40.992 21:30:01 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:40.992 21:30:01 -- common/autotest_common.sh@10 -- # set +x 00:07:40.992 ************************************ 00:07:40.992 END TEST rpc_client 00:07:40.992 ************************************ 00:07:40.992 21:30:01 -- spdk/autotest.sh@165 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:07:40.992 21:30:01 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:40.992 21:30:01 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:40.992 21:30:01 -- common/autotest_common.sh@10 -- # set +x 00:07:41.251 ************************************ 00:07:41.251 START TEST 
json_config 00:07:41.251 ************************************ 00:07:41.251 21:30:01 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:07:41.251 21:30:01 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:07:41.251 21:30:01 -- common/autotest_common.sh@1690 -- # lcov --version 00:07:41.251 21:30:01 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:07:41.251 21:30:01 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:07:41.251 21:30:01 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:07:41.251 21:30:01 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:07:41.251 21:30:01 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:07:41.251 21:30:01 -- scripts/common.sh@335 -- # IFS=.-: 00:07:41.251 21:30:01 -- scripts/common.sh@335 -- # read -ra ver1 00:07:41.251 21:30:01 -- scripts/common.sh@336 -- # IFS=.-: 00:07:41.251 21:30:01 -- scripts/common.sh@336 -- # read -ra ver2 00:07:41.251 21:30:01 -- scripts/common.sh@337 -- # local 'op=<' 00:07:41.251 21:30:01 -- scripts/common.sh@339 -- # ver1_l=2 00:07:41.251 21:30:01 -- scripts/common.sh@340 -- # ver2_l=1 00:07:41.251 21:30:01 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:07:41.251 21:30:01 -- scripts/common.sh@343 -- # case "$op" in 00:07:41.251 21:30:01 -- scripts/common.sh@344 -- # : 1 00:07:41.251 21:30:01 -- scripts/common.sh@363 -- # (( v = 0 )) 00:07:41.251 21:30:01 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:41.251 21:30:01 -- scripts/common.sh@364 -- # decimal 1 00:07:41.251 21:30:01 -- scripts/common.sh@352 -- # local d=1 00:07:41.251 21:30:01 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:41.251 21:30:01 -- scripts/common.sh@354 -- # echo 1 00:07:41.251 21:30:01 -- scripts/common.sh@364 -- # ver1[v]=1 00:07:41.251 21:30:01 -- scripts/common.sh@365 -- # decimal 2 00:07:41.251 21:30:01 -- scripts/common.sh@352 -- # local d=2 00:07:41.251 21:30:01 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:41.251 21:30:01 -- scripts/common.sh@354 -- # echo 2 00:07:41.251 21:30:01 -- scripts/common.sh@365 -- # ver2[v]=2 00:07:41.251 21:30:01 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:07:41.251 21:30:01 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:07:41.251 21:30:01 -- scripts/common.sh@367 -- # return 0 00:07:41.251 21:30:01 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:41.251 21:30:01 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:07:41.251 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:41.251 --rc genhtml_branch_coverage=1 00:07:41.251 --rc genhtml_function_coverage=1 00:07:41.251 --rc genhtml_legend=1 00:07:41.251 --rc geninfo_all_blocks=1 00:07:41.251 --rc geninfo_unexecuted_blocks=1 00:07:41.251 00:07:41.251 ' 00:07:41.251 21:30:01 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:07:41.251 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:41.251 --rc genhtml_branch_coverage=1 00:07:41.251 --rc genhtml_function_coverage=1 00:07:41.251 --rc genhtml_legend=1 00:07:41.251 --rc geninfo_all_blocks=1 00:07:41.251 --rc geninfo_unexecuted_blocks=1 00:07:41.251 00:07:41.251 ' 00:07:41.251 21:30:01 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:07:41.251 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:41.251 --rc genhtml_branch_coverage=1 00:07:41.251 --rc genhtml_function_coverage=1 00:07:41.251 --rc genhtml_legend=1 00:07:41.251 --rc 
geninfo_all_blocks=1 00:07:41.251 --rc geninfo_unexecuted_blocks=1 00:07:41.251 00:07:41.251 ' 00:07:41.251 21:30:01 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:07:41.251 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:41.251 --rc genhtml_branch_coverage=1 00:07:41.251 --rc genhtml_function_coverage=1 00:07:41.251 --rc genhtml_legend=1 00:07:41.252 --rc geninfo_all_blocks=1 00:07:41.252 --rc geninfo_unexecuted_blocks=1 00:07:41.252 00:07:41.252 ' 00:07:41.252 21:30:01 -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:07:41.252 21:30:01 -- nvmf/common.sh@7 -- # uname -s 00:07:41.252 21:30:01 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:41.252 21:30:01 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:41.252 21:30:01 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:41.252 21:30:01 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:41.252 21:30:01 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:41.252 21:30:01 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:41.252 21:30:01 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:41.252 21:30:01 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:41.252 21:30:01 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:41.252 21:30:01 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:41.252 21:30:01 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8e1d9cdb-6b0f-4e53-bec5-c2866d201ab4 00:07:41.252 21:30:01 -- nvmf/common.sh@18 -- # NVME_HOSTID=8e1d9cdb-6b0f-4e53-bec5-c2866d201ab4 00:07:41.252 21:30:01 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:41.252 21:30:01 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:41.252 21:30:01 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:07:41.252 21:30:01 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:41.252 21:30:01 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:41.252 21:30:01 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:41.252 21:30:01 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:41.252 21:30:01 -- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:07:41.252 21:30:01 -- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:07:41.252 21:30:01 -- paths/export.sh@4 -- # 
PATH=/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:07:41.252 21:30:01 -- paths/export.sh@5 -- # PATH=/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:07:41.252 21:30:01 -- paths/export.sh@6 -- # export PATH 00:07:41.252 21:30:01 -- paths/export.sh@7 -- # echo /opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:07:41.252 21:30:01 -- nvmf/common.sh@46 -- # : 0 00:07:41.252 21:30:01 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:07:41.252 21:30:01 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:07:41.252 21:30:01 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:07:41.252 21:30:01 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:41.252 21:30:01 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:41.252 21:30:01 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:07:41.252 21:30:01 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:07:41.252 21:30:01 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:07:41.252 21:30:01 -- json_config/json_config.sh@10 -- # [[ 0 -eq 1 ]] 00:07:41.252 21:30:01 -- json_config/json_config.sh@14 -- # [[ 0 -ne 1 ]] 00:07:41.252 21:30:01 -- json_config/json_config.sh@14 -- # [[ 0 -eq 1 ]] 00:07:41.252 21:30:01 -- json_config/json_config.sh@25 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:07:41.252 21:30:01 -- json_config/json_config.sh@30 -- # app_pid=(['target']='' ['initiator']='') 00:07:41.252 21:30:01 -- json_config/json_config.sh@30 -- # declare -A app_pid 00:07:41.252 21:30:01 -- json_config/json_config.sh@31 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:07:41.252 21:30:01 -- json_config/json_config.sh@31 -- # declare -A app_socket 00:07:41.252 21:30:01 -- json_config/json_config.sh@32 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:07:41.252 21:30:01 -- json_config/json_config.sh@32 -- # declare -A app_params 00:07:41.252 21:30:01 -- json_config/json_config.sh@33 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/spdk_tgt_config.json' 
['initiator']='/home/vagrant/spdk_repo/spdk/spdk_initiator_config.json') 00:07:41.252 21:30:01 -- json_config/json_config.sh@33 -- # declare -A configs_path 00:07:41.252 21:30:01 -- json_config/json_config.sh@43 -- # last_event_id=0 00:07:41.252 21:30:01 -- json_config/json_config.sh@418 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:07:41.252 INFO: JSON configuration test init 00:07:41.252 21:30:01 -- json_config/json_config.sh@419 -- # echo 'INFO: JSON configuration test init' 00:07:41.252 21:30:01 -- json_config/json_config.sh@420 -- # json_config_test_init 00:07:41.252 21:30:01 -- json_config/json_config.sh@315 -- # timing_enter json_config_test_init 00:07:41.252 21:30:01 -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:41.252 21:30:01 -- common/autotest_common.sh@10 -- # set +x 00:07:41.252 21:30:01 -- json_config/json_config.sh@316 -- # timing_enter json_config_setup_target 00:07:41.252 21:30:01 -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:41.252 21:30:01 -- common/autotest_common.sh@10 -- # set +x 00:07:41.252 21:30:01 -- json_config/json_config.sh@318 -- # json_config_test_start_app target --wait-for-rpc 00:07:41.252 21:30:01 -- json_config/json_config.sh@98 -- # local app=target 00:07:41.252 21:30:01 -- json_config/json_config.sh@99 -- # shift 00:07:41.252 21:30:01 -- json_config/json_config.sh@101 -- # [[ -n 22 ]] 00:07:41.252 21:30:01 -- json_config/json_config.sh@102 -- # [[ -z '' ]] 00:07:41.252 21:30:01 -- json_config/json_config.sh@104 -- # local app_extra_params= 00:07:41.252 21:30:01 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:07:41.252 21:30:01 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:07:41.252 21:30:01 -- json_config/json_config.sh@111 -- # app_pid[$app]=60709 00:07:41.252 Waiting for target to run... 00:07:41.252 21:30:01 -- json_config/json_config.sh@113 -- # echo 'Waiting for target to run...' 00:07:41.252 21:30:01 -- json_config/json_config.sh@114 -- # waitforlisten 60709 /var/tmp/spdk_tgt.sock 00:07:41.252 21:30:01 -- common/autotest_common.sh@829 -- # '[' -z 60709 ']' 00:07:41.252 21:30:01 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:07:41.252 21:30:01 -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:41.252 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:07:41.252 21:30:01 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:07:41.252 21:30:01 -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:41.252 21:30:01 -- common/autotest_common.sh@10 -- # set +x 00:07:41.252 21:30:01 -- json_config/json_config.sh@110 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:07:41.252 [2024-12-06 21:30:01.737069] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:07:41.252 [2024-12-06 21:30:01.737248] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60709 ] 00:07:41.821 [2024-12-06 21:30:02.092948] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:41.821 [2024-12-06 21:30:02.273890] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:41.821 [2024-12-06 21:30:02.274167] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:42.389 21:30:02 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:42.389 21:30:02 -- common/autotest_common.sh@862 -- # return 0 00:07:42.389 00:07:42.389 21:30:02 -- json_config/json_config.sh@115 -- # echo '' 00:07:42.389 21:30:02 -- json_config/json_config.sh@322 -- # create_accel_config 00:07:42.389 21:30:02 -- json_config/json_config.sh@146 -- # timing_enter create_accel_config 00:07:42.389 21:30:02 -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:42.389 21:30:02 -- common/autotest_common.sh@10 -- # set +x 00:07:42.389 21:30:02 -- json_config/json_config.sh@148 -- # [[ 0 -eq 1 ]] 00:07:42.389 21:30:02 -- json_config/json_config.sh@154 -- # timing_exit create_accel_config 00:07:42.389 21:30:02 -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:42.389 21:30:02 -- common/autotest_common.sh@10 -- # set +x 00:07:42.389 21:30:02 -- json_config/json_config.sh@326 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:07:42.389 21:30:02 -- json_config/json_config.sh@327 -- # tgt_rpc load_config 00:07:42.389 21:30:02 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:07:43.325 21:30:03 -- json_config/json_config.sh@329 -- # tgt_check_notification_types 00:07:43.325 21:30:03 -- json_config/json_config.sh@46 -- # timing_enter tgt_check_notification_types 00:07:43.325 21:30:03 -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:43.325 21:30:03 -- common/autotest_common.sh@10 -- # set +x 00:07:43.325 21:30:03 -- json_config/json_config.sh@48 -- # local ret=0 00:07:43.325 21:30:03 -- json_config/json_config.sh@49 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:07:43.325 21:30:03 -- json_config/json_config.sh@49 -- # local enabled_types 00:07:43.325 21:30:03 -- json_config/json_config.sh@51 -- # tgt_rpc notify_get_types 00:07:43.325 21:30:03 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:07:43.325 21:30:03 -- json_config/json_config.sh@51 -- # jq -r '.[]' 00:07:43.583 21:30:03 -- json_config/json_config.sh@51 -- # get_types=('bdev_register' 'bdev_unregister') 00:07:43.583 21:30:03 -- json_config/json_config.sh@51 -- # local get_types 00:07:43.583 21:30:03 -- json_config/json_config.sh@52 -- # [[ bdev_register bdev_unregister != \b\d\e\v\_\r\e\g\i\s\t\e\r\ \b\d\e\v\_\u\n\r\e\g\i\s\t\e\r ]] 00:07:43.583 21:30:03 -- json_config/json_config.sh@57 -- # timing_exit tgt_check_notification_types 00:07:43.583 21:30:03 -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:43.583 21:30:03 -- common/autotest_common.sh@10 -- # set +x 00:07:43.583 21:30:03 -- json_config/json_config.sh@58 -- # return 0 00:07:43.583 21:30:03 -- json_config/json_config.sh@331 -- # [[ 1 -eq 1 ]] 00:07:43.583 21:30:03 -- json_config/json_config.sh@332 -- # 
create_bdev_subsystem_config 00:07:43.583 21:30:03 -- json_config/json_config.sh@158 -- # timing_enter create_bdev_subsystem_config 00:07:43.583 21:30:03 -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:43.583 21:30:03 -- common/autotest_common.sh@10 -- # set +x 00:07:43.583 21:30:03 -- json_config/json_config.sh@160 -- # expected_notifications=() 00:07:43.583 21:30:03 -- json_config/json_config.sh@160 -- # local expected_notifications 00:07:43.583 21:30:03 -- json_config/json_config.sh@164 -- # expected_notifications+=($(get_notifications)) 00:07:43.583 21:30:03 -- json_config/json_config.sh@164 -- # get_notifications 00:07:43.583 21:30:03 -- json_config/json_config.sh@62 -- # local ev_type ev_ctx event_id 00:07:43.583 21:30:03 -- json_config/json_config.sh@64 -- # IFS=: 00:07:43.583 21:30:03 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:07:43.583 21:30:03 -- json_config/json_config.sh@61 -- # tgt_rpc notify_get_notifications -i 0 00:07:43.583 21:30:03 -- json_config/json_config.sh@61 -- # jq -r '.[] | "\(.type):\(.ctx):\(.id)"' 00:07:43.583 21:30:03 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_notifications -i 0 00:07:43.843 21:30:04 -- json_config/json_config.sh@65 -- # echo bdev_register:Nvme0n1 00:07:43.843 21:30:04 -- json_config/json_config.sh@64 -- # IFS=: 00:07:43.843 21:30:04 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:07:43.843 21:30:04 -- json_config/json_config.sh@166 -- # [[ 1 -eq 1 ]] 00:07:43.843 21:30:04 -- json_config/json_config.sh@167 -- # local lvol_store_base_bdev=Nvme0n1 00:07:43.843 21:30:04 -- json_config/json_config.sh@169 -- # tgt_rpc bdev_split_create Nvme0n1 2 00:07:43.843 21:30:04 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_split_create Nvme0n1 2 00:07:44.101 Nvme0n1p0 Nvme0n1p1 00:07:44.101 21:30:04 -- json_config/json_config.sh@170 -- # tgt_rpc bdev_split_create Malloc0 3 00:07:44.101 21:30:04 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_split_create Malloc0 3 00:07:44.359 [2024-12-06 21:30:04.702355] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:07:44.359 [2024-12-06 21:30:04.702514] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:07:44.359 00:07:44.359 21:30:04 -- json_config/json_config.sh@171 -- # tgt_rpc bdev_malloc_create 8 4096 --name Malloc3 00:07:44.359 21:30:04 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 4096 --name Malloc3 00:07:44.618 Malloc3 00:07:44.618 21:30:04 -- json_config/json_config.sh@172 -- # tgt_rpc bdev_passthru_create -b Malloc3 -p PTBdevFromMalloc3 00:07:44.618 21:30:04 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_passthru_create -b Malloc3 -p PTBdevFromMalloc3 00:07:44.875 [2024-12-06 21:30:05.196046] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:07:44.875 [2024-12-06 21:30:05.196160] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:44.875 [2024-12-06 21:30:05.196194] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000007e80 00:07:44.875 [2024-12-06 21:30:05.196213] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev 
claimed 00:07:44.875 [2024-12-06 21:30:05.199312] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:44.875 [2024-12-06 21:30:05.199359] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: PTBdevFromMalloc3 00:07:44.875 PTBdevFromMalloc3 00:07:44.875 21:30:05 -- json_config/json_config.sh@174 -- # tgt_rpc bdev_null_create Null0 32 512 00:07:44.875 21:30:05 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_null_create Null0 32 512 00:07:45.132 Null0 00:07:45.132 21:30:05 -- json_config/json_config.sh@176 -- # tgt_rpc bdev_malloc_create 32 512 --name Malloc0 00:07:45.132 21:30:05 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 32 512 --name Malloc0 00:07:45.132 Malloc0 00:07:45.132 21:30:05 -- json_config/json_config.sh@177 -- # tgt_rpc bdev_malloc_create 16 4096 --name Malloc1 00:07:45.132 21:30:05 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 16 4096 --name Malloc1 00:07:45.390 Malloc1 00:07:45.390 21:30:05 -- json_config/json_config.sh@190 -- # expected_notifications+=(bdev_register:${lvol_store_base_bdev}p1 bdev_register:${lvol_store_base_bdev}p0 bdev_register:Malloc3 bdev_register:PTBdevFromMalloc3 bdev_register:Null0 bdev_register:Malloc0 bdev_register:Malloc0p2 bdev_register:Malloc0p1 bdev_register:Malloc0p0 bdev_register:Malloc1) 00:07:45.390 21:30:05 -- json_config/json_config.sh@193 -- # dd if=/dev/zero of=/sample_aio bs=1024 count=102400 00:07:45.647 102400+0 records in 00:07:45.647 102400+0 records out 00:07:45.647 104857600 bytes (105 MB, 100 MiB) copied, 0.226632 s, 463 MB/s 00:07:45.647 21:30:06 -- json_config/json_config.sh@194 -- # tgt_rpc bdev_aio_create /sample_aio aio_disk 1024 00:07:45.647 21:30:06 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_aio_create /sample_aio aio_disk 1024 00:07:45.904 aio_disk 00:07:45.904 21:30:06 -- json_config/json_config.sh@195 -- # expected_notifications+=(bdev_register:aio_disk) 00:07:45.904 21:30:06 -- json_config/json_config.sh@200 -- # tgt_rpc bdev_lvol_create_lvstore -c 1048576 Nvme0n1p0 lvs_test 00:07:45.904 21:30:06 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_create_lvstore -c 1048576 Nvme0n1p0 lvs_test 00:07:46.163 0eb551df-174b-40a4-abed-a7a749393f9f 00:07:46.163 21:30:06 -- json_config/json_config.sh@207 -- # expected_notifications+=("bdev_register:$(tgt_rpc bdev_lvol_create -l lvs_test lvol0 32)" "bdev_register:$(tgt_rpc bdev_lvol_create -l lvs_test -t lvol1 32)" "bdev_register:$(tgt_rpc bdev_lvol_snapshot lvs_test/lvol0 snapshot0)" "bdev_register:$(tgt_rpc bdev_lvol_clone lvs_test/snapshot0 clone0)") 00:07:46.163 21:30:06 -- json_config/json_config.sh@207 -- # tgt_rpc bdev_lvol_create -l lvs_test lvol0 32 00:07:46.163 21:30:06 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_create -l lvs_test lvol0 32 00:07:46.422 21:30:06 -- json_config/json_config.sh@207 -- # tgt_rpc bdev_lvol_create -l lvs_test -t lvol1 32 00:07:46.422 21:30:06 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_create -l lvs_test -t lvol1 32 00:07:46.681 21:30:06 -- json_config/json_config.sh@207 -- # tgt_rpc 
bdev_lvol_snapshot lvs_test/lvol0 snapshot0 00:07:46.681 21:30:06 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_snapshot lvs_test/lvol0 snapshot0 00:07:46.941 21:30:07 -- json_config/json_config.sh@207 -- # tgt_rpc bdev_lvol_clone lvs_test/snapshot0 clone0 00:07:46.941 21:30:07 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_clone lvs_test/snapshot0 clone0 00:07:46.941 21:30:07 -- json_config/json_config.sh@210 -- # [[ 0 -eq 1 ]] 00:07:46.941 21:30:07 -- json_config/json_config.sh@225 -- # [[ 0 -eq 1 ]] 00:07:46.941 21:30:07 -- json_config/json_config.sh@231 -- # tgt_check_notifications bdev_register:Nvme0n1 bdev_register:Nvme0n1p1 bdev_register:Nvme0n1p0 bdev_register:Malloc3 bdev_register:PTBdevFromMalloc3 bdev_register:Null0 bdev_register:Malloc0 bdev_register:Malloc0p2 bdev_register:Malloc0p1 bdev_register:Malloc0p0 bdev_register:Malloc1 bdev_register:aio_disk bdev_register:93a0bd53-f841-4844-8b57-432fbd6ef44e bdev_register:66fdc491-fbc8-4ab0-957d-056de362666e bdev_register:329de95b-c43e-44bc-b9f1-b0bb4f442553 bdev_register:e1aa15b6-aac5-4a79-b561-bea547d1cbd9 00:07:46.941 21:30:07 -- json_config/json_config.sh@70 -- # local events_to_check 00:07:46.941 21:30:07 -- json_config/json_config.sh@71 -- # local recorded_events 00:07:46.941 21:30:07 -- json_config/json_config.sh@74 -- # events_to_check=($(printf '%s\n' "$@" | sort)) 00:07:46.941 21:30:07 -- json_config/json_config.sh@74 -- # sort 00:07:46.941 21:30:07 -- json_config/json_config.sh@74 -- # printf '%s\n' bdev_register:Nvme0n1 bdev_register:Nvme0n1p1 bdev_register:Nvme0n1p0 bdev_register:Malloc3 bdev_register:PTBdevFromMalloc3 bdev_register:Null0 bdev_register:Malloc0 bdev_register:Malloc0p2 bdev_register:Malloc0p1 bdev_register:Malloc0p0 bdev_register:Malloc1 bdev_register:aio_disk bdev_register:93a0bd53-f841-4844-8b57-432fbd6ef44e bdev_register:66fdc491-fbc8-4ab0-957d-056de362666e bdev_register:329de95b-c43e-44bc-b9f1-b0bb4f442553 bdev_register:e1aa15b6-aac5-4a79-b561-bea547d1cbd9 00:07:46.941 21:30:07 -- json_config/json_config.sh@75 -- # recorded_events=($(get_notifications | sort)) 00:07:46.941 21:30:07 -- json_config/json_config.sh@75 -- # get_notifications 00:07:46.941 21:30:07 -- json_config/json_config.sh@75 -- # sort 00:07:46.941 21:30:07 -- json_config/json_config.sh@62 -- # local ev_type ev_ctx event_id 00:07:46.941 21:30:07 -- json_config/json_config.sh@64 -- # IFS=: 00:07:46.941 21:30:07 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:07:46.941 21:30:07 -- json_config/json_config.sh@61 -- # tgt_rpc notify_get_notifications -i 0 00:07:46.941 21:30:07 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_notifications -i 0 00:07:46.941 21:30:07 -- json_config/json_config.sh@61 -- # jq -r '.[] | "\(.type):\(.ctx):\(.id)"' 00:07:47.201 21:30:07 -- json_config/json_config.sh@65 -- # echo bdev_register:Nvme0n1 00:07:47.201 21:30:07 -- json_config/json_config.sh@64 -- # IFS=: 00:07:47.201 21:30:07 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:07:47.201 21:30:07 -- json_config/json_config.sh@65 -- # echo bdev_register:Nvme0n1p1 00:07:47.201 21:30:07 -- json_config/json_config.sh@64 -- # IFS=: 00:07:47.201 21:30:07 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:07:47.201 21:30:07 -- json_config/json_config.sh@65 -- # echo 
bdev_register:Nvme0n1p0 00:07:47.201 21:30:07 -- json_config/json_config.sh@64 -- # IFS=: 00:07:47.201 21:30:07 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:07:47.201 21:30:07 -- json_config/json_config.sh@65 -- # echo bdev_register:Malloc3 00:07:47.201 21:30:07 -- json_config/json_config.sh@64 -- # IFS=: 00:07:47.201 21:30:07 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:07:47.201 21:30:07 -- json_config/json_config.sh@65 -- # echo bdev_register:PTBdevFromMalloc3 00:07:47.201 21:30:07 -- json_config/json_config.sh@64 -- # IFS=: 00:07:47.201 21:30:07 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:07:47.201 21:30:07 -- json_config/json_config.sh@65 -- # echo bdev_register:Null0 00:07:47.201 21:30:07 -- json_config/json_config.sh@64 -- # IFS=: 00:07:47.201 21:30:07 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:07:47.201 21:30:07 -- json_config/json_config.sh@65 -- # echo bdev_register:Malloc0 00:07:47.201 21:30:07 -- json_config/json_config.sh@64 -- # IFS=: 00:07:47.201 21:30:07 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:07:47.201 21:30:07 -- json_config/json_config.sh@65 -- # echo bdev_register:Malloc0p2 00:07:47.201 21:30:07 -- json_config/json_config.sh@64 -- # IFS=: 00:07:47.201 21:30:07 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:07:47.201 21:30:07 -- json_config/json_config.sh@65 -- # echo bdev_register:Malloc0p1 00:07:47.201 21:30:07 -- json_config/json_config.sh@64 -- # IFS=: 00:07:47.201 21:30:07 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:07:47.201 21:30:07 -- json_config/json_config.sh@65 -- # echo bdev_register:Malloc0p0 00:07:47.201 21:30:07 -- json_config/json_config.sh@64 -- # IFS=: 00:07:47.201 21:30:07 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:07:47.201 21:30:07 -- json_config/json_config.sh@65 -- # echo bdev_register:Malloc1 00:07:47.201 21:30:07 -- json_config/json_config.sh@64 -- # IFS=: 00:07:47.201 21:30:07 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:07:47.201 21:30:07 -- json_config/json_config.sh@65 -- # echo bdev_register:aio_disk 00:07:47.201 21:30:07 -- json_config/json_config.sh@64 -- # IFS=: 00:07:47.201 21:30:07 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:07:47.201 21:30:07 -- json_config/json_config.sh@65 -- # echo bdev_register:93a0bd53-f841-4844-8b57-432fbd6ef44e 00:07:47.201 21:30:07 -- json_config/json_config.sh@64 -- # IFS=: 00:07:47.201 21:30:07 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:07:47.201 21:30:07 -- json_config/json_config.sh@65 -- # echo bdev_register:66fdc491-fbc8-4ab0-957d-056de362666e 00:07:47.201 21:30:07 -- json_config/json_config.sh@64 -- # IFS=: 00:07:47.201 21:30:07 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:07:47.201 21:30:07 -- json_config/json_config.sh@65 -- # echo bdev_register:329de95b-c43e-44bc-b9f1-b0bb4f442553 00:07:47.201 21:30:07 -- json_config/json_config.sh@64 -- # IFS=: 00:07:47.201 21:30:07 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:07:47.201 21:30:07 -- json_config/json_config.sh@65 -- # echo bdev_register:e1aa15b6-aac5-4a79-b561-bea547d1cbd9 00:07:47.201 21:30:07 -- json_config/json_config.sh@64 -- # IFS=: 00:07:47.201 21:30:07 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:07:47.201 21:30:07 -- json_config/json_config.sh@77 
-- # [[ bdev_register:329de95b-c43e-44bc-b9f1-b0bb4f442553 bdev_register:66fdc491-fbc8-4ab0-957d-056de362666e bdev_register:93a0bd53-f841-4844-8b57-432fbd6ef44e bdev_register:Malloc0 bdev_register:Malloc0p0 bdev_register:Malloc0p1 bdev_register:Malloc0p2 bdev_register:Malloc1 bdev_register:Malloc3 bdev_register:Null0 bdev_register:Nvme0n1 bdev_register:Nvme0n1p0 bdev_register:Nvme0n1p1 bdev_register:PTBdevFromMalloc3 bdev_register:aio_disk bdev_register:e1aa15b6-aac5-4a79-b561-bea547d1cbd9 != \b\d\e\v\_\r\e\g\i\s\t\e\r\:\3\2\9\d\e\9\5\b\-\c\4\3\e\-\4\4\b\c\-\b\9\f\1\-\b\0\b\b\4\f\4\4\2\5\5\3\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\6\6\f\d\c\4\9\1\-\f\b\c\8\-\4\a\b\0\-\9\5\7\d\-\0\5\6\d\e\3\6\2\6\6\6\e\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\9\3\a\0\b\d\5\3\-\f\8\4\1\-\4\8\4\4\-\8\b\5\7\-\4\3\2\f\b\d\6\e\f\4\4\e\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\M\a\l\l\o\c\0\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\M\a\l\l\o\c\0\p\0\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\M\a\l\l\o\c\0\p\1\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\M\a\l\l\o\c\0\p\2\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\M\a\l\l\o\c\1\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\M\a\l\l\o\c\3\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\N\u\l\l\0\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\N\v\m\e\0\n\1\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\N\v\m\e\0\n\1\p\0\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\N\v\m\e\0\n\1\p\1\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\P\T\B\d\e\v\F\r\o\m\M\a\l\l\o\c\3\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\a\i\o\_\d\i\s\k\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\e\1\a\a\1\5\b\6\-\a\a\c\5\-\4\a\7\9\-\b\5\6\1\-\b\e\a\5\4\7\d\1\c\b\d\9 ]] 00:07:47.201 21:30:07 -- json_config/json_config.sh@89 -- # cat 00:07:47.201 21:30:07 -- json_config/json_config.sh@89 -- # printf ' %s\n' bdev_register:329de95b-c43e-44bc-b9f1-b0bb4f442553 bdev_register:66fdc491-fbc8-4ab0-957d-056de362666e bdev_register:93a0bd53-f841-4844-8b57-432fbd6ef44e bdev_register:Malloc0 bdev_register:Malloc0p0 bdev_register:Malloc0p1 bdev_register:Malloc0p2 bdev_register:Malloc1 bdev_register:Malloc3 bdev_register:Null0 bdev_register:Nvme0n1 bdev_register:Nvme0n1p0 bdev_register:Nvme0n1p1 bdev_register:PTBdevFromMalloc3 bdev_register:aio_disk bdev_register:e1aa15b6-aac5-4a79-b561-bea547d1cbd9 00:07:47.201 Expected events matched: 00:07:47.201 bdev_register:329de95b-c43e-44bc-b9f1-b0bb4f442553 00:07:47.201 bdev_register:66fdc491-fbc8-4ab0-957d-056de362666e 00:07:47.201 bdev_register:93a0bd53-f841-4844-8b57-432fbd6ef44e 00:07:47.201 bdev_register:Malloc0 00:07:47.201 bdev_register:Malloc0p0 00:07:47.201 bdev_register:Malloc0p1 00:07:47.201 bdev_register:Malloc0p2 00:07:47.201 bdev_register:Malloc1 00:07:47.201 bdev_register:Malloc3 00:07:47.201 bdev_register:Null0 00:07:47.201 bdev_register:Nvme0n1 00:07:47.201 bdev_register:Nvme0n1p0 00:07:47.201 bdev_register:Nvme0n1p1 00:07:47.201 bdev_register:PTBdevFromMalloc3 00:07:47.201 bdev_register:aio_disk 00:07:47.201 bdev_register:e1aa15b6-aac5-4a79-b561-bea547d1cbd9 00:07:47.201 21:30:07 -- json_config/json_config.sh@233 -- # timing_exit create_bdev_subsystem_config 00:07:47.201 21:30:07 -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:47.201 21:30:07 -- common/autotest_common.sh@10 -- # set +x 00:07:47.201 21:30:07 -- json_config/json_config.sh@335 -- # [[ 0 -eq 1 ]] 00:07:47.201 21:30:07 -- json_config/json_config.sh@339 -- # [[ 0 -eq 1 ]] 00:07:47.201 21:30:07 -- json_config/json_config.sh@343 -- # [[ 0 -eq 1 ]] 00:07:47.201 21:30:07 -- json_config/json_config.sh@346 -- # timing_exit json_config_setup_target 00:07:47.201 21:30:07 -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:47.201 21:30:07 -- common/autotest_common.sh@10 -- # set +x 00:07:47.460 
21:30:07 -- json_config/json_config.sh@348 -- # [[ 0 -eq 1 ]] 00:07:47.460 21:30:07 -- json_config/json_config.sh@353 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:07:47.460 21:30:07 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:07:47.460 MallocBdevForConfigChangeCheck 00:07:47.460 21:30:07 -- json_config/json_config.sh@355 -- # timing_exit json_config_test_init 00:07:47.460 21:30:07 -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:47.460 21:30:07 -- common/autotest_common.sh@10 -- # set +x 00:07:47.719 21:30:07 -- json_config/json_config.sh@422 -- # tgt_rpc save_config 00:07:47.719 21:30:07 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:07:47.978 INFO: shutting down applications... 00:07:47.978 21:30:08 -- json_config/json_config.sh@424 -- # echo 'INFO: shutting down applications...' 00:07:47.978 21:30:08 -- json_config/json_config.sh@425 -- # [[ 0 -eq 1 ]] 00:07:47.978 21:30:08 -- json_config/json_config.sh@431 -- # json_config_clear target 00:07:47.978 21:30:08 -- json_config/json_config.sh@385 -- # [[ -n 22 ]] 00:07:47.978 21:30:08 -- json_config/json_config.sh@386 -- # /home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:07:48.237 [2024-12-06 21:30:08.510720] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev Nvme0n1p0 being removed: closing lvstore lvs_test 00:07:48.237 Calling clear_vhost_scsi_subsystem 00:07:48.237 Calling clear_iscsi_subsystem 00:07:48.237 Calling clear_vhost_blk_subsystem 00:07:48.237 Calling clear_ublk_subsystem 00:07:48.237 Calling clear_nbd_subsystem 00:07:48.237 Calling clear_nvmf_subsystem 00:07:48.237 Calling clear_bdev_subsystem 00:07:48.237 Calling clear_accel_subsystem 00:07:48.237 Calling clear_iobuf_subsystem 00:07:48.237 Calling clear_sock_subsystem 00:07:48.237 Calling clear_vmd_subsystem 00:07:48.237 Calling clear_scheduler_subsystem 00:07:48.237 21:30:08 -- json_config/json_config.sh@390 -- # local config_filter=/home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py 00:07:48.237 21:30:08 -- json_config/json_config.sh@396 -- # count=100 00:07:48.237 21:30:08 -- json_config/json_config.sh@397 -- # '[' 100 -gt 0 ']' 00:07:48.237 21:30:08 -- json_config/json_config.sh@398 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method check_empty 00:07:48.237 21:30:08 -- json_config/json_config.sh@398 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:07:48.237 21:30:08 -- json_config/json_config.sh@398 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:07:48.806 21:30:09 -- json_config/json_config.sh@398 -- # break 00:07:48.806 21:30:09 -- json_config/json_config.sh@403 -- # '[' 100 -eq 0 ']' 00:07:48.806 21:30:09 -- json_config/json_config.sh@432 -- # json_config_test_shutdown_app target 00:07:48.806 21:30:09 -- json_config/json_config.sh@120 -- # local app=target 00:07:48.806 21:30:09 -- json_config/json_config.sh@123 -- # [[ -n 22 ]] 00:07:48.806 21:30:09 -- json_config/json_config.sh@124 -- # [[ -n 60709 ]] 00:07:48.806 21:30:09 -- json_config/json_config.sh@127 -- # kill -SIGINT 60709 00:07:48.806 21:30:09 -- json_config/json_config.sh@129 -- # (( i = 0 )) 00:07:48.806 21:30:09 -- json_config/json_config.sh@129 -- # (( i < 30 )) 
00:07:48.806 21:30:09 -- json_config/json_config.sh@130 -- # kill -0 60709 00:07:48.806 21:30:09 -- json_config/json_config.sh@134 -- # sleep 0.5 00:07:49.375 21:30:09 -- json_config/json_config.sh@129 -- # (( i++ )) 00:07:49.375 21:30:09 -- json_config/json_config.sh@129 -- # (( i < 30 )) 00:07:49.375 21:30:09 -- json_config/json_config.sh@130 -- # kill -0 60709 00:07:49.375 21:30:09 -- json_config/json_config.sh@134 -- # sleep 0.5 00:07:49.634 21:30:10 -- json_config/json_config.sh@129 -- # (( i++ )) 00:07:49.634 21:30:10 -- json_config/json_config.sh@129 -- # (( i < 30 )) 00:07:49.634 21:30:10 -- json_config/json_config.sh@130 -- # kill -0 60709 00:07:49.634 21:30:10 -- json_config/json_config.sh@131 -- # app_pid[$app]= 00:07:49.634 21:30:10 -- json_config/json_config.sh@132 -- # break 00:07:49.634 21:30:10 -- json_config/json_config.sh@137 -- # [[ -n '' ]] 00:07:49.634 SPDK target shutdown done 00:07:49.634 21:30:10 -- json_config/json_config.sh@142 -- # echo 'SPDK target shutdown done' 00:07:49.634 INFO: relaunching applications... 00:07:49.634 21:30:10 -- json_config/json_config.sh@434 -- # echo 'INFO: relaunching applications...' 00:07:49.634 21:30:10 -- json_config/json_config.sh@435 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:07:49.634 21:30:10 -- json_config/json_config.sh@98 -- # local app=target 00:07:49.634 21:30:10 -- json_config/json_config.sh@99 -- # shift 00:07:49.634 21:30:10 -- json_config/json_config.sh@101 -- # [[ -n 22 ]] 00:07:49.634 21:30:10 -- json_config/json_config.sh@102 -- # [[ -z '' ]] 00:07:49.634 21:30:10 -- json_config/json_config.sh@104 -- # local app_extra_params= 00:07:49.634 21:30:10 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:07:49.634 21:30:10 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:07:49.634 Waiting for target to run... 00:07:49.634 21:30:10 -- json_config/json_config.sh@111 -- # app_pid[$app]=60955 00:07:49.634 21:30:10 -- json_config/json_config.sh@113 -- # echo 'Waiting for target to run...' 00:07:49.634 21:30:10 -- json_config/json_config.sh@110 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:07:49.634 21:30:10 -- json_config/json_config.sh@114 -- # waitforlisten 60955 /var/tmp/spdk_tgt.sock 00:07:49.634 21:30:10 -- common/autotest_common.sh@829 -- # '[' -z 60955 ']' 00:07:49.634 21:30:10 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:07:49.634 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:07:49.634 21:30:10 -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:49.634 21:30:10 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:07:49.634 21:30:10 -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:49.634 21:30:10 -- common/autotest_common.sh@10 -- # set +x 00:07:49.897 [2024-12-06 21:30:10.171508] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:07:49.897 [2024-12-06 21:30:10.171962] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60955 ] 00:07:50.154 [2024-12-06 21:30:10.509520] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:50.412 [2024-12-06 21:30:10.711980] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:50.412 [2024-12-06 21:30:10.712230] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:50.978 [2024-12-06 21:30:11.357270] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Nvme0n1 00:07:50.978 [2024-12-06 21:30:11.357368] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Nvme0n1 00:07:50.978 [2024-12-06 21:30:11.365239] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:07:50.978 [2024-12-06 21:30:11.365491] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:07:50.978 [2024-12-06 21:30:11.373259] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:07:50.978 [2024-12-06 21:30:11.373320] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:07:50.978 [2024-12-06 21:30:11.373337] vbdev_passthru.c: 731:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:07:50.978 [2024-12-06 21:30:11.466217] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:07:50.978 [2024-12-06 21:30:11.466293] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:50.978 [2024-12-06 21:30:11.466317] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000009380 00:07:50.978 [2024-12-06 21:30:11.466330] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:50.978 [2024-12-06 21:30:11.466879] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:50.978 [2024-12-06 21:30:11.466933] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: PTBdevFromMalloc3 00:07:51.561 00:07:51.561 INFO: Checking if target configuration is the same... 00:07:51.561 21:30:11 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:51.561 21:30:11 -- common/autotest_common.sh@862 -- # return 0 00:07:51.561 21:30:11 -- json_config/json_config.sh@115 -- # echo '' 00:07:51.561 21:30:11 -- json_config/json_config.sh@436 -- # [[ 0 -eq 1 ]] 00:07:51.561 21:30:11 -- json_config/json_config.sh@440 -- # echo 'INFO: Checking if target configuration is the same...' 00:07:51.561 21:30:11 -- json_config/json_config.sh@441 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:07:51.561 21:30:11 -- json_config/json_config.sh@441 -- # tgt_rpc save_config 00:07:51.561 21:30:11 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:07:51.561 + '[' 2 -ne 2 ']' 00:07:51.561 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:07:51.561 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 
00:07:51.561 + rootdir=/home/vagrant/spdk_repo/spdk 00:07:51.561 +++ basename /dev/fd/62 00:07:51.561 ++ mktemp /tmp/62.XXX 00:07:51.561 + tmp_file_1=/tmp/62.Za2 00:07:51.561 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:07:51.561 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:07:51.561 + tmp_file_2=/tmp/spdk_tgt_config.json.3om 00:07:51.561 + ret=0 00:07:51.561 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:07:51.830 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:07:51.830 + diff -u /tmp/62.Za2 /tmp/spdk_tgt_config.json.3om 00:07:51.830 + echo 'INFO: JSON config files are the same' 00:07:51.830 INFO: JSON config files are the same 00:07:51.830 + rm /tmp/62.Za2 /tmp/spdk_tgt_config.json.3om 00:07:51.830 + exit 0 00:07:51.830 INFO: changing configuration and checking if this can be detected... 00:07:51.830 21:30:12 -- json_config/json_config.sh@442 -- # [[ 0 -eq 1 ]] 00:07:51.830 21:30:12 -- json_config/json_config.sh@447 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:07:51.830 21:30:12 -- json_config/json_config.sh@449 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:07:51.830 21:30:12 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:07:52.089 21:30:12 -- json_config/json_config.sh@450 -- # tgt_rpc save_config 00:07:52.090 21:30:12 -- json_config/json_config.sh@450 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:07:52.090 21:30:12 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:07:52.090 + '[' 2 -ne 2 ']' 00:07:52.090 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:07:52.090 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 00:07:52.090 + rootdir=/home/vagrant/spdk_repo/spdk 00:07:52.090 +++ basename /dev/fd/62 00:07:52.090 ++ mktemp /tmp/62.XXX 00:07:52.090 + tmp_file_1=/tmp/62.fCr 00:07:52.090 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:07:52.090 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:07:52.090 + tmp_file_2=/tmp/spdk_tgt_config.json.pKZ 00:07:52.090 + ret=0 00:07:52.090 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:07:52.658 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:07:52.658 + diff -u /tmp/62.fCr /tmp/spdk_tgt_config.json.pKZ 00:07:52.658 + ret=1 00:07:52.658 + echo '=== Start of file: /tmp/62.fCr ===' 00:07:52.658 + cat /tmp/62.fCr 00:07:52.658 + echo '=== End of file: /tmp/62.fCr ===' 00:07:52.658 + echo '' 00:07:52.658 + echo '=== Start of file: /tmp/spdk_tgt_config.json.pKZ ===' 00:07:52.658 + cat /tmp/spdk_tgt_config.json.pKZ 00:07:52.658 + echo '=== End of file: /tmp/spdk_tgt_config.json.pKZ ===' 00:07:52.658 + echo '' 00:07:52.658 + rm /tmp/62.fCr /tmp/spdk_tgt_config.json.pKZ 00:07:52.658 + exit 1 00:07:52.658 INFO: configuration change detected. 00:07:52.658 21:30:12 -- json_config/json_config.sh@454 -- # echo 'INFO: configuration change detected.' 
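Both comparisons above come down to the same recipe: dump the live configuration with save_config, canonicalize each side with config_filter.py -method sort so key order cannot cause a spurious mismatch, and diff the results. A hedged sketch of that recipe as one script, assuming config_filter.py filters stdin to stdout as its use in json_diff.sh suggests:

  # Detect config drift between the running target and a saved JSON file.
  rootdir=/home/vagrant/spdk_repo/spdk
  sock=/var/tmp/spdk_tgt.sock
  tmp_live=$(mktemp /tmp/live.XXX)
  tmp_saved=$(mktemp /tmp/saved.XXX)
  # Canonicalize both sides before comparing
  "$rootdir/scripts/rpc.py" -s "$sock" save_config |
      "$rootdir/test/json_config/config_filter.py" -method sort > "$tmp_live"
  "$rootdir/test/json_config/config_filter.py" -method sort \
      < "$rootdir/spdk_tgt_config.json" > "$tmp_saved"
  if diff -u "$tmp_saved" "$tmp_live"; then
      echo 'INFO: JSON config files are the same'
  else
      echo 'INFO: configuration change detected.'
  fi
  rm -f "$tmp_live" "$tmp_saved"

Creating or deleting a throwaway bdev such as MallocBdevForConfigChangeCheck, as the test does, is what guarantees the second dump differs when change detection is being exercised.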
00:07:52.658 21:30:12 -- json_config/json_config.sh@457 -- # json_config_test_fini 00:07:52.658 21:30:12 -- json_config/json_config.sh@359 -- # timing_enter json_config_test_fini 00:07:52.658 21:30:12 -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:52.658 21:30:12 -- common/autotest_common.sh@10 -- # set +x 00:07:52.658 21:30:12 -- json_config/json_config.sh@360 -- # local ret=0 00:07:52.658 21:30:12 -- json_config/json_config.sh@362 -- # [[ -n '' ]] 00:07:52.658 21:30:12 -- json_config/json_config.sh@370 -- # [[ -n 60955 ]] 00:07:52.658 21:30:12 -- json_config/json_config.sh@373 -- # cleanup_bdev_subsystem_config 00:07:52.658 21:30:12 -- json_config/json_config.sh@237 -- # timing_enter cleanup_bdev_subsystem_config 00:07:52.658 21:30:12 -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:52.658 21:30:12 -- common/autotest_common.sh@10 -- # set +x 00:07:52.658 21:30:12 -- json_config/json_config.sh@239 -- # [[ 1 -eq 1 ]] 00:07:52.658 21:30:12 -- json_config/json_config.sh@240 -- # tgt_rpc bdev_lvol_delete lvs_test/clone0 00:07:52.658 21:30:12 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_delete lvs_test/clone0 00:07:52.917 21:30:13 -- json_config/json_config.sh@241 -- # tgt_rpc bdev_lvol_delete lvs_test/lvol0 00:07:52.917 21:30:13 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_delete lvs_test/lvol0 00:07:53.176 21:30:13 -- json_config/json_config.sh@242 -- # tgt_rpc bdev_lvol_delete lvs_test/snapshot0 00:07:53.176 21:30:13 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_delete lvs_test/snapshot0 00:07:53.435 21:30:13 -- json_config/json_config.sh@243 -- # tgt_rpc bdev_lvol_delete_lvstore -l lvs_test 00:07:53.435 21:30:13 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_delete_lvstore -l lvs_test 00:07:53.435 21:30:13 -- json_config/json_config.sh@246 -- # uname -s 00:07:53.435 21:30:13 -- json_config/json_config.sh@246 -- # [[ Linux = Linux ]] 00:07:53.435 21:30:13 -- json_config/json_config.sh@247 -- # rm -f /sample_aio 00:07:53.435 21:30:13 -- json_config/json_config.sh@250 -- # [[ 0 -eq 1 ]] 00:07:53.435 21:30:13 -- json_config/json_config.sh@254 -- # timing_exit cleanup_bdev_subsystem_config 00:07:53.435 21:30:13 -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:53.435 21:30:13 -- common/autotest_common.sh@10 -- # set +x 00:07:53.694 21:30:13 -- json_config/json_config.sh@376 -- # killprocess 60955 00:07:53.694 21:30:13 -- common/autotest_common.sh@936 -- # '[' -z 60955 ']' 00:07:53.694 21:30:13 -- common/autotest_common.sh@940 -- # kill -0 60955 00:07:53.694 21:30:13 -- common/autotest_common.sh@941 -- # uname 00:07:53.694 21:30:13 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:07:53.694 21:30:13 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 60955 00:07:53.694 killing process with pid 60955 00:07:53.694 21:30:13 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:07:53.694 21:30:13 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:07:53.694 21:30:13 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 60955' 00:07:53.694 21:30:13 -- common/autotest_common.sh@955 -- # kill 60955 00:07:53.694 21:30:13 -- common/autotest_common.sh@960 -- # wait 60955 00:07:54.632 21:30:14 -- json_config/json_config.sh@379 -- 
# rm -f /home/vagrant/spdk_repo/spdk/spdk_initiator_config.json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:07:54.632 21:30:14 -- json_config/json_config.sh@380 -- # timing_exit json_config_test_fini 00:07:54.632 21:30:14 -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:54.632 21:30:14 -- common/autotest_common.sh@10 -- # set +x 00:07:54.632 INFO: Success 00:07:54.632 21:30:14 -- json_config/json_config.sh@381 -- # return 0 00:07:54.632 21:30:14 -- json_config/json_config.sh@459 -- # echo 'INFO: Success' 00:07:54.632 00:07:54.632 real 0m13.416s 00:07:54.632 user 0m19.374s 00:07:54.632 sys 0m2.314s 00:07:54.632 21:30:14 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:54.632 21:30:14 -- common/autotest_common.sh@10 -- # set +x 00:07:54.632 ************************************ 00:07:54.632 END TEST json_config 00:07:54.632 ************************************ 00:07:54.632 21:30:14 -- spdk/autotest.sh@166 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:07:54.632 21:30:14 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:54.632 21:30:14 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:54.632 21:30:14 -- common/autotest_common.sh@10 -- # set +x 00:07:54.632 ************************************ 00:07:54.632 START TEST json_config_extra_key 00:07:54.632 ************************************ 00:07:54.632 21:30:14 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:07:54.632 21:30:15 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:07:54.632 21:30:15 -- common/autotest_common.sh@1690 -- # lcov --version 00:07:54.632 21:30:15 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:07:54.632 21:30:15 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:07:54.632 21:30:15 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:07:54.632 21:30:15 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:07:54.632 21:30:15 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:07:54.632 21:30:15 -- scripts/common.sh@335 -- # IFS=.-: 00:07:54.632 21:30:15 -- scripts/common.sh@335 -- # read -ra ver1 00:07:54.632 21:30:15 -- scripts/common.sh@336 -- # IFS=.-: 00:07:54.632 21:30:15 -- scripts/common.sh@336 -- # read -ra ver2 00:07:54.632 21:30:15 -- scripts/common.sh@337 -- # local 'op=<' 00:07:54.632 21:30:15 -- scripts/common.sh@339 -- # ver1_l=2 00:07:54.632 21:30:15 -- scripts/common.sh@340 -- # ver2_l=1 00:07:54.632 21:30:15 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:07:54.632 21:30:15 -- scripts/common.sh@343 -- # case "$op" in 00:07:54.632 21:30:15 -- scripts/common.sh@344 -- # : 1 00:07:54.632 21:30:15 -- scripts/common.sh@363 -- # (( v = 0 )) 00:07:54.632 21:30:15 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:54.632 21:30:15 -- scripts/common.sh@364 -- # decimal 1 00:07:54.632 21:30:15 -- scripts/common.sh@352 -- # local d=1 00:07:54.632 21:30:15 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:54.632 21:30:15 -- scripts/common.sh@354 -- # echo 1 00:07:54.632 21:30:15 -- scripts/common.sh@364 -- # ver1[v]=1 00:07:54.632 21:30:15 -- scripts/common.sh@365 -- # decimal 2 00:07:54.632 21:30:15 -- scripts/common.sh@352 -- # local d=2 00:07:54.632 21:30:15 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:54.632 21:30:15 -- scripts/common.sh@354 -- # echo 2 00:07:54.632 21:30:15 -- scripts/common.sh@365 -- # ver2[v]=2 00:07:54.632 21:30:15 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:07:54.632 21:30:15 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:07:54.632 21:30:15 -- scripts/common.sh@367 -- # return 0 00:07:54.632 21:30:15 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:54.632 21:30:15 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:07:54.632 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:54.632 --rc genhtml_branch_coverage=1 00:07:54.632 --rc genhtml_function_coverage=1 00:07:54.632 --rc genhtml_legend=1 00:07:54.632 --rc geninfo_all_blocks=1 00:07:54.632 --rc geninfo_unexecuted_blocks=1 00:07:54.632 00:07:54.632 ' 00:07:54.632 21:30:15 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:07:54.632 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:54.632 --rc genhtml_branch_coverage=1 00:07:54.632 --rc genhtml_function_coverage=1 00:07:54.632 --rc genhtml_legend=1 00:07:54.632 --rc geninfo_all_blocks=1 00:07:54.632 --rc geninfo_unexecuted_blocks=1 00:07:54.632 00:07:54.632 ' 00:07:54.632 21:30:15 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:07:54.632 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:54.632 --rc genhtml_branch_coverage=1 00:07:54.632 --rc genhtml_function_coverage=1 00:07:54.632 --rc genhtml_legend=1 00:07:54.632 --rc geninfo_all_blocks=1 00:07:54.632 --rc geninfo_unexecuted_blocks=1 00:07:54.632 00:07:54.632 ' 00:07:54.632 21:30:15 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:07:54.632 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:54.632 --rc genhtml_branch_coverage=1 00:07:54.632 --rc genhtml_function_coverage=1 00:07:54.632 --rc genhtml_legend=1 00:07:54.632 --rc geninfo_all_blocks=1 00:07:54.632 --rc geninfo_unexecuted_blocks=1 00:07:54.632 00:07:54.632 ' 00:07:54.633 21:30:15 -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:07:54.633 21:30:15 -- nvmf/common.sh@7 -- # uname -s 00:07:54.633 21:30:15 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:54.633 21:30:15 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:54.633 21:30:15 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:54.633 21:30:15 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:54.633 21:30:15 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:54.633 21:30:15 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:54.633 21:30:15 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:54.633 21:30:15 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:54.633 21:30:15 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:54.633 21:30:15 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:54.633 21:30:15 -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8e1d9cdb-6b0f-4e53-bec5-c2866d201ab4 00:07:54.633 21:30:15 -- nvmf/common.sh@18 -- # NVME_HOSTID=8e1d9cdb-6b0f-4e53-bec5-c2866d201ab4 00:07:54.633 21:30:15 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:54.633 21:30:15 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:54.633 21:30:15 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:07:54.633 21:30:15 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:54.633 21:30:15 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:54.633 21:30:15 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:54.633 21:30:15 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:54.633 21:30:15 -- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:07:54.633 21:30:15 -- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:07:54.633 21:30:15 -- paths/export.sh@4 -- # PATH=/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:07:54.633 21:30:15 -- paths/export.sh@5 -- # PATH=/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:07:54.633 21:30:15 -- paths/export.sh@6 -- # export PATH 00:07:54.633 21:30:15 -- paths/export.sh@7 -- # echo 
/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:07:54.633 21:30:15 -- nvmf/common.sh@46 -- # : 0 00:07:54.633 21:30:15 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:07:54.633 21:30:15 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:07:54.633 21:30:15 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:07:54.633 21:30:15 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:54.633 21:30:15 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:54.633 21:30:15 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:07:54.633 21:30:15 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:07:54.633 21:30:15 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:07:54.633 21:30:15 -- json_config/json_config_extra_key.sh@16 -- # app_pid=(['target']='') 00:07:54.633 21:30:15 -- json_config/json_config_extra_key.sh@16 -- # declare -A app_pid 00:07:54.633 21:30:15 -- json_config/json_config_extra_key.sh@17 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:07:54.633 21:30:15 -- json_config/json_config_extra_key.sh@17 -- # declare -A app_socket 00:07:54.633 21:30:15 -- json_config/json_config_extra_key.sh@18 -- # app_params=(['target']='-m 0x1 -s 1024') 00:07:54.891 INFO: launching applications... 00:07:54.891 21:30:15 -- json_config/json_config_extra_key.sh@18 -- # declare -A app_params 00:07:54.891 21:30:15 -- json_config/json_config_extra_key.sh@19 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:07:54.891 21:30:15 -- json_config/json_config_extra_key.sh@19 -- # declare -A configs_path 00:07:54.891 21:30:15 -- json_config/json_config_extra_key.sh@74 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:07:54.891 21:30:15 -- json_config/json_config_extra_key.sh@76 -- # echo 'INFO: launching applications...' 00:07:54.891 21:30:15 -- json_config/json_config_extra_key.sh@77 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:07:54.891 21:30:15 -- json_config/json_config_extra_key.sh@24 -- # local app=target 00:07:54.891 21:30:15 -- json_config/json_config_extra_key.sh@25 -- # shift 00:07:54.891 21:30:15 -- json_config/json_config_extra_key.sh@27 -- # [[ -n 22 ]] 00:07:54.891 21:30:15 -- json_config/json_config_extra_key.sh@28 -- # [[ -z '' ]] 00:07:54.891 21:30:15 -- json_config/json_config_extra_key.sh@31 -- # app_pid[$app]=61137 00:07:54.891 21:30:15 -- json_config/json_config_extra_key.sh@33 -- # echo 'Waiting for target to run...' 00:07:54.891 21:30:15 -- json_config/json_config_extra_key.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:07:54.891 Waiting for target to run... 
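The declare -A lines above are the whole state model of json_config_extra_key.sh: one associative array per attribute (PID, RPC socket, CLI parameters, config path), each keyed by app name. A condensed sketch of that bookkeeping and the launch it drives, with array values copied from the trace and a deliberately simplified start function:

  # Per-app registries, keyed by app name ("target" here)
  declare -A app_pid=()
  declare -A app_socket=(['target']='/var/tmp/spdk_tgt.sock')
  declare -A app_params=(['target']='-m 0x1 -s 1024')
  declare -A configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json')

  start_app() {
      local app=$1
      # Launch in the background and remember the PID under the app's name
      /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ${app_params[$app]} \
          -r "${app_socket[$app]}" --json "${configs_path[$app]}" &
      app_pid[$app]=$!
      echo "Waiting for $app to run..."
  }
  start_app target

Keying everything by app name is what lets the same helpers drive more than one app (for example a target and an initiator) without duplicated logic.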
00:07:54.891 21:30:15 -- json_config/json_config_extra_key.sh@34 -- # waitforlisten 61137 /var/tmp/spdk_tgt.sock 00:07:54.891 21:30:15 -- common/autotest_common.sh@829 -- # '[' -z 61137 ']' 00:07:54.891 21:30:15 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:07:54.891 21:30:15 -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:54.891 21:30:15 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:07:54.891 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:07:54.891 21:30:15 -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:54.891 21:30:15 -- common/autotest_common.sh@10 -- # set +x 00:07:54.891 [2024-12-06 21:30:15.193555] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:54.891 [2024-12-06 21:30:15.193911] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61137 ] 00:07:55.150 [2024-12-06 21:30:15.523743] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:55.408 [2024-12-06 21:30:15.682413] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:55.408 [2024-12-06 21:30:15.682983] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:56.784 00:07:56.784 INFO: shutting down applications... 00:07:56.784 21:30:16 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:56.784 21:30:16 -- common/autotest_common.sh@862 -- # return 0 00:07:56.784 21:30:16 -- json_config/json_config_extra_key.sh@35 -- # echo '' 00:07:56.784 21:30:16 -- json_config/json_config_extra_key.sh@79 -- # echo 'INFO: shutting down applications...' 
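The waitforlisten call traced above blocks until the freshly launched process actually serves RPCs on its socket, or dies first. A plausible minimal equivalent polls a cheap RPC with a short timeout; the retry cadence below is an assumption rather than autotest_common.sh's exact logic, though the max_retries=100 default and the rpc.py -s and -t flags mirror what this log shows:

  # Poll until the target answers on its RPC socket, or bail out.
  waitforrpc() {
      local pid=$1 sock=${2:-/var/tmp/spdk_tgt.sock} i
      for (( i = 0; i < 100; i++ )); do
          # If the app already died, waiting any longer is pointless
          kill -0 "$pid" 2>/dev/null || return 1
          /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 1 -s "$sock" \
              rpc_get_methods >/dev/null 2>&1 && return 0
          sleep 0.1
      done
      return 1
  }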
00:07:56.784 21:30:16 -- json_config/json_config_extra_key.sh@80 -- # json_config_test_shutdown_app target 00:07:56.784 21:30:16 -- json_config/json_config_extra_key.sh@40 -- # local app=target 00:07:56.784 21:30:16 -- json_config/json_config_extra_key.sh@43 -- # [[ -n 22 ]] 00:07:56.784 21:30:16 -- json_config/json_config_extra_key.sh@44 -- # [[ -n 61137 ]] 00:07:56.784 21:30:16 -- json_config/json_config_extra_key.sh@47 -- # kill -SIGINT 61137 00:07:56.784 21:30:16 -- json_config/json_config_extra_key.sh@49 -- # (( i = 0 )) 00:07:56.784 21:30:16 -- json_config/json_config_extra_key.sh@49 -- # (( i < 30 )) 00:07:56.784 21:30:16 -- json_config/json_config_extra_key.sh@50 -- # kill -0 61137 00:07:56.784 21:30:16 -- json_config/json_config_extra_key.sh@54 -- # sleep 0.5 00:07:57.043 21:30:17 -- json_config/json_config_extra_key.sh@49 -- # (( i++ )) 00:07:57.043 21:30:17 -- json_config/json_config_extra_key.sh@49 -- # (( i < 30 )) 00:07:57.043 21:30:17 -- json_config/json_config_extra_key.sh@50 -- # kill -0 61137 00:07:57.043 21:30:17 -- json_config/json_config_extra_key.sh@54 -- # sleep 0.5 00:07:57.611 21:30:17 -- json_config/json_config_extra_key.sh@49 -- # (( i++ )) 00:07:57.612 21:30:17 -- json_config/json_config_extra_key.sh@49 -- # (( i < 30 )) 00:07:57.612 21:30:17 -- json_config/json_config_extra_key.sh@50 -- # kill -0 61137 00:07:57.612 21:30:17 -- json_config/json_config_extra_key.sh@54 -- # sleep 0.5 00:07:58.180 21:30:18 -- json_config/json_config_extra_key.sh@49 -- # (( i++ )) 00:07:58.180 21:30:18 -- json_config/json_config_extra_key.sh@49 -- # (( i < 30 )) 00:07:58.180 21:30:18 -- json_config/json_config_extra_key.sh@50 -- # kill -0 61137 00:07:58.180 21:30:18 -- json_config/json_config_extra_key.sh@54 -- # sleep 0.5 00:07:58.440 21:30:18 -- json_config/json_config_extra_key.sh@49 -- # (( i++ )) 00:07:58.440 21:30:18 -- json_config/json_config_extra_key.sh@49 -- # (( i < 30 )) 00:07:58.440 21:30:18 -- json_config/json_config_extra_key.sh@50 -- # kill -0 61137 00:07:58.440 21:30:18 -- json_config/json_config_extra_key.sh@54 -- # sleep 0.5 00:07:59.008 21:30:19 -- json_config/json_config_extra_key.sh@49 -- # (( i++ )) 00:07:59.008 21:30:19 -- json_config/json_config_extra_key.sh@49 -- # (( i < 30 )) 00:07:59.008 21:30:19 -- json_config/json_config_extra_key.sh@50 -- # kill -0 61137 00:07:59.008 21:30:19 -- json_config/json_config_extra_key.sh@51 -- # app_pid[$app]= 00:07:59.008 21:30:19 -- json_config/json_config_extra_key.sh@52 -- # break 00:07:59.008 21:30:19 -- json_config/json_config_extra_key.sh@57 -- # [[ -n '' ]] 00:07:59.008 SPDK target shutdown done 00:07:59.008 Success 00:07:59.008 21:30:19 -- json_config/json_config_extra_key.sh@62 -- # echo 'SPDK target shutdown done' 00:07:59.008 21:30:19 -- json_config/json_config_extra_key.sh@82 -- # echo Success 00:07:59.008 ************************************ 00:07:59.008 END TEST json_config_extra_key 00:07:59.008 ************************************ 00:07:59.008 00:07:59.008 real 0m4.450s 00:07:59.008 user 0m4.366s 00:07:59.008 sys 0m0.585s 00:07:59.008 21:30:19 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:59.008 21:30:19 -- common/autotest_common.sh@10 -- # set +x 00:07:59.008 21:30:19 -- spdk/autotest.sh@167 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:07:59.008 21:30:19 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:59.008 21:30:19 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:59.008 21:30:19 -- common/autotest_common.sh@10 -- # 
set +x 00:07:59.008 ************************************ 00:07:59.008 START TEST alias_rpc 00:07:59.008 ************************************ 00:07:59.009 21:30:19 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:07:59.268 * Looking for test storage... 00:07:59.268 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:07:59.268 21:30:19 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:07:59.268 21:30:19 -- common/autotest_common.sh@1690 -- # lcov --version 00:07:59.268 21:30:19 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:07:59.268 21:30:19 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:07:59.268 21:30:19 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:07:59.268 21:30:19 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:07:59.268 21:30:19 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:07:59.268 21:30:19 -- scripts/common.sh@335 -- # IFS=.-: 00:07:59.268 21:30:19 -- scripts/common.sh@335 -- # read -ra ver1 00:07:59.268 21:30:19 -- scripts/common.sh@336 -- # IFS=.-: 00:07:59.268 21:30:19 -- scripts/common.sh@336 -- # read -ra ver2 00:07:59.268 21:30:19 -- scripts/common.sh@337 -- # local 'op=<' 00:07:59.268 21:30:19 -- scripts/common.sh@339 -- # ver1_l=2 00:07:59.268 21:30:19 -- scripts/common.sh@340 -- # ver2_l=1 00:07:59.268 21:30:19 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:07:59.268 21:30:19 -- scripts/common.sh@343 -- # case "$op" in 00:07:59.268 21:30:19 -- scripts/common.sh@344 -- # : 1 00:07:59.268 21:30:19 -- scripts/common.sh@363 -- # (( v = 0 )) 00:07:59.268 21:30:19 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:59.268 21:30:19 -- scripts/common.sh@364 -- # decimal 1 00:07:59.268 21:30:19 -- scripts/common.sh@352 -- # local d=1 00:07:59.268 21:30:19 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:59.268 21:30:19 -- scripts/common.sh@354 -- # echo 1 00:07:59.268 21:30:19 -- scripts/common.sh@364 -- # ver1[v]=1 00:07:59.268 21:30:19 -- scripts/common.sh@365 -- # decimal 2 00:07:59.268 21:30:19 -- scripts/common.sh@352 -- # local d=2 00:07:59.268 21:30:19 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:59.268 21:30:19 -- scripts/common.sh@354 -- # echo 2 00:07:59.268 21:30:19 -- scripts/common.sh@365 -- # ver2[v]=2 00:07:59.268 21:30:19 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:07:59.268 21:30:19 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:07:59.268 21:30:19 -- scripts/common.sh@367 -- # return 0 00:07:59.268 21:30:19 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:59.268 21:30:19 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:07:59.268 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:59.268 --rc genhtml_branch_coverage=1 00:07:59.268 --rc genhtml_function_coverage=1 00:07:59.268 --rc genhtml_legend=1 00:07:59.268 --rc geninfo_all_blocks=1 00:07:59.268 --rc geninfo_unexecuted_blocks=1 00:07:59.268 00:07:59.268 ' 00:07:59.268 21:30:19 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:07:59.268 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:59.268 --rc genhtml_branch_coverage=1 00:07:59.268 --rc genhtml_function_coverage=1 00:07:59.268 --rc genhtml_legend=1 00:07:59.268 --rc geninfo_all_blocks=1 00:07:59.268 --rc geninfo_unexecuted_blocks=1 00:07:59.268 00:07:59.268 ' 00:07:59.268 21:30:19 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 
00:07:59.268 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:59.268 --rc genhtml_branch_coverage=1 00:07:59.268 --rc genhtml_function_coverage=1 00:07:59.268 --rc genhtml_legend=1 00:07:59.268 --rc geninfo_all_blocks=1 00:07:59.268 --rc geninfo_unexecuted_blocks=1 00:07:59.268 00:07:59.268 ' 00:07:59.268 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:59.268 21:30:19 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:07:59.268 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:59.268 --rc genhtml_branch_coverage=1 00:07:59.268 --rc genhtml_function_coverage=1 00:07:59.268 --rc genhtml_legend=1 00:07:59.268 --rc geninfo_all_blocks=1 00:07:59.268 --rc geninfo_unexecuted_blocks=1 00:07:59.268 00:07:59.268 ' 00:07:59.268 21:30:19 -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:07:59.268 21:30:19 -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=61243 00:07:59.268 21:30:19 -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 61243 00:07:59.268 21:30:19 -- common/autotest_common.sh@829 -- # '[' -z 61243 ']' 00:07:59.268 21:30:19 -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:07:59.268 21:30:19 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:59.268 21:30:19 -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:59.268 21:30:19 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:59.268 21:30:19 -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:59.268 21:30:19 -- common/autotest_common.sh@10 -- # set +x 00:07:59.268 [2024-12-06 21:30:19.703937] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
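The lcov probe that opens each test file (scripts/common.sh, traced again just above as "lt 1.15 2") is a plain field-wise version comparison: split both version strings on ".", "-", and ":", then walk the fields numerically, treating absent fields as 0. A simplified re-implementation of the same idea, not the script's literal code:

  # Return 0 iff version $1 is strictly lower than version $2.
  version_lt() {
      local IFS=.-:
      local -a ver1 ver2
      read -ra ver1 <<< "$1"
      read -ra ver2 <<< "$2"
      local n=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
      local v a b
      for (( v = 0; v < n; v++ )); do
          a=${ver1[v]:-0} b=${ver2[v]:-0}   # absent fields compare as 0
          (( a < b )) && return 0
          (( a > b )) && return 1
      done
      return 1   # equal versions are not "less than"
  }
  version_lt 1.15 2 && echo 'lcov predates 2.x'   # prints: lcov predates 2.x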
00:07:59.268 [2024-12-06 21:30:19.704883] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61243 ] 00:07:59.527 [2024-12-06 21:30:19.871208] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:59.786 [2024-12-06 21:30:20.056344] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:59.786 [2024-12-06 21:30:20.056833] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:01.164 21:30:21 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:01.164 21:30:21 -- common/autotest_common.sh@862 -- # return 0 00:08:01.164 21:30:21 -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:08:01.164 21:30:21 -- alias_rpc/alias_rpc.sh@19 -- # killprocess 61243 00:08:01.164 21:30:21 -- common/autotest_common.sh@936 -- # '[' -z 61243 ']' 00:08:01.164 21:30:21 -- common/autotest_common.sh@940 -- # kill -0 61243 00:08:01.164 21:30:21 -- common/autotest_common.sh@941 -- # uname 00:08:01.164 21:30:21 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:08:01.164 21:30:21 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 61243 00:08:01.164 killing process with pid 61243 00:08:01.164 21:30:21 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:08:01.164 21:30:21 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:08:01.164 21:30:21 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 61243' 00:08:01.164 21:30:21 -- common/autotest_common.sh@955 -- # kill 61243 00:08:01.164 21:30:21 -- common/autotest_common.sh@960 -- # wait 61243 00:08:03.697 ************************************ 00:08:03.697 END TEST alias_rpc 00:08:03.697 ************************************ 00:08:03.697 00:08:03.697 real 0m4.214s 00:08:03.697 user 0m4.523s 00:08:03.697 sys 0m0.541s 00:08:03.697 21:30:23 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:03.697 21:30:23 -- common/autotest_common.sh@10 -- # set +x 00:08:03.697 21:30:23 -- spdk/autotest.sh@169 -- # [[ 0 -eq 0 ]] 00:08:03.697 21:30:23 -- spdk/autotest.sh@170 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:08:03.697 21:30:23 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:03.697 21:30:23 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:03.697 21:30:23 -- common/autotest_common.sh@10 -- # set +x 00:08:03.697 ************************************ 00:08:03.697 START TEST spdkcli_tcp 00:08:03.697 ************************************ 00:08:03.697 21:30:23 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:08:03.697 * Looking for test storage... 
00:08:03.697 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:08:03.697 21:30:23 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:08:03.697 21:30:23 -- common/autotest_common.sh@1690 -- # lcov --version 00:08:03.697 21:30:23 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:08:03.697 21:30:23 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:08:03.697 21:30:23 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:08:03.697 21:30:23 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:08:03.697 21:30:23 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:08:03.697 21:30:23 -- scripts/common.sh@335 -- # IFS=.-: 00:08:03.697 21:30:23 -- scripts/common.sh@335 -- # read -ra ver1 00:08:03.697 21:30:23 -- scripts/common.sh@336 -- # IFS=.-: 00:08:03.697 21:30:23 -- scripts/common.sh@336 -- # read -ra ver2 00:08:03.697 21:30:23 -- scripts/common.sh@337 -- # local 'op=<' 00:08:03.697 21:30:23 -- scripts/common.sh@339 -- # ver1_l=2 00:08:03.697 21:30:23 -- scripts/common.sh@340 -- # ver2_l=1 00:08:03.697 21:30:23 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:08:03.697 21:30:23 -- scripts/common.sh@343 -- # case "$op" in 00:08:03.697 21:30:23 -- scripts/common.sh@344 -- # : 1 00:08:03.697 21:30:23 -- scripts/common.sh@363 -- # (( v = 0 )) 00:08:03.697 21:30:23 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:03.697 21:30:23 -- scripts/common.sh@364 -- # decimal 1 00:08:03.697 21:30:23 -- scripts/common.sh@352 -- # local d=1 00:08:03.697 21:30:23 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:03.697 21:30:23 -- scripts/common.sh@354 -- # echo 1 00:08:03.697 21:30:23 -- scripts/common.sh@364 -- # ver1[v]=1 00:08:03.698 21:30:23 -- scripts/common.sh@365 -- # decimal 2 00:08:03.698 21:30:23 -- scripts/common.sh@352 -- # local d=2 00:08:03.698 21:30:23 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:03.698 21:30:23 -- scripts/common.sh@354 -- # echo 2 00:08:03.698 21:30:23 -- scripts/common.sh@365 -- # ver2[v]=2 00:08:03.698 21:30:23 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:08:03.698 21:30:23 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:08:03.698 21:30:23 -- scripts/common.sh@367 -- # return 0 00:08:03.698 21:30:23 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:03.698 21:30:23 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:08:03.698 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:03.698 --rc genhtml_branch_coverage=1 00:08:03.698 --rc genhtml_function_coverage=1 00:08:03.698 --rc genhtml_legend=1 00:08:03.698 --rc geninfo_all_blocks=1 00:08:03.698 --rc geninfo_unexecuted_blocks=1 00:08:03.698 00:08:03.698 ' 00:08:03.698 21:30:23 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:08:03.698 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:03.698 --rc genhtml_branch_coverage=1 00:08:03.698 --rc genhtml_function_coverage=1 00:08:03.698 --rc genhtml_legend=1 00:08:03.698 --rc geninfo_all_blocks=1 00:08:03.698 --rc geninfo_unexecuted_blocks=1 00:08:03.698 00:08:03.698 ' 00:08:03.698 21:30:23 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:08:03.698 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:03.698 --rc genhtml_branch_coverage=1 00:08:03.698 --rc genhtml_function_coverage=1 00:08:03.698 --rc genhtml_legend=1 00:08:03.698 --rc geninfo_all_blocks=1 00:08:03.698 --rc geninfo_unexecuted_blocks=1 00:08:03.698 00:08:03.698 ' 00:08:03.698 21:30:23 
-- common/autotest_common.sh@1704 -- # LCOV='lcov 00:08:03.698 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:03.698 --rc genhtml_branch_coverage=1 00:08:03.698 --rc genhtml_function_coverage=1 00:08:03.698 --rc genhtml_legend=1 00:08:03.698 --rc geninfo_all_blocks=1 00:08:03.698 --rc geninfo_unexecuted_blocks=1 00:08:03.698 00:08:03.698 ' 00:08:03.698 21:30:23 -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:08:03.698 21:30:23 -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:08:03.698 21:30:23 -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:08:03.698 21:30:23 -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:08:03.698 21:30:23 -- spdkcli/tcp.sh@19 -- # PORT=9998 00:08:03.698 21:30:23 -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:08:03.698 21:30:23 -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:08:03.698 21:30:23 -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:03.698 21:30:23 -- common/autotest_common.sh@10 -- # set +x 00:08:03.698 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:03.698 21:30:23 -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=61351 00:08:03.698 21:30:23 -- spdkcli/tcp.sh@27 -- # waitforlisten 61351 00:08:03.698 21:30:23 -- common/autotest_common.sh@829 -- # '[' -z 61351 ']' 00:08:03.698 21:30:23 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:03.698 21:30:23 -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:08:03.698 21:30:23 -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:03.698 21:30:23 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:03.698 21:30:23 -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:03.698 21:30:23 -- common/autotest_common.sh@10 -- # set +x 00:08:03.698 [2024-12-06 21:30:23.969305] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
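The spdkcli_tcp run starting here exercises the RPC server over TCP rather than over the UNIX socket directly: as the trace below shows, a socat process listens on port 9998 and relays each connection to /var/tmp/spdk.sock, and rpc.py then targets the TCP side. The bridge reduces to two commands (taken from the trace; the note about socat's fork option is an addition for the case where more than one TCP connection must be served):

  # Expose the UNIX-domain RPC socket on TCP 9998, then talk to it.
  # Add ",fork" to TCP-LISTEN to serve multiple connections per socat.
  socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock &
  socat_pid=$!
  # -r 100 retries the connection, -t 2 bounds each call, as in the trace
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 \
      -s 127.0.0.1 -p 9998 rpc_get_methods
  kill "$socat_pid"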
00:08:03.698 [2024-12-06 21:30:23.969521] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61351 ] 00:08:03.698 [2024-12-06 21:30:24.136849] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:08:03.956 [2024-12-06 21:30:24.319278] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:08:03.956 [2024-12-06 21:30:24.319634] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:03.956 [2024-12-06 21:30:24.319660] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:05.337 21:30:25 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:05.337 21:30:25 -- common/autotest_common.sh@862 -- # return 0 00:08:05.337 21:30:25 -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:08:05.337 21:30:25 -- spdkcli/tcp.sh@31 -- # socat_pid=61381 00:08:05.337 21:30:25 -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:08:05.595 [ 00:08:05.595 "spdk_get_version", 00:08:05.595 "rpc_get_methods", 00:08:05.595 "trace_get_info", 00:08:05.595 "trace_get_tpoint_group_mask", 00:08:05.595 "trace_disable_tpoint_group", 00:08:05.595 "trace_enable_tpoint_group", 00:08:05.595 "trace_clear_tpoint_mask", 00:08:05.595 "trace_set_tpoint_mask", 00:08:05.595 "framework_get_pci_devices", 00:08:05.595 "framework_get_config", 00:08:05.595 "framework_get_subsystems", 00:08:05.595 "iobuf_get_stats", 00:08:05.595 "iobuf_set_options", 00:08:05.595 "sock_set_default_impl", 00:08:05.595 "sock_impl_set_options", 00:08:05.595 "sock_impl_get_options", 00:08:05.595 "vmd_rescan", 00:08:05.595 "vmd_remove_device", 00:08:05.595 "vmd_enable", 00:08:05.595 "accel_get_stats", 00:08:05.595 "accel_set_options", 00:08:05.595 "accel_set_driver", 00:08:05.595 "accel_crypto_key_destroy", 00:08:05.595 "accel_crypto_keys_get", 00:08:05.595 "accel_crypto_key_create", 00:08:05.595 "accel_assign_opc", 00:08:05.595 "accel_get_module_info", 00:08:05.595 "accel_get_opc_assignments", 00:08:05.595 "notify_get_notifications", 00:08:05.595 "notify_get_types", 00:08:05.595 "bdev_get_histogram", 00:08:05.595 "bdev_enable_histogram", 00:08:05.595 "bdev_set_qos_limit", 00:08:05.595 "bdev_set_qd_sampling_period", 00:08:05.595 "bdev_get_bdevs", 00:08:05.596 "bdev_reset_iostat", 00:08:05.596 "bdev_get_iostat", 00:08:05.596 "bdev_examine", 00:08:05.596 "bdev_wait_for_examine", 00:08:05.596 "bdev_set_options", 00:08:05.596 "scsi_get_devices", 00:08:05.596 "thread_set_cpumask", 00:08:05.596 "framework_get_scheduler", 00:08:05.596 "framework_set_scheduler", 00:08:05.596 "framework_get_reactors", 00:08:05.596 "thread_get_io_channels", 00:08:05.596 "thread_get_pollers", 00:08:05.596 "thread_get_stats", 00:08:05.596 "framework_monitor_context_switch", 00:08:05.596 "spdk_kill_instance", 00:08:05.596 "log_enable_timestamps", 00:08:05.596 "log_get_flags", 00:08:05.596 "log_clear_flag", 00:08:05.596 "log_set_flag", 00:08:05.596 "log_get_level", 00:08:05.596 "log_set_level", 00:08:05.596 "log_get_print_level", 00:08:05.596 "log_set_print_level", 00:08:05.596 "framework_enable_cpumask_locks", 00:08:05.596 "framework_disable_cpumask_locks", 00:08:05.596 "framework_wait_init", 00:08:05.596 "framework_start_init", 00:08:05.596 "virtio_blk_create_transport", 00:08:05.596 "virtio_blk_get_transports", 
00:08:05.596 "vhost_controller_set_coalescing", 00:08:05.596 "vhost_get_controllers", 00:08:05.596 "vhost_delete_controller", 00:08:05.596 "vhost_create_blk_controller", 00:08:05.596 "vhost_scsi_controller_remove_target", 00:08:05.596 "vhost_scsi_controller_add_target", 00:08:05.596 "vhost_start_scsi_controller", 00:08:05.596 "vhost_create_scsi_controller", 00:08:05.596 "ublk_recover_disk", 00:08:05.596 "ublk_get_disks", 00:08:05.596 "ublk_stop_disk", 00:08:05.596 "ublk_start_disk", 00:08:05.596 "ublk_destroy_target", 00:08:05.596 "ublk_create_target", 00:08:05.596 "nbd_get_disks", 00:08:05.596 "nbd_stop_disk", 00:08:05.596 "nbd_start_disk", 00:08:05.596 "env_dpdk_get_mem_stats", 00:08:05.596 "nvmf_subsystem_get_listeners", 00:08:05.596 "nvmf_subsystem_get_qpairs", 00:08:05.596 "nvmf_subsystem_get_controllers", 00:08:05.596 "nvmf_get_stats", 00:08:05.596 "nvmf_get_transports", 00:08:05.596 "nvmf_create_transport", 00:08:05.596 "nvmf_get_targets", 00:08:05.596 "nvmf_delete_target", 00:08:05.596 "nvmf_create_target", 00:08:05.596 "nvmf_subsystem_allow_any_host", 00:08:05.596 "nvmf_subsystem_remove_host", 00:08:05.596 "nvmf_subsystem_add_host", 00:08:05.596 "nvmf_subsystem_remove_ns", 00:08:05.596 "nvmf_subsystem_add_ns", 00:08:05.596 "nvmf_subsystem_listener_set_ana_state", 00:08:05.596 "nvmf_discovery_get_referrals", 00:08:05.596 "nvmf_discovery_remove_referral", 00:08:05.596 "nvmf_discovery_add_referral", 00:08:05.596 "nvmf_subsystem_remove_listener", 00:08:05.596 "nvmf_subsystem_add_listener", 00:08:05.596 "nvmf_delete_subsystem", 00:08:05.596 "nvmf_create_subsystem", 00:08:05.596 "nvmf_get_subsystems", 00:08:05.596 "nvmf_set_crdt", 00:08:05.596 "nvmf_set_config", 00:08:05.596 "nvmf_set_max_subsystems", 00:08:05.596 "iscsi_set_options", 00:08:05.596 "iscsi_get_auth_groups", 00:08:05.596 "iscsi_auth_group_remove_secret", 00:08:05.596 "iscsi_auth_group_add_secret", 00:08:05.596 "iscsi_delete_auth_group", 00:08:05.596 "iscsi_create_auth_group", 00:08:05.596 "iscsi_set_discovery_auth", 00:08:05.596 "iscsi_get_options", 00:08:05.596 "iscsi_target_node_request_logout", 00:08:05.596 "iscsi_target_node_set_redirect", 00:08:05.596 "iscsi_target_node_set_auth", 00:08:05.596 "iscsi_target_node_add_lun", 00:08:05.596 "iscsi_get_connections", 00:08:05.596 "iscsi_portal_group_set_auth", 00:08:05.596 "iscsi_start_portal_group", 00:08:05.596 "iscsi_delete_portal_group", 00:08:05.596 "iscsi_create_portal_group", 00:08:05.596 "iscsi_get_portal_groups", 00:08:05.596 "iscsi_delete_target_node", 00:08:05.596 "iscsi_target_node_remove_pg_ig_maps", 00:08:05.596 "iscsi_target_node_add_pg_ig_maps", 00:08:05.596 "iscsi_create_target_node", 00:08:05.596 "iscsi_get_target_nodes", 00:08:05.596 "iscsi_delete_initiator_group", 00:08:05.596 "iscsi_initiator_group_remove_initiators", 00:08:05.596 "iscsi_initiator_group_add_initiators", 00:08:05.596 "iscsi_create_initiator_group", 00:08:05.596 "iscsi_get_initiator_groups", 00:08:05.596 "iaa_scan_accel_module", 00:08:05.596 "dsa_scan_accel_module", 00:08:05.596 "ioat_scan_accel_module", 00:08:05.596 "accel_error_inject_error", 00:08:05.596 "bdev_iscsi_delete", 00:08:05.596 "bdev_iscsi_create", 00:08:05.596 "bdev_iscsi_set_options", 00:08:05.596 "bdev_virtio_attach_controller", 00:08:05.596 "bdev_virtio_scsi_get_devices", 00:08:05.596 "bdev_virtio_detach_controller", 00:08:05.596 "bdev_virtio_blk_set_hotplug", 00:08:05.596 "bdev_ftl_set_property", 00:08:05.596 "bdev_ftl_get_properties", 00:08:05.596 "bdev_ftl_get_stats", 00:08:05.596 "bdev_ftl_unmap", 00:08:05.596 
"bdev_ftl_unload", 00:08:05.596 "bdev_ftl_delete", 00:08:05.596 "bdev_ftl_load", 00:08:05.596 "bdev_ftl_create", 00:08:05.596 "bdev_aio_delete", 00:08:05.596 "bdev_aio_rescan", 00:08:05.596 "bdev_aio_create", 00:08:05.596 "blobfs_create", 00:08:05.596 "blobfs_detect", 00:08:05.596 "blobfs_set_cache_size", 00:08:05.596 "bdev_zone_block_delete", 00:08:05.596 "bdev_zone_block_create", 00:08:05.596 "bdev_delay_delete", 00:08:05.596 "bdev_delay_create", 00:08:05.596 "bdev_delay_update_latency", 00:08:05.596 "bdev_split_delete", 00:08:05.596 "bdev_split_create", 00:08:05.596 "bdev_error_inject_error", 00:08:05.596 "bdev_error_delete", 00:08:05.596 "bdev_error_create", 00:08:05.596 "bdev_raid_set_options", 00:08:05.596 "bdev_raid_remove_base_bdev", 00:08:05.596 "bdev_raid_add_base_bdev", 00:08:05.596 "bdev_raid_delete", 00:08:05.596 "bdev_raid_create", 00:08:05.596 "bdev_raid_get_bdevs", 00:08:05.596 "bdev_lvol_grow_lvstore", 00:08:05.596 "bdev_lvol_get_lvols", 00:08:05.596 "bdev_lvol_get_lvstores", 00:08:05.596 "bdev_lvol_delete", 00:08:05.596 "bdev_lvol_set_read_only", 00:08:05.596 "bdev_lvol_resize", 00:08:05.596 "bdev_lvol_decouple_parent", 00:08:05.596 "bdev_lvol_inflate", 00:08:05.596 "bdev_lvol_rename", 00:08:05.596 "bdev_lvol_clone_bdev", 00:08:05.596 "bdev_lvol_clone", 00:08:05.596 "bdev_lvol_snapshot", 00:08:05.596 "bdev_lvol_create", 00:08:05.596 "bdev_lvol_delete_lvstore", 00:08:05.596 "bdev_lvol_rename_lvstore", 00:08:05.596 "bdev_lvol_create_lvstore", 00:08:05.596 "bdev_passthru_delete", 00:08:05.596 "bdev_passthru_create", 00:08:05.596 "bdev_nvme_cuse_unregister", 00:08:05.596 "bdev_nvme_cuse_register", 00:08:05.596 "bdev_opal_new_user", 00:08:05.596 "bdev_opal_set_lock_state", 00:08:05.596 "bdev_opal_delete", 00:08:05.596 "bdev_opal_get_info", 00:08:05.596 "bdev_opal_create", 00:08:05.596 "bdev_nvme_opal_revert", 00:08:05.596 "bdev_nvme_opal_init", 00:08:05.596 "bdev_nvme_send_cmd", 00:08:05.596 "bdev_nvme_get_path_iostat", 00:08:05.596 "bdev_nvme_get_mdns_discovery_info", 00:08:05.596 "bdev_nvme_stop_mdns_discovery", 00:08:05.596 "bdev_nvme_start_mdns_discovery", 00:08:05.596 "bdev_nvme_set_multipath_policy", 00:08:05.596 "bdev_nvme_set_preferred_path", 00:08:05.596 "bdev_nvme_get_io_paths", 00:08:05.596 "bdev_nvme_remove_error_injection", 00:08:05.596 "bdev_nvme_add_error_injection", 00:08:05.596 "bdev_nvme_get_discovery_info", 00:08:05.596 "bdev_nvme_stop_discovery", 00:08:05.596 "bdev_nvme_start_discovery", 00:08:05.596 "bdev_nvme_get_controller_health_info", 00:08:05.596 "bdev_nvme_disable_controller", 00:08:05.596 "bdev_nvme_enable_controller", 00:08:05.596 "bdev_nvme_reset_controller", 00:08:05.596 "bdev_nvme_get_transport_statistics", 00:08:05.596 "bdev_nvme_apply_firmware", 00:08:05.596 "bdev_nvme_detach_controller", 00:08:05.596 "bdev_nvme_get_controllers", 00:08:05.596 "bdev_nvme_attach_controller", 00:08:05.596 "bdev_nvme_set_hotplug", 00:08:05.596 "bdev_nvme_set_options", 00:08:05.596 "bdev_null_resize", 00:08:05.596 "bdev_null_delete", 00:08:05.596 "bdev_null_create", 00:08:05.596 "bdev_malloc_delete", 00:08:05.596 "bdev_malloc_create" 00:08:05.596 ] 00:08:05.596 21:30:25 -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:08:05.596 21:30:25 -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:05.596 21:30:25 -- common/autotest_common.sh@10 -- # set +x 00:08:05.596 21:30:25 -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:08:05.596 21:30:25 -- spdkcli/tcp.sh@38 -- # killprocess 61351 00:08:05.596 21:30:25 -- common/autotest_common.sh@936 -- # '[' 
-z 61351 ']' 00:08:05.596 21:30:25 -- common/autotest_common.sh@940 -- # kill -0 61351 00:08:05.596 21:30:25 -- common/autotest_common.sh@941 -- # uname 00:08:05.596 21:30:25 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:08:05.596 21:30:25 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 61351 00:08:05.596 killing process with pid 61351 00:08:05.596 21:30:25 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:08:05.596 21:30:25 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:08:05.596 21:30:25 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 61351' 00:08:05.596 21:30:25 -- common/autotest_common.sh@955 -- # kill 61351 00:08:05.596 21:30:25 -- common/autotest_common.sh@960 -- # wait 61351 00:08:08.130 ************************************ 00:08:08.130 END TEST spdkcli_tcp 00:08:08.130 ************************************ 00:08:08.130 00:08:08.130 real 0m4.325s 00:08:08.130 user 0m8.019s 00:08:08.130 sys 0m0.564s 00:08:08.130 21:30:28 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:08.130 21:30:28 -- common/autotest_common.sh@10 -- # set +x 00:08:08.130 21:30:28 -- spdk/autotest.sh@173 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:08:08.130 21:30:28 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:08.130 21:30:28 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:08.130 21:30:28 -- common/autotest_common.sh@10 -- # set +x 00:08:08.130 ************************************ 00:08:08.130 START TEST dpdk_mem_utility 00:08:08.130 ************************************ 00:08:08.130 21:30:28 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:08:08.130 * Looking for test storage... 00:08:08.130 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:08:08.130 21:30:28 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:08:08.130 21:30:28 -- common/autotest_common.sh@1690 -- # lcov --version 00:08:08.130 21:30:28 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:08:08.130 21:30:28 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:08:08.130 21:30:28 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:08:08.130 21:30:28 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:08:08.130 21:30:28 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:08:08.130 21:30:28 -- scripts/common.sh@335 -- # IFS=.-: 00:08:08.130 21:30:28 -- scripts/common.sh@335 -- # read -ra ver1 00:08:08.130 21:30:28 -- scripts/common.sh@336 -- # IFS=.-: 00:08:08.130 21:30:28 -- scripts/common.sh@336 -- # read -ra ver2 00:08:08.130 21:30:28 -- scripts/common.sh@337 -- # local 'op=<' 00:08:08.130 21:30:28 -- scripts/common.sh@339 -- # ver1_l=2 00:08:08.130 21:30:28 -- scripts/common.sh@340 -- # ver2_l=1 00:08:08.130 21:30:28 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:08:08.130 21:30:28 -- scripts/common.sh@343 -- # case "$op" in 00:08:08.130 21:30:28 -- scripts/common.sh@344 -- # : 1 00:08:08.130 21:30:28 -- scripts/common.sh@363 -- # (( v = 0 )) 00:08:08.130 21:30:28 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:08.130 21:30:28 -- scripts/common.sh@364 -- # decimal 1 00:08:08.130 21:30:28 -- scripts/common.sh@352 -- # local d=1 00:08:08.130 21:30:28 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:08.130 21:30:28 -- scripts/common.sh@354 -- # echo 1 00:08:08.130 21:30:28 -- scripts/common.sh@364 -- # ver1[v]=1 00:08:08.130 21:30:28 -- scripts/common.sh@365 -- # decimal 2 00:08:08.130 21:30:28 -- scripts/common.sh@352 -- # local d=2 00:08:08.130 21:30:28 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:08.130 21:30:28 -- scripts/common.sh@354 -- # echo 2 00:08:08.130 21:30:28 -- scripts/common.sh@365 -- # ver2[v]=2 00:08:08.130 21:30:28 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:08:08.130 21:30:28 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:08:08.130 21:30:28 -- scripts/common.sh@367 -- # return 0 00:08:08.130 21:30:28 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:08.130 21:30:28 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:08:08.130 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:08.130 --rc genhtml_branch_coverage=1 00:08:08.130 --rc genhtml_function_coverage=1 00:08:08.130 --rc genhtml_legend=1 00:08:08.130 --rc geninfo_all_blocks=1 00:08:08.130 --rc geninfo_unexecuted_blocks=1 00:08:08.130 00:08:08.130 ' 00:08:08.130 21:30:28 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:08:08.130 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:08.130 --rc genhtml_branch_coverage=1 00:08:08.130 --rc genhtml_function_coverage=1 00:08:08.130 --rc genhtml_legend=1 00:08:08.130 --rc geninfo_all_blocks=1 00:08:08.130 --rc geninfo_unexecuted_blocks=1 00:08:08.130 00:08:08.130 ' 00:08:08.130 21:30:28 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:08:08.130 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:08.130 --rc genhtml_branch_coverage=1 00:08:08.130 --rc genhtml_function_coverage=1 00:08:08.130 --rc genhtml_legend=1 00:08:08.130 --rc geninfo_all_blocks=1 00:08:08.130 --rc geninfo_unexecuted_blocks=1 00:08:08.130 00:08:08.130 ' 00:08:08.130 21:30:28 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:08:08.130 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:08.131 --rc genhtml_branch_coverage=1 00:08:08.131 --rc genhtml_function_coverage=1 00:08:08.131 --rc genhtml_legend=1 00:08:08.131 --rc geninfo_all_blocks=1 00:08:08.131 --rc geninfo_unexecuted_blocks=1 00:08:08.131 00:08:08.131 ' 00:08:08.131 21:30:28 -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:08:08.131 21:30:28 -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=61480 00:08:08.131 21:30:28 -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 61480 00:08:08.131 21:30:28 -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:08:08.131 21:30:28 -- common/autotest_common.sh@829 -- # '[' -z 61480 ']' 00:08:08.131 21:30:28 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:08.131 21:30:28 -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:08.131 21:30:28 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:08.131 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
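The dpdk_mem_utility test about to run below is a two-step affair: env_dpdk_get_mem_stats makes the target write its DPDK memory state to a dump file (/tmp/spdk_mem_dump.txt, per the JSON reply in the trace), and scripts/dpdk_mem_info.py then turns that dump into the heap/mempool/memzone summary shown. Condensed to its commands, with the -m 0 reading hedged as a guess from the output that follows:

  # Ask the running target to dump its DPDK memory state
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats
  # Summarize the dump; a plain run prints heap/mempool/memzone totals
  /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py
  # -m 0 appears to expand heap 0 into its element lists, judging by the trace
  /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0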
00:08:08.131 21:30:28 -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:08.131 21:30:28 -- common/autotest_common.sh@10 -- # set +x 00:08:08.131 [2024-12-06 21:30:28.318908] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:08:08.131 [2024-12-06 21:30:28.319100] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61480 ] 00:08:08.131 [2024-12-06 21:30:28.487533] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:08.389 [2024-12-06 21:30:28.673592] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:08:08.389 [2024-12-06 21:30:28.673856] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:09.770 21:30:29 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:09.770 21:30:29 -- common/autotest_common.sh@862 -- # return 0 00:08:09.770 21:30:29 -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:08:09.770 21:30:29 -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:08:09.770 21:30:29 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:09.770 21:30:29 -- common/autotest_common.sh@10 -- # set +x 00:08:09.770 { 00:08:09.770 "filename": "/tmp/spdk_mem_dump.txt" 00:08:09.770 } 00:08:09.770 21:30:29 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:09.770 21:30:29 -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:08:09.770 DPDK memory size 820.000000 MiB in 1 heap(s) 00:08:09.770 1 heaps totaling size 820.000000 MiB 00:08:09.770 size: 820.000000 MiB heap id: 0 00:08:09.770 end heaps---------- 00:08:09.770 8 mempools totaling size 598.116089 MiB 00:08:09.770 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:08:09.770 size: 158.602051 MiB name: PDU_data_out_Pool 00:08:09.770 size: 84.521057 MiB name: bdev_io_61480 00:08:09.770 size: 51.011292 MiB name: evtpool_61480 00:08:09.770 size: 50.003479 MiB name: msgpool_61480 00:08:09.770 size: 21.763794 MiB name: PDU_Pool 00:08:09.770 size: 19.513306 MiB name: SCSI_TASK_Pool 00:08:09.770 size: 0.026123 MiB name: Session_Pool 00:08:09.770 end mempools------- 00:08:09.770 6 memzones totaling size 4.142822 MiB 00:08:09.770 size: 1.000366 MiB name: RG_ring_0_61480 00:08:09.770 size: 1.000366 MiB name: RG_ring_1_61480 00:08:09.770 size: 1.000366 MiB name: RG_ring_4_61480 00:08:09.770 size: 1.000366 MiB name: RG_ring_5_61480 00:08:09.770 size: 0.125366 MiB name: RG_ring_2_61480 00:08:09.770 size: 0.015991 MiB name: RG_ring_3_61480 00:08:09.770 end memzones------- 00:08:09.770 21:30:30 -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:08:09.770 heap id: 0 total size: 820.000000 MiB number of busy elements: 303 number of free elements: 18 00:08:09.770 list of free elements. 
size: 18.450806 MiB 00:08:09.770 element at address: 0x200000400000 with size: 1.999451 MiB 00:08:09.770 element at address: 0x200000800000 with size: 1.996887 MiB 00:08:09.770 element at address: 0x200007000000 with size: 1.995972 MiB 00:08:09.770 element at address: 0x20000b200000 with size: 1.995972 MiB 00:08:09.770 element at address: 0x200019100040 with size: 0.999939 MiB 00:08:09.770 element at address: 0x200019500040 with size: 0.999939 MiB 00:08:09.770 element at address: 0x200019600000 with size: 0.999084 MiB 00:08:09.770 element at address: 0x200003e00000 with size: 0.996094 MiB 00:08:09.770 element at address: 0x200032200000 with size: 0.994324 MiB 00:08:09.770 element at address: 0x200018e00000 with size: 0.959656 MiB 00:08:09.770 element at address: 0x200019900040 with size: 0.936401 MiB 00:08:09.770 element at address: 0x200000200000 with size: 0.829224 MiB 00:08:09.770 element at address: 0x20001b000000 with size: 0.564148 MiB 00:08:09.770 element at address: 0x200019200000 with size: 0.487976 MiB 00:08:09.770 element at address: 0x200019a00000 with size: 0.485413 MiB 00:08:09.770 element at address: 0x200013800000 with size: 0.467896 MiB 00:08:09.770 element at address: 0x200028400000 with size: 0.390442 MiB 00:08:09.770 element at address: 0x200003a00000 with size: 0.351990 MiB 00:08:09.770 list of standard malloc elements. size: 199.284790 MiB 00:08:09.770 element at address: 0x20000b3fef80 with size: 132.000183 MiB 00:08:09.770 element at address: 0x2000071fef80 with size: 64.000183 MiB 00:08:09.770 element at address: 0x200018ffff80 with size: 1.000183 MiB 00:08:09.770 element at address: 0x2000193fff80 with size: 1.000183 MiB 00:08:09.770 element at address: 0x2000197fff80 with size: 1.000183 MiB 00:08:09.770 element at address: 0x2000003d9e80 with size: 0.140808 MiB 00:08:09.770 element at address: 0x2000199eff40 with size: 0.062683 MiB 00:08:09.770 element at address: 0x2000003fdf40 with size: 0.007996 MiB 00:08:09.770 element at address: 0x20000b1ff040 with size: 0.000427 MiB 00:08:09.770 element at address: 0x2000199efdc0 with size: 0.000366 MiB 00:08:09.770 element at address: 0x2000137ff040 with size: 0.000305 MiB 00:08:09.770 element at address: 0x2000002d4480 with size: 0.000244 MiB 00:08:09.770 element at address: 0x2000002d4580 with size: 0.000244 MiB 00:08:09.770 element at address: 0x2000002d4680 with size: 0.000244 MiB 00:08:09.770 element at address: 0x2000002d4780 with size: 0.000244 MiB 00:08:09.770 element at address: 0x2000002d4880 with size: 0.000244 MiB 00:08:09.770 element at address: 0x2000002d4980 with size: 0.000244 MiB 00:08:09.770 element at address: 0x2000002d4a80 with size: 0.000244 MiB 00:08:09.770 element at address: 0x2000002d4b80 with size: 0.000244 MiB 00:08:09.770 element at address: 0x2000002d4c80 with size: 0.000244 MiB 00:08:09.770 element at address: 0x2000002d4d80 with size: 0.000244 MiB 00:08:09.770 element at address: 0x2000002d4e80 with size: 0.000244 MiB 00:08:09.770 element at address: 0x2000002d4f80 with size: 0.000244 MiB 00:08:09.770 element at address: 0x2000002d5080 with size: 0.000244 MiB 00:08:09.770 element at address: 0x2000002d5180 with size: 0.000244 MiB 00:08:09.770 element at address: 0x2000002d5280 with size: 0.000244 MiB 00:08:09.770 element at address: 0x2000002d5380 with size: 0.000244 MiB 00:08:09.770 element at address: 0x2000002d5480 with size: 0.000244 MiB 00:08:09.770 element at address: 0x2000002d5580 with size: 0.000244 MiB 00:08:09.770 element at address: 0x2000002d5680 with size: 0.000244 MiB 
00:08:09.770 element at address: 0x2000002d5780 with size: 0.000244 MiB 00:08:09.770 element at address: 0x2000002d5880 with size: 0.000244 MiB 00:08:09.770 element at address: 0x2000002d5980 with size: 0.000244 MiB 00:08:09.770 element at address: 0x2000002d5a80 with size: 0.000244 MiB 00:08:09.770 element at address: 0x2000002d5b80 with size: 0.000244 MiB 00:08:09.770 element at address: 0x2000002d5c80 with size: 0.000244 MiB 00:08:09.770 element at address: 0x2000002d5d80 with size: 0.000244 MiB 00:08:09.770 element at address: 0x2000002d5e80 with size: 0.000244 MiB 00:08:09.770 element at address: 0x2000002d6100 with size: 0.000244 MiB 00:08:09.770 element at address: 0x2000002d6200 with size: 0.000244 MiB 00:08:09.770 element at address: 0x2000002d6300 with size: 0.000244 MiB 00:08:09.770 element at address: 0x2000002d6400 with size: 0.000244 MiB 00:08:09.770 element at address: 0x2000002d6500 with size: 0.000244 MiB 00:08:09.770 element at address: 0x2000002d6600 with size: 0.000244 MiB 00:08:09.771 element at address: 0x2000002d6700 with size: 0.000244 MiB 00:08:09.771 element at address: 0x2000002d6800 with size: 0.000244 MiB 00:08:09.771 element at address: 0x2000002d6900 with size: 0.000244 MiB 00:08:09.771 element at address: 0x2000002d6a00 with size: 0.000244 MiB 00:08:09.771 element at address: 0x2000002d6b00 with size: 0.000244 MiB 00:08:09.771 element at address: 0x2000002d6c00 with size: 0.000244 MiB 00:08:09.771 element at address: 0x2000002d6d00 with size: 0.000244 MiB 00:08:09.771 element at address: 0x2000002d6e00 with size: 0.000244 MiB 00:08:09.771 element at address: 0x2000002d6f00 with size: 0.000244 MiB 00:08:09.771 element at address: 0x2000002d7000 with size: 0.000244 MiB 00:08:09.771 element at address: 0x2000002d7100 with size: 0.000244 MiB 00:08:09.771 element at address: 0x2000002d7200 with size: 0.000244 MiB 00:08:09.771 element at address: 0x2000002d7300 with size: 0.000244 MiB 00:08:09.771 element at address: 0x2000002d7400 with size: 0.000244 MiB 00:08:09.771 element at address: 0x2000002d7500 with size: 0.000244 MiB 00:08:09.771 element at address: 0x2000002d7600 with size: 0.000244 MiB 00:08:09.771 element at address: 0x2000002d7700 with size: 0.000244 MiB 00:08:09.771 element at address: 0x2000002d7800 with size: 0.000244 MiB 00:08:09.771 element at address: 0x2000002d7900 with size: 0.000244 MiB 00:08:09.771 element at address: 0x2000002d7a00 with size: 0.000244 MiB 00:08:09.771 element at address: 0x2000002d7b00 with size: 0.000244 MiB 00:08:09.771 element at address: 0x2000003d9d80 with size: 0.000244 MiB 00:08:09.771 element at address: 0x200003a5a1c0 with size: 0.000244 MiB 00:08:09.771 element at address: 0x200003a5a2c0 with size: 0.000244 MiB 00:08:09.771 element at address: 0x200003a5a3c0 with size: 0.000244 MiB 00:08:09.771 element at address: 0x200003a5a4c0 with size: 0.000244 MiB 00:08:09.771 element at address: 0x200003a5a5c0 with size: 0.000244 MiB 00:08:09.771 element at address: 0x200003a5a6c0 with size: 0.000244 MiB 00:08:09.771 element at address: 0x200003a5a7c0 with size: 0.000244 MiB 00:08:09.771 element at address: 0x200003a5a8c0 with size: 0.000244 MiB 00:08:09.771 element at address: 0x200003a5a9c0 with size: 0.000244 MiB 00:08:09.771 element at address: 0x200003a5aac0 with size: 0.000244 MiB 00:08:09.771 element at address: 0x200003a5abc0 with size: 0.000244 MiB 00:08:09.771 element at address: 0x200003a5acc0 with size: 0.000244 MiB 00:08:09.771 element at address: 0x200003a5adc0 with size: 0.000244 MiB 00:08:09.771 element at 
address: 0x200003a5aec0 with size: 0.000244 MiB 00:08:09.771 element at address: 0x200003a5afc0 with size: 0.000244 MiB 00:08:09.771 element at address: 0x200003a5b0c0 with size: 0.000244 MiB 00:08:09.771 element at address: 0x200003a5b1c0 with size: 0.000244 MiB 00:08:09.771 element at address: 0x200003aff980 with size: 0.000244 MiB 00:08:09.771 element at address: 0x200003affa80 with size: 0.000244 MiB 00:08:09.771 element at address: 0x200003eff000 with size: 0.000244 MiB 00:08:09.771 element at address: 0x20000b1ff200 with size: 0.000244 MiB 00:08:09.771 element at address: 0x20000b1ff300 with size: 0.000244 MiB 00:08:09.771 element at address: 0x20000b1ff400 with size: 0.000244 MiB 00:08:09.771 element at address: 0x20000b1ff500 with size: 0.000244 MiB 00:08:09.771 element at address: 0x20000b1ff600 with size: 0.000244 MiB 00:08:09.771 element at address: 0x20000b1ff700 with size: 0.000244 MiB 00:08:09.771 element at address: 0x20000b1ff800 with size: 0.000244 MiB 00:08:09.771 element at address: 0x20000b1ff900 with size: 0.000244 MiB 00:08:09.771 element at address: 0x20000b1ffa00 with size: 0.000244 MiB 00:08:09.771 element at address: 0x20000b1ffb00 with size: 0.000244 MiB 00:08:09.771 element at address: 0x20000b1ffc00 with size: 0.000244 MiB 00:08:09.771 element at address: 0x20000b1ffd00 with size: 0.000244 MiB 00:08:09.771 element at address: 0x20000b1ffe00 with size: 0.000244 MiB 00:08:09.771 element at address: 0x20000b1fff00 with size: 0.000244 MiB 00:08:09.771 element at address: 0x2000137ff180 with size: 0.000244 MiB 00:08:09.771 element at address: 0x2000137ff280 with size: 0.000244 MiB 00:08:09.771 element at address: 0x2000137ff380 with size: 0.000244 MiB 00:08:09.771 element at address: 0x2000137ff480 with size: 0.000244 MiB 00:08:09.771 element at address: 0x2000137ff580 with size: 0.000244 MiB 00:08:09.771 element at address: 0x2000137ff680 with size: 0.000244 MiB 00:08:09.771 element at address: 0x2000137ff780 with size: 0.000244 MiB 00:08:09.771 element at address: 0x2000137ff880 with size: 0.000244 MiB 00:08:09.771 element at address: 0x2000137ff980 with size: 0.000244 MiB 00:08:09.771 element at address: 0x2000137ffa80 with size: 0.000244 MiB 00:08:09.771 element at address: 0x2000137ffb80 with size: 0.000244 MiB 00:08:09.771 element at address: 0x2000137ffc80 with size: 0.000244 MiB 00:08:09.771 element at address: 0x2000137fff00 with size: 0.000244 MiB 00:08:09.771 element at address: 0x200013877c80 with size: 0.000244 MiB 00:08:09.771 element at address: 0x200013877d80 with size: 0.000244 MiB 00:08:09.771 element at address: 0x200013877e80 with size: 0.000244 MiB 00:08:09.771 element at address: 0x200013877f80 with size: 0.000244 MiB 00:08:09.771 element at address: 0x200013878080 with size: 0.000244 MiB 00:08:09.771 element at address: 0x200013878180 with size: 0.000244 MiB 00:08:09.771 element at address: 0x200013878280 with size: 0.000244 MiB 00:08:09.771 element at address: 0x200013878380 with size: 0.000244 MiB 00:08:09.771 element at address: 0x200013878480 with size: 0.000244 MiB 00:08:09.771 element at address: 0x200013878580 with size: 0.000244 MiB 00:08:09.771 element at address: 0x2000138f88c0 with size: 0.000244 MiB 00:08:09.771 element at address: 0x200018efdd00 with size: 0.000244 MiB 00:08:09.771 element at address: 0x20001927cec0 with size: 0.000244 MiB 00:08:09.771 element at address: 0x20001927cfc0 with size: 0.000244 MiB 00:08:09.771 element at address: 0x20001927d0c0 with size: 0.000244 MiB 00:08:09.771 element at address: 0x20001927d1c0 
with size: 0.000244 MiB 00:08:09.771 element at address: 0x20001927d2c0 with size: 0.000244 MiB 00:08:09.771 element at address: 0x20001927d3c0 with size: 0.000244 MiB 00:08:09.771 element at address: 0x20001927d4c0 with size: 0.000244 MiB 00:08:09.771 element at address: 0x20001927d5c0 with size: 0.000244 MiB 00:08:09.771 element at address: 0x20001927d6c0 with size: 0.000244 MiB 00:08:09.771 element at address: 0x20001927d7c0 with size: 0.000244 MiB 00:08:09.771 element at address: 0x20001927d8c0 with size: 0.000244 MiB 00:08:09.771 element at address: 0x20001927d9c0 with size: 0.000244 MiB 00:08:09.771 element at address: 0x2000192fdd00 with size: 0.000244 MiB 00:08:09.771 element at address: 0x2000196ffc40 with size: 0.000244 MiB 00:08:09.771 element at address: 0x2000199efbc0 with size: 0.000244 MiB 00:08:09.771 element at address: 0x2000199efcc0 with size: 0.000244 MiB 00:08:09.771 element at address: 0x200019abc680 with size: 0.000244 MiB 00:08:09.771 element at address: 0x20001b0906c0 with size: 0.000244 MiB 00:08:09.771 element at address: 0x20001b0907c0 with size: 0.000244 MiB 00:08:09.771 element at address: 0x20001b0908c0 with size: 0.000244 MiB 00:08:09.771 element at address: 0x20001b0909c0 with size: 0.000244 MiB 00:08:09.771 element at address: 0x20001b090ac0 with size: 0.000244 MiB 00:08:09.771 element at address: 0x20001b090bc0 with size: 0.000244 MiB 00:08:09.771 element at address: 0x20001b090cc0 with size: 0.000244 MiB 00:08:09.771 element at address: 0x20001b090dc0 with size: 0.000244 MiB 00:08:09.771 element at address: 0x20001b090ec0 with size: 0.000244 MiB 00:08:09.771 element at address: 0x20001b090fc0 with size: 0.000244 MiB 00:08:09.771 element at address: 0x20001b0910c0 with size: 0.000244 MiB 00:08:09.771 element at address: 0x20001b0911c0 with size: 0.000244 MiB 00:08:09.772 element at address: 0x20001b0912c0 with size: 0.000244 MiB 00:08:09.772 element at address: 0x20001b0913c0 with size: 0.000244 MiB 00:08:09.772 element at address: 0x20001b0914c0 with size: 0.000244 MiB 00:08:09.772 element at address: 0x20001b0915c0 with size: 0.000244 MiB 00:08:09.772 element at address: 0x20001b0916c0 with size: 0.000244 MiB 00:08:09.772 element at address: 0x20001b0917c0 with size: 0.000244 MiB 00:08:09.772 element at address: 0x20001b0918c0 with size: 0.000244 MiB 00:08:09.772 element at address: 0x20001b0919c0 with size: 0.000244 MiB 00:08:09.772 element at address: 0x20001b091ac0 with size: 0.000244 MiB 00:08:09.772 element at address: 0x20001b091bc0 with size: 0.000244 MiB 00:08:09.772 element at address: 0x20001b091cc0 with size: 0.000244 MiB 00:08:09.772 element at address: 0x20001b091dc0 with size: 0.000244 MiB 00:08:09.772 element at address: 0x20001b091ec0 with size: 0.000244 MiB 00:08:09.772 element at address: 0x20001b091fc0 with size: 0.000244 MiB 00:08:09.772 element at address: 0x20001b0920c0 with size: 0.000244 MiB 00:08:09.772 element at address: 0x20001b0921c0 with size: 0.000244 MiB 00:08:09.772 element at address: 0x20001b0922c0 with size: 0.000244 MiB 00:08:09.772 element at address: 0x20001b0923c0 with size: 0.000244 MiB 00:08:09.772 element at address: 0x20001b0924c0 with size: 0.000244 MiB 00:08:09.772 element at address: 0x20001b0925c0 with size: 0.000244 MiB 00:08:09.772 element at address: 0x20001b0926c0 with size: 0.000244 MiB 00:08:09.772 element at address: 0x20001b0927c0 with size: 0.000244 MiB 00:08:09.772 element at address: 0x20001b0928c0 with size: 0.000244 MiB 00:08:09.772 element at address: 0x20001b0929c0 with size: 0.000244 MiB 
00:08:09.772 element at address: 0x20001b092ac0 with size: 0.000244 MiB 00:08:09.772 element at address: 0x20001b092bc0 with size: 0.000244 MiB 00:08:09.772 element at address: 0x20001b092cc0 with size: 0.000244 MiB 00:08:09.772 element at address: 0x20001b092dc0 with size: 0.000244 MiB 00:08:09.772 element at address: 0x20001b092ec0 with size: 0.000244 MiB 00:08:09.772 element at address: 0x20001b092fc0 with size: 0.000244 MiB 00:08:09.772 element at address: 0x20001b0930c0 with size: 0.000244 MiB 00:08:09.772 element at address: 0x20001b0931c0 with size: 0.000244 MiB 00:08:09.772 element at address: 0x20001b0932c0 with size: 0.000244 MiB 00:08:09.772 element at address: 0x20001b0933c0 with size: 0.000244 MiB 00:08:09.772 element at address: 0x20001b0934c0 with size: 0.000244 MiB 00:08:09.772 element at address: 0x20001b0935c0 with size: 0.000244 MiB 00:08:09.772 element at address: 0x20001b0936c0 with size: 0.000244 MiB 00:08:09.772 element at address: 0x20001b0937c0 with size: 0.000244 MiB 00:08:09.772 element at address: 0x20001b0938c0 with size: 0.000244 MiB 00:08:09.772 element at address: 0x20001b0939c0 with size: 0.000244 MiB 00:08:09.772 element at address: 0x20001b093ac0 with size: 0.000244 MiB 00:08:09.772 element at address: 0x20001b093bc0 with size: 0.000244 MiB 00:08:09.772 element at address: 0x20001b093cc0 with size: 0.000244 MiB 00:08:09.772 element at address: 0x20001b093dc0 with size: 0.000244 MiB 00:08:09.772 element at address: 0x20001b093ec0 with size: 0.000244 MiB 00:08:09.772 element at address: 0x20001b093fc0 with size: 0.000244 MiB 00:08:09.772 element at address: 0x20001b0940c0 with size: 0.000244 MiB 00:08:09.772 element at address: 0x20001b0941c0 with size: 0.000244 MiB 00:08:09.772 element at address: 0x20001b0942c0 with size: 0.000244 MiB 00:08:09.772 element at address: 0x20001b0943c0 with size: 0.000244 MiB 00:08:09.772 element at address: 0x20001b0944c0 with size: 0.000244 MiB 00:08:09.772 element at address: 0x20001b0945c0 with size: 0.000244 MiB 00:08:09.772 element at address: 0x20001b0946c0 with size: 0.000244 MiB 00:08:09.772 element at address: 0x20001b0947c0 with size: 0.000244 MiB 00:08:09.772 element at address: 0x20001b0948c0 with size: 0.000244 MiB 00:08:09.772 element at address: 0x20001b0949c0 with size: 0.000244 MiB 00:08:09.772 element at address: 0x20001b094ac0 with size: 0.000244 MiB 00:08:09.772 element at address: 0x20001b094bc0 with size: 0.000244 MiB 00:08:09.772 element at address: 0x20001b094cc0 with size: 0.000244 MiB 00:08:09.772 element at address: 0x20001b094dc0 with size: 0.000244 MiB 00:08:09.772 element at address: 0x20001b094ec0 with size: 0.000244 MiB 00:08:09.772 element at address: 0x20001b094fc0 with size: 0.000244 MiB 00:08:09.772 element at address: 0x20001b0950c0 with size: 0.000244 MiB 00:08:09.772 element at address: 0x20001b0951c0 with size: 0.000244 MiB 00:08:09.772 element at address: 0x20001b0952c0 with size: 0.000244 MiB 00:08:09.772 element at address: 0x20001b0953c0 with size: 0.000244 MiB 00:08:09.772 element at address: 0x200028463f40 with size: 0.000244 MiB 00:08:09.772 element at address: 0x200028464040 with size: 0.000244 MiB 00:08:09.772 element at address: 0x20002846ad00 with size: 0.000244 MiB 00:08:09.772 element at address: 0x20002846af80 with size: 0.000244 MiB 00:08:09.772 element at address: 0x20002846b080 with size: 0.000244 MiB 00:08:09.772 element at address: 0x20002846b180 with size: 0.000244 MiB 00:08:09.772 element at address: 0x20002846b280 with size: 0.000244 MiB 00:08:09.772 element at 
address: 0x20002846b380 with size: 0.000244 MiB 00:08:09.772 element at address: 0x20002846b480 with size: 0.000244 MiB 00:08:09.772 element at address: 0x20002846b580 with size: 0.000244 MiB 00:08:09.772 element at address: 0x20002846b680 with size: 0.000244 MiB 00:08:09.772 element at address: 0x20002846b780 with size: 0.000244 MiB 00:08:09.772 element at address: 0x20002846b880 with size: 0.000244 MiB 00:08:09.772 element at address: 0x20002846b980 with size: 0.000244 MiB 00:08:09.772 element at address: 0x20002846ba80 with size: 0.000244 MiB 00:08:09.772 element at address: 0x20002846bb80 with size: 0.000244 MiB 00:08:09.772 element at address: 0x20002846bc80 with size: 0.000244 MiB 00:08:09.772 element at address: 0x20002846bd80 with size: 0.000244 MiB 00:08:09.772 element at address: 0x20002846be80 with size: 0.000244 MiB 00:08:09.772 element at address: 0x20002846bf80 with size: 0.000244 MiB 00:08:09.772 element at address: 0x20002846c080 with size: 0.000244 MiB 00:08:09.772 element at address: 0x20002846c180 with size: 0.000244 MiB 00:08:09.772 element at address: 0x20002846c280 with size: 0.000244 MiB 00:08:09.772 element at address: 0x20002846c380 with size: 0.000244 MiB 00:08:09.772 element at address: 0x20002846c480 with size: 0.000244 MiB 00:08:09.772 element at address: 0x20002846c580 with size: 0.000244 MiB 00:08:09.772 element at address: 0x20002846c680 with size: 0.000244 MiB 00:08:09.772 element at address: 0x20002846c780 with size: 0.000244 MiB 00:08:09.772 element at address: 0x20002846c880 with size: 0.000244 MiB 00:08:09.772 element at address: 0x20002846c980 with size: 0.000244 MiB 00:08:09.772 element at address: 0x20002846ca80 with size: 0.000244 MiB 00:08:09.772 element at address: 0x20002846cb80 with size: 0.000244 MiB 00:08:09.772 element at address: 0x20002846cc80 with size: 0.000244 MiB 00:08:09.772 element at address: 0x20002846cd80 with size: 0.000244 MiB 00:08:09.772 element at address: 0x20002846ce80 with size: 0.000244 MiB 00:08:09.772 element at address: 0x20002846cf80 with size: 0.000244 MiB 00:08:09.772 element at address: 0x20002846d080 with size: 0.000244 MiB 00:08:09.772 element at address: 0x20002846d180 with size: 0.000244 MiB 00:08:09.772 element at address: 0x20002846d280 with size: 0.000244 MiB 00:08:09.772 element at address: 0x20002846d380 with size: 0.000244 MiB 00:08:09.772 element at address: 0x20002846d480 with size: 0.000244 MiB 00:08:09.772 element at address: 0x20002846d580 with size: 0.000244 MiB 00:08:09.772 element at address: 0x20002846d680 with size: 0.000244 MiB 00:08:09.772 element at address: 0x20002846d780 with size: 0.000244 MiB 00:08:09.772 element at address: 0x20002846d880 with size: 0.000244 MiB 00:08:09.772 element at address: 0x20002846d980 with size: 0.000244 MiB 00:08:09.773 element at address: 0x20002846da80 with size: 0.000244 MiB 00:08:09.773 element at address: 0x20002846db80 with size: 0.000244 MiB 00:08:09.773 element at address: 0x20002846dc80 with size: 0.000244 MiB 00:08:09.773 element at address: 0x20002846dd80 with size: 0.000244 MiB 00:08:09.773 element at address: 0x20002846de80 with size: 0.000244 MiB 00:08:09.773 element at address: 0x20002846df80 with size: 0.000244 MiB 00:08:09.773 element at address: 0x20002846e080 with size: 0.000244 MiB 00:08:09.773 element at address: 0x20002846e180 with size: 0.000244 MiB 00:08:09.773 element at address: 0x20002846e280 with size: 0.000244 MiB 00:08:09.773 element at address: 0x20002846e380 with size: 0.000244 MiB 00:08:09.773 element at address: 0x20002846e480 
with size: 0.000244 MiB 00:08:09.773 element at address: 0x20002846e580 with size: 0.000244 MiB 00:08:09.773 element at address: 0x20002846e680 with size: 0.000244 MiB 00:08:09.773 element at address: 0x20002846e780 with size: 0.000244 MiB 00:08:09.773 element at address: 0x20002846e880 with size: 0.000244 MiB 00:08:09.773 element at address: 0x20002846e980 with size: 0.000244 MiB 00:08:09.773 element at address: 0x20002846ea80 with size: 0.000244 MiB 00:08:09.773 element at address: 0x20002846eb80 with size: 0.000244 MiB 00:08:09.773 element at address: 0x20002846ec80 with size: 0.000244 MiB 00:08:09.773 element at address: 0x20002846ed80 with size: 0.000244 MiB 00:08:09.773 element at address: 0x20002846ee80 with size: 0.000244 MiB 00:08:09.773 element at address: 0x20002846ef80 with size: 0.000244 MiB 00:08:09.773 element at address: 0x20002846f080 with size: 0.000244 MiB 00:08:09.773 element at address: 0x20002846f180 with size: 0.000244 MiB 00:08:09.773 element at address: 0x20002846f280 with size: 0.000244 MiB 00:08:09.773 element at address: 0x20002846f380 with size: 0.000244 MiB 00:08:09.773 element at address: 0x20002846f480 with size: 0.000244 MiB 00:08:09.773 element at address: 0x20002846f580 with size: 0.000244 MiB 00:08:09.773 element at address: 0x20002846f680 with size: 0.000244 MiB 00:08:09.773 element at address: 0x20002846f780 with size: 0.000244 MiB 00:08:09.773 element at address: 0x20002846f880 with size: 0.000244 MiB 00:08:09.773 element at address: 0x20002846f980 with size: 0.000244 MiB 00:08:09.773 element at address: 0x20002846fa80 with size: 0.000244 MiB 00:08:09.773 element at address: 0x20002846fb80 with size: 0.000244 MiB 00:08:09.773 element at address: 0x20002846fc80 with size: 0.000244 MiB 00:08:09.773 element at address: 0x20002846fd80 with size: 0.000244 MiB 00:08:09.773 element at address: 0x20002846fe80 with size: 0.000244 MiB 00:08:09.773 list of memzone associated elements. 
size: 602.264404 MiB 00:08:09.773 element at address: 0x20001b0954c0 with size: 211.416809 MiB 00:08:09.773 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:08:09.773 element at address: 0x20002846ff80 with size: 157.562622 MiB 00:08:09.773 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:08:09.773 element at address: 0x2000139fab40 with size: 84.020691 MiB 00:08:09.773 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_61480_0 00:08:09.773 element at address: 0x2000009ff340 with size: 48.003113 MiB 00:08:09.773 associated memzone info: size: 48.002930 MiB name: MP_evtpool_61480_0 00:08:09.773 element at address: 0x200003fff340 with size: 48.003113 MiB 00:08:09.773 associated memzone info: size: 48.002930 MiB name: MP_msgpool_61480_0 00:08:09.773 element at address: 0x200019bbe900 with size: 20.255615 MiB 00:08:09.773 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:08:09.773 element at address: 0x2000323feb00 with size: 18.005127 MiB 00:08:09.773 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:08:09.773 element at address: 0x2000005ffdc0 with size: 2.000549 MiB 00:08:09.773 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_61480 00:08:09.773 element at address: 0x200003bffdc0 with size: 2.000549 MiB 00:08:09.773 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_61480 00:08:09.773 element at address: 0x2000002d7c00 with size: 1.008179 MiB 00:08:09.773 associated memzone info: size: 1.007996 MiB name: MP_evtpool_61480 00:08:09.773 element at address: 0x2000192fde00 with size: 1.008179 MiB 00:08:09.773 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:08:09.773 element at address: 0x200019abc780 with size: 1.008179 MiB 00:08:09.773 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:08:09.773 element at address: 0x200018efde00 with size: 1.008179 MiB 00:08:09.773 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:08:09.773 element at address: 0x2000138f89c0 with size: 1.008179 MiB 00:08:09.773 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:08:09.773 element at address: 0x200003eff100 with size: 1.000549 MiB 00:08:09.773 associated memzone info: size: 1.000366 MiB name: RG_ring_0_61480 00:08:09.773 element at address: 0x200003affb80 with size: 1.000549 MiB 00:08:09.773 associated memzone info: size: 1.000366 MiB name: RG_ring_1_61480 00:08:09.773 element at address: 0x2000196ffd40 with size: 1.000549 MiB 00:08:09.773 associated memzone info: size: 1.000366 MiB name: RG_ring_4_61480 00:08:09.773 element at address: 0x2000322fe8c0 with size: 1.000549 MiB 00:08:09.773 associated memzone info: size: 1.000366 MiB name: RG_ring_5_61480 00:08:09.773 element at address: 0x200003a5b2c0 with size: 0.500549 MiB 00:08:09.773 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_61480 00:08:09.773 element at address: 0x20001927dac0 with size: 0.500549 MiB 00:08:09.773 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:08:09.773 element at address: 0x200013878680 with size: 0.500549 MiB 00:08:09.773 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:08:09.773 element at address: 0x200019a7c440 with size: 0.250549 MiB 00:08:09.773 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:08:09.773 element at address: 0x200003adf740 with size: 0.125549 MiB 00:08:09.773 associated memzone info: size: 
0.125366 MiB name: RG_ring_2_61480 00:08:09.773 element at address: 0x200018ef5ac0 with size: 0.031799 MiB 00:08:09.773 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:08:09.773 element at address: 0x200028464140 with size: 0.023804 MiB 00:08:09.773 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:08:09.773 element at address: 0x200003adb500 with size: 0.016174 MiB 00:08:09.773 associated memzone info: size: 0.015991 MiB name: RG_ring_3_61480 00:08:09.773 element at address: 0x20002846a2c0 with size: 0.002502 MiB 00:08:09.773 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:08:09.773 element at address: 0x2000002d5f80 with size: 0.000366 MiB 00:08:09.773 associated memzone info: size: 0.000183 MiB name: MP_msgpool_61480 00:08:09.773 element at address: 0x2000137ffd80 with size: 0.000366 MiB 00:08:09.773 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_61480 00:08:09.773 element at address: 0x20002846ae00 with size: 0.000366 MiB 00:08:09.773 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:08:09.773 21:30:30 -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:08:09.773 21:30:30 -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 61480 00:08:09.773 21:30:30 -- common/autotest_common.sh@936 -- # '[' -z 61480 ']' 00:08:09.773 21:30:30 -- common/autotest_common.sh@940 -- # kill -0 61480 00:08:09.773 21:30:30 -- common/autotest_common.sh@941 -- # uname 00:08:09.773 21:30:30 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:08:09.773 21:30:30 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 61480 00:08:09.773 21:30:30 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:08:09.773 21:30:30 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:08:09.773 killing process with pid 61480 00:08:09.773 21:30:30 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 61480' 00:08:09.774 21:30:30 -- common/autotest_common.sh@955 -- # kill 61480 00:08:09.774 21:30:30 -- common/autotest_common.sh@960 -- # wait 61480 00:08:11.684 00:08:11.684 real 0m3.955s 00:08:11.684 user 0m4.179s 00:08:11.684 sys 0m0.557s 00:08:11.684 21:30:32 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:11.684 ************************************ 00:08:11.684 END TEST dpdk_mem_utility 00:08:11.684 ************************************ 00:08:11.684 21:30:32 -- common/autotest_common.sh@10 -- # set +x 00:08:11.684 21:30:32 -- spdk/autotest.sh@174 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:08:11.684 21:30:32 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:11.684 21:30:32 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:11.684 21:30:32 -- common/autotest_common.sh@10 -- # set +x 00:08:11.684 ************************************ 00:08:11.684 START TEST event 00:08:11.684 ************************************ 00:08:11.684 21:30:32 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:08:11.684 * Looking for test storage... 
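[editor's note] The dpdk_mem_utility suite that just finished reduces to three moves: start spdk_tgt, ask it over the RPC socket to dump its DPDK memory state (the reply names the dump file, `{"filename": "/tmp/spdk_mem_dump.txt"}` above), then let scripts/dpdk_mem_info.py render that dump as the heap/mempool/memzone tables printed above. A condensed sketch of that flow, using the paths from the log and omitting error handling:

```bash
#!/usr/bin/env bash
# Condensed version of test/dpdk_memory_utility/test_dpdk_mem_info.sh,
# assuming SPDK_DIR points at a built SPDK tree.
SPDK_DIR=/home/vagrant/spdk_repo/spdk

"$SPDK_DIR/build/bin/spdk_tgt" &        # start the SPDK target in the background
spdkpid=$!
# ... poll for /var/tmp/spdk.sock here (see the waitforlisten sketch below) ...

# Ask the target to write its DPDK allocation snapshot to disk.
"$SPDK_DIR/scripts/rpc.py" env_dpdk_get_mem_stats

"$SPDK_DIR/scripts/dpdk_mem_info.py"        # summary: heaps, mempools, memzones
"$SPDK_DIR/scripts/dpdk_mem_info.py" -m 0   # per-element detail for heap id 0, as above

kill "$spdkpid"; wait "$spdkpid"
```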
00:08:11.684 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:08:11.684 21:30:32 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:08:11.684 21:30:32 -- common/autotest_common.sh@1690 -- # lcov --version 00:08:11.684 21:30:32 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:08:11.944 21:30:32 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:08:11.944 21:30:32 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:08:11.944 21:30:32 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:08:11.944 21:30:32 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:08:11.944 21:30:32 -- scripts/common.sh@335 -- # IFS=.-: 00:08:11.944 21:30:32 -- scripts/common.sh@335 -- # read -ra ver1 00:08:11.944 21:30:32 -- scripts/common.sh@336 -- # IFS=.-: 00:08:11.944 21:30:32 -- scripts/common.sh@336 -- # read -ra ver2 00:08:11.944 21:30:32 -- scripts/common.sh@337 -- # local 'op=<' 00:08:11.944 21:30:32 -- scripts/common.sh@339 -- # ver1_l=2 00:08:11.944 21:30:32 -- scripts/common.sh@340 -- # ver2_l=1 00:08:11.944 21:30:32 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:08:11.944 21:30:32 -- scripts/common.sh@343 -- # case "$op" in 00:08:11.944 21:30:32 -- scripts/common.sh@344 -- # : 1 00:08:11.944 21:30:32 -- scripts/common.sh@363 -- # (( v = 0 )) 00:08:11.944 21:30:32 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:11.944 21:30:32 -- scripts/common.sh@364 -- # decimal 1 00:08:11.944 21:30:32 -- scripts/common.sh@352 -- # local d=1 00:08:11.944 21:30:32 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:11.944 21:30:32 -- scripts/common.sh@354 -- # echo 1 00:08:11.944 21:30:32 -- scripts/common.sh@364 -- # ver1[v]=1 00:08:11.944 21:30:32 -- scripts/common.sh@365 -- # decimal 2 00:08:11.944 21:30:32 -- scripts/common.sh@352 -- # local d=2 00:08:11.944 21:30:32 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:11.944 21:30:32 -- scripts/common.sh@354 -- # echo 2 00:08:11.944 21:30:32 -- scripts/common.sh@365 -- # ver2[v]=2 00:08:11.944 21:30:32 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:08:11.944 21:30:32 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:08:11.944 21:30:32 -- scripts/common.sh@367 -- # return 0 00:08:11.944 21:30:32 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:11.944 21:30:32 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:08:11.944 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:11.944 --rc genhtml_branch_coverage=1 00:08:11.944 --rc genhtml_function_coverage=1 00:08:11.944 --rc genhtml_legend=1 00:08:11.944 --rc geninfo_all_blocks=1 00:08:11.944 --rc geninfo_unexecuted_blocks=1 00:08:11.944 00:08:11.944 ' 00:08:11.944 21:30:32 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:08:11.944 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:11.944 --rc genhtml_branch_coverage=1 00:08:11.944 --rc genhtml_function_coverage=1 00:08:11.944 --rc genhtml_legend=1 00:08:11.944 --rc geninfo_all_blocks=1 00:08:11.944 --rc geninfo_unexecuted_blocks=1 00:08:11.944 00:08:11.944 ' 00:08:11.944 21:30:32 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:08:11.944 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:11.944 --rc genhtml_branch_coverage=1 00:08:11.944 --rc genhtml_function_coverage=1 00:08:11.944 --rc genhtml_legend=1 00:08:11.944 --rc geninfo_all_blocks=1 00:08:11.944 --rc geninfo_unexecuted_blocks=1 00:08:11.944 00:08:11.944 ' 00:08:11.944 21:30:32 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:08:11.944 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:11.944 --rc genhtml_branch_coverage=1 00:08:11.944 --rc genhtml_function_coverage=1 00:08:11.944 --rc genhtml_legend=1 00:08:11.944 --rc geninfo_all_blocks=1 00:08:11.944 --rc geninfo_unexecuted_blocks=1 00:08:11.944 00:08:11.944 ' 00:08:11.944 21:30:32 -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:08:11.944 21:30:32 -- bdev/nbd_common.sh@6 -- # set -e 00:08:11.944 21:30:32 -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:08:11.944 21:30:32 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:08:11.944 21:30:32 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:11.944 21:30:32 -- common/autotest_common.sh@10 -- # set +x 00:08:11.944 ************************************ 00:08:11.944 START TEST event_perf 00:08:11.944 ************************************ 00:08:11.944 21:30:32 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:08:11.944 Running I/O for 1 seconds...[2024-12-06 21:30:32.309402] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:08:11.944 [2024-12-06 21:30:32.309588] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61589 ] 00:08:12.203 [2024-12-06 21:30:32.479549] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:12.203 [2024-12-06 21:30:32.648570] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:12.203 [2024-12-06 21:30:32.648631] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:12.203 [2024-12-06 21:30:32.648757] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:12.203 [2024-12-06 21:30:32.648773] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:13.579 Running I/O for 1 seconds... 00:08:13.579 lcore 0: 203998 00:08:13.579 lcore 1: 203996 00:08:13.579 lcore 2: 203998 00:08:13.579 lcore 3: 203999 00:08:13.579 done. 00:08:13.579 00:08:13.579 real 0m1.746s 00:08:13.579 user 0m4.534s 00:08:13.579 sys 0m0.112s 00:08:13.579 21:30:34 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:13.579 ************************************ 00:08:13.579 21:30:34 -- common/autotest_common.sh@10 -- # set +x 00:08:13.579 END TEST event_perf 00:08:13.579 ************************************ 00:08:13.579 21:30:34 -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:08:13.579 21:30:34 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:08:13.579 21:30:34 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:13.579 21:30:34 -- common/autotest_common.sh@10 -- # set +x 00:08:13.579 ************************************ 00:08:13.579 START TEST event_reactor 00:08:13.579 ************************************ 00:08:13.579 21:30:34 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:08:13.838 [2024-12-06 21:30:34.103233] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
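[editor's note] The event_perf run above is a throughput smoke test: `-m 0xF` spreads one reactor across each of four lcores and `-t 1` runs the event loop for one second, after which each lcore reports how many events it processed (about 204k apiece here). Reproducing the run by hand looks like this (a sketch; adjust the path to your checkout):

```bash
# Same invocation the harness used, against a built SPDK tree:
/home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1

# -m 0xF  core mask: binary 1111 -> lcores 0-3, one reactor per core
# -t 1    run time in seconds before the per-lcore counters are printed
```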
00:08:13.838 [2024-12-06 21:30:34.103460] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61623 ] 00:08:13.838 [2024-12-06 21:30:34.273027] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:14.098 [2024-12-06 21:30:34.439870] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:15.476 test_start 00:08:15.476 oneshot 00:08:15.476 tick 100 00:08:15.476 tick 100 00:08:15.476 tick 250 00:08:15.476 tick 100 00:08:15.476 tick 100 00:08:15.476 tick 250 00:08:15.476 tick 100 00:08:15.476 tick 500 00:08:15.476 tick 100 00:08:15.476 tick 100 00:08:15.476 tick 250 00:08:15.476 tick 100 00:08:15.476 tick 100 00:08:15.476 test_end 00:08:15.476 00:08:15.476 real 0m1.824s 00:08:15.476 user 0m1.603s 00:08:15.476 sys 0m0.119s 00:08:15.476 21:30:35 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:15.476 21:30:35 -- common/autotest_common.sh@10 -- # set +x 00:08:15.476 ************************************ 00:08:15.476 END TEST event_reactor 00:08:15.476 ************************************ 00:08:15.476 21:30:35 -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:08:15.476 21:30:35 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:08:15.476 21:30:35 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:15.476 21:30:35 -- common/autotest_common.sh@10 -- # set +x 00:08:15.476 ************************************ 00:08:15.476 START TEST event_reactor_perf 00:08:15.476 ************************************ 00:08:15.476 21:30:35 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:08:15.734 [2024-12-06 21:30:35.983155] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
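[editor's note] Every suite in this log runs through the same run_test wrapper from common/autotest_common.sh, which prints the starred START/END banners and the real/user/sys timing lines, and propagates the suite's exit code (plus an argument-count sanity check visible in the traces). A minimal re-creation of that wrapper (an approximation, not the verbatim helper):

```bash
# Approximation of run_test: banner, timed execution, banner, exit code.
run_test() {
    local test_name=$1
    shift
    echo "************************************"
    echo "START TEST $test_name"
    echo "************************************"
    time "$@"
    local rc=$?
    echo "************************************"
    echo "END TEST $test_name"
    echo "************************************"
    return $rc
}

run_test event_reactor_perf "$SPDK_DIR/test/event/reactor_perf/reactor_perf" -t 1
```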
00:08:15.734 [2024-12-06 21:30:35.983313] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61665 ] 00:08:15.734 [2024-12-06 21:30:36.161030] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:15.993 [2024-12-06 21:30:36.375096] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:17.370 test_start 00:08:17.370 test_end 00:08:17.370 Performance: 252149 events per second 00:08:17.370 00:08:17.370 real 0m1.872s 00:08:17.370 user 0m1.665s 00:08:17.370 sys 0m0.105s 00:08:17.370 21:30:37 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:17.370 21:30:37 -- common/autotest_common.sh@10 -- # set +x 00:08:17.370 ************************************ 00:08:17.370 END TEST event_reactor_perf 00:08:17.370 ************************************ 00:08:17.370 21:30:37 -- event/event.sh@49 -- # uname -s 00:08:17.370 21:30:37 -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:08:17.370 21:30:37 -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:08:17.370 21:30:37 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:17.371 21:30:37 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:17.371 21:30:37 -- common/autotest_common.sh@10 -- # set +x 00:08:17.629 ************************************ 00:08:17.629 START TEST event_scheduler 00:08:17.629 ************************************ 00:08:17.629 21:30:37 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:08:17.629 * Looking for test storage... 00:08:17.629 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:08:17.629 21:30:37 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:08:17.629 21:30:37 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:08:17.629 21:30:37 -- common/autotest_common.sh@1690 -- # lcov --version 00:08:17.629 21:30:38 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:08:17.629 21:30:38 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:08:17.629 21:30:38 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:08:17.629 21:30:38 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:08:17.629 21:30:38 -- scripts/common.sh@335 -- # IFS=.-: 00:08:17.629 21:30:38 -- scripts/common.sh@335 -- # read -ra ver1 00:08:17.629 21:30:38 -- scripts/common.sh@336 -- # IFS=.-: 00:08:17.629 21:30:38 -- scripts/common.sh@336 -- # read -ra ver2 00:08:17.629 21:30:38 -- scripts/common.sh@337 -- # local 'op=<' 00:08:17.629 21:30:38 -- scripts/common.sh@339 -- # ver1_l=2 00:08:17.629 21:30:38 -- scripts/common.sh@340 -- # ver2_l=1 00:08:17.629 21:30:38 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:08:17.629 21:30:38 -- scripts/common.sh@343 -- # case "$op" in 00:08:17.629 21:30:38 -- scripts/common.sh@344 -- # : 1 00:08:17.629 21:30:38 -- scripts/common.sh@363 -- # (( v = 0 )) 00:08:17.629 21:30:38 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:17.629 21:30:38 -- scripts/common.sh@364 -- # decimal 1 00:08:17.629 21:30:38 -- scripts/common.sh@352 -- # local d=1 00:08:17.629 21:30:38 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:17.629 21:30:38 -- scripts/common.sh@354 -- # echo 1 00:08:17.629 21:30:38 -- scripts/common.sh@364 -- # ver1[v]=1 00:08:17.629 21:30:38 -- scripts/common.sh@365 -- # decimal 2 00:08:17.629 21:30:38 -- scripts/common.sh@352 -- # local d=2 00:08:17.629 21:30:38 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:17.629 21:30:38 -- scripts/common.sh@354 -- # echo 2 00:08:17.629 21:30:38 -- scripts/common.sh@365 -- # ver2[v]=2 00:08:17.629 21:30:38 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:08:17.629 21:30:38 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:08:17.629 21:30:38 -- scripts/common.sh@367 -- # return 0 00:08:17.629 21:30:38 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:17.629 21:30:38 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:08:17.629 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:17.629 --rc genhtml_branch_coverage=1 00:08:17.629 --rc genhtml_function_coverage=1 00:08:17.629 --rc genhtml_legend=1 00:08:17.629 --rc geninfo_all_blocks=1 00:08:17.629 --rc geninfo_unexecuted_blocks=1 00:08:17.629 00:08:17.629 ' 00:08:17.629 21:30:38 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:08:17.629 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:17.629 --rc genhtml_branch_coverage=1 00:08:17.629 --rc genhtml_function_coverage=1 00:08:17.629 --rc genhtml_legend=1 00:08:17.629 --rc geninfo_all_blocks=1 00:08:17.629 --rc geninfo_unexecuted_blocks=1 00:08:17.629 00:08:17.629 ' 00:08:17.629 21:30:38 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:08:17.629 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:17.629 --rc genhtml_branch_coverage=1 00:08:17.629 --rc genhtml_function_coverage=1 00:08:17.629 --rc genhtml_legend=1 00:08:17.629 --rc geninfo_all_blocks=1 00:08:17.629 --rc geninfo_unexecuted_blocks=1 00:08:17.629 00:08:17.629 ' 00:08:17.629 21:30:38 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:08:17.629 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:17.629 --rc genhtml_branch_coverage=1 00:08:17.629 --rc genhtml_function_coverage=1 00:08:17.629 --rc genhtml_legend=1 00:08:17.629 --rc geninfo_all_blocks=1 00:08:17.629 --rc geninfo_unexecuted_blocks=1 00:08:17.629 00:08:17.629 ' 00:08:17.629 21:30:38 -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:08:17.629 21:30:38 -- scheduler/scheduler.sh@35 -- # scheduler_pid=61740 00:08:17.629 21:30:38 -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:08:17.629 21:30:38 -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:08:17.629 21:30:38 -- scheduler/scheduler.sh@37 -- # waitforlisten 61740 00:08:17.629 21:30:38 -- common/autotest_common.sh@829 -- # '[' -z 61740 ']' 00:08:17.629 21:30:38 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:17.629 21:30:38 -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:17.629 21:30:38 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:17.629 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
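[editor's note] Like spdk_tgt earlier, the scheduler app here is gated on waitforlisten, which the trace shows polling with `rpc_addr=/var/tmp/spdk.sock` and `max_retries=100` until the target's RPC endpoint is usable, while also checking that the process is still alive. A minimal stand-in for that helper, assuming the socket path and retry count shown in the log (the real helper also probes the socket via rpc.py rather than only checking for its existence):

```bash
# Minimal stand-in for common/autotest_common.sh's waitforlisten.
waitforlisten() {
    local pid=$1
    local rpc_addr=${2:-/var/tmp/spdk.sock}   # default seen in the log
    local max_retries=100 i

    for ((i = max_retries; i != 0; i--)); do
        kill -0 "$pid" 2>/dev/null || return 1   # target died while starting
        [[ -S "$rpc_addr" ]] && return 0         # UNIX domain socket is up
        sleep 0.1
    done
    echo "timed out waiting for $rpc_addr" >&2
    return 1
}
```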
00:08:17.629 21:30:38 -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:17.629 21:30:38 -- common/autotest_common.sh@10 -- # set +x 00:08:17.888 [2024-12-06 21:30:38.151214] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:08:17.888 [2024-12-06 21:30:38.151800] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61740 ] 00:08:17.888 [2024-12-06 21:30:38.331794] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:18.146 [2024-12-06 21:30:38.538461] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:18.146 [2024-12-06 21:30:38.538583] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:18.146 [2024-12-06 21:30:38.538658] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:18.146 [2024-12-06 21:30:38.538682] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:18.714 21:30:39 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:18.714 21:30:39 -- common/autotest_common.sh@862 -- # return 0 00:08:18.714 21:30:39 -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:08:18.714 21:30:39 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:18.714 21:30:39 -- common/autotest_common.sh@10 -- # set +x 00:08:18.714 POWER: Env isn't set yet! 00:08:18.714 POWER: Attempting to initialise ACPI cpufreq power management... 00:08:18.714 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:08:18.714 POWER: Cannot set governor of lcore 0 to userspace 00:08:18.714 POWER: Attempting to initialise PSTAT power management... 00:08:18.714 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:08:18.714 POWER: Cannot set governor of lcore 0 to performance 00:08:18.714 POWER: Attempting to initialise AMD PSTATE power management... 00:08:18.714 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:08:18.714 POWER: Cannot set governor of lcore 0 to userspace 00:08:18.714 POWER: Attempting to initialise CPPC power management... 00:08:18.714 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:08:18.714 POWER: Cannot set governor of lcore 0 to userspace 00:08:18.714 POWER: Attempting to initialise VM power management... 
00:08:18.714 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:08:18.714 POWER: Unable to set Power Management Environment for lcore 0 00:08:18.714 [2024-12-06 21:30:39.092598] dpdk_governor.c: 88:_init_core: *ERROR*: Failed to initialize on core0 00:08:18.714 [2024-12-06 21:30:39.092621] dpdk_governor.c: 118:_init: *ERROR*: Failed to initialize on core0 00:08:18.714 [2024-12-06 21:30:39.092635] scheduler_dynamic.c: 238:init: *NOTICE*: Unable to initialize dpdk governor 00:08:18.714 [2024-12-06 21:30:39.092659] scheduler_dynamic.c: 387:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:08:18.714 [2024-12-06 21:30:39.092675] scheduler_dynamic.c: 389:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:08:18.714 [2024-12-06 21:30:39.092686] scheduler_dynamic.c: 391:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:08:18.714 21:30:39 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:18.714 21:30:39 -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:08:18.714 21:30:39 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:18.714 21:30:39 -- common/autotest_common.sh@10 -- # set +x 00:08:18.973 [2024-12-06 21:30:39.402881] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:08:18.973 21:30:39 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:18.973 21:30:39 -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:08:18.973 21:30:39 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:18.973 21:30:39 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:18.973 21:30:39 -- common/autotest_common.sh@10 -- # set +x 00:08:18.973 ************************************ 00:08:18.973 START TEST scheduler_create_thread 00:08:18.973 ************************************ 00:08:18.973 21:30:39 -- common/autotest_common.sh@1114 -- # scheduler_create_thread 00:08:18.973 21:30:39 -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:08:18.973 21:30:39 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:18.973 21:30:39 -- common/autotest_common.sh@10 -- # set +x 00:08:18.973 2 00:08:18.973 21:30:39 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:18.973 21:30:39 -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:08:18.973 21:30:39 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:18.973 21:30:39 -- common/autotest_common.sh@10 -- # set +x 00:08:18.973 3 00:08:18.973 21:30:39 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:18.973 21:30:39 -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:08:18.973 21:30:39 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:18.973 21:30:39 -- common/autotest_common.sh@10 -- # set +x 00:08:18.973 4 00:08:18.973 21:30:39 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:18.973 21:30:39 -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:08:18.973 21:30:39 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:18.973 21:30:39 -- common/autotest_common.sh@10 -- # set +x 00:08:18.973 5 00:08:18.973 21:30:39 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:18.973 21:30:39 -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:08:18.973 21:30:39 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:18.973 21:30:39 -- common/autotest_common.sh@10 -- # set +x 00:08:18.973 6 00:08:18.973 21:30:39 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:18.973 21:30:39 -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:08:18.973 21:30:39 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:18.973 21:30:39 -- common/autotest_common.sh@10 -- # set +x 00:08:18.973 7 00:08:18.973 21:30:39 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:18.973 21:30:39 -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:08:18.973 21:30:39 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:18.973 21:30:39 -- common/autotest_common.sh@10 -- # set +x 00:08:19.232 8 00:08:19.232 21:30:39 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:19.232 21:30:39 -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:08:19.232 21:30:39 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:19.232 21:30:39 -- common/autotest_common.sh@10 -- # set +x 00:08:19.232 9 00:08:19.232 21:30:39 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:19.232 21:30:39 -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:08:19.232 21:30:39 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:19.232 21:30:39 -- common/autotest_common.sh@10 -- # set +x 00:08:19.232 10 00:08:19.232 21:30:39 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:19.232 21:30:39 -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:08:19.232 21:30:39 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:19.232 21:30:39 -- common/autotest_common.sh@10 -- # set +x 00:08:19.232 21:30:39 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:19.232 21:30:39 -- scheduler/scheduler.sh@22 -- # thread_id=11 00:08:19.232 21:30:39 -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:08:19.232 21:30:39 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:19.232 21:30:39 -- common/autotest_common.sh@10 -- # set +x 00:08:19.232 21:30:39 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:19.233 21:30:39 -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:08:19.233 21:30:39 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:19.233 21:30:39 -- common/autotest_common.sh@10 -- # set +x 00:08:20.170 21:30:40 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:20.170 21:30:40 -- scheduler/scheduler.sh@25 -- # thread_id=12 00:08:20.170 21:30:40 -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:08:20.170 21:30:40 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:20.170 21:30:40 -- common/autotest_common.sh@10 -- # set +x 00:08:21.107 ************************************ 00:08:21.107 END TEST scheduler_create_thread 00:08:21.107 ************************************ 00:08:21.107 21:30:41 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:21.107 00:08:21.107 real 0m2.138s 00:08:21.107 user 0m0.021s 00:08:21.107 sys 0m0.004s 00:08:21.107 21:30:41 -- 
common/autotest_common.sh@1115 -- # xtrace_disable 00:08:21.107 21:30:41 -- common/autotest_common.sh@10 -- # set +x 00:08:21.107 21:30:41 -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:08:21.107 21:30:41 -- scheduler/scheduler.sh@46 -- # killprocess 61740 00:08:21.107 21:30:41 -- common/autotest_common.sh@936 -- # '[' -z 61740 ']' 00:08:21.107 21:30:41 -- common/autotest_common.sh@940 -- # kill -0 61740 00:08:21.107 21:30:41 -- common/autotest_common.sh@941 -- # uname 00:08:21.107 21:30:41 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:08:21.107 21:30:41 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 61740 00:08:21.366 killing process with pid 61740 00:08:21.366 21:30:41 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:08:21.366 21:30:41 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:08:21.366 21:30:41 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 61740' 00:08:21.366 21:30:41 -- common/autotest_common.sh@955 -- # kill 61740 00:08:21.366 21:30:41 -- common/autotest_common.sh@960 -- # wait 61740 00:08:21.625 [2024-12-06 21:30:42.032532] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 00:08:23.003 00:08:23.003 real 0m5.428s 00:08:23.003 user 0m8.905s 00:08:23.003 sys 0m0.524s 00:08:23.003 21:30:43 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:23.003 21:30:43 -- common/autotest_common.sh@10 -- # set +x 00:08:23.003 ************************************ 00:08:23.003 END TEST event_scheduler 00:08:23.003 ************************************ 00:08:23.003 21:30:43 -- event/event.sh@51 -- # modprobe -n nbd 00:08:23.003 21:30:43 -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:08:23.003 21:30:43 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:23.003 21:30:43 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:23.003 21:30:43 -- common/autotest_common.sh@10 -- # set +x 00:08:23.003 ************************************ 00:08:23.003 START TEST app_repeat 00:08:23.003 ************************************ 00:08:23.003 21:30:43 -- common/autotest_common.sh@1114 -- # app_repeat_test 00:08:23.003 21:30:43 -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:23.003 21:30:43 -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:23.003 21:30:43 -- event/event.sh@13 -- # local nbd_list 00:08:23.003 21:30:43 -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:08:23.003 21:30:43 -- event/event.sh@14 -- # local bdev_list 00:08:23.003 21:30:43 -- event/event.sh@15 -- # local repeat_times=4 00:08:23.003 21:30:43 -- event/event.sh@17 -- # modprobe nbd 00:08:23.003 21:30:43 -- event/event.sh@19 -- # repeat_pid=61846 00:08:23.003 21:30:43 -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:08:23.003 21:30:43 -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:08:23.003 Process app_repeat pid: 61846 00:08:23.003 21:30:43 -- event/event.sh@21 -- # echo 'Process app_repeat pid: 61846' 00:08:23.003 21:30:43 -- event/event.sh@23 -- # for i in {0..2} 00:08:23.003 spdk_app_start Round 0 00:08:23.003 21:30:43 -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:08:23.003 21:30:43 -- event/event.sh@25 -- # waitforlisten 61846 /var/tmp/spdk-nbd.sock 00:08:23.003 21:30:43 -- common/autotest_common.sh@829 -- # '[' -z 61846 ']' 00:08:23.003 Waiting for process to start up and listen on UNIX 
domain socket /var/tmp/spdk-nbd.sock... 00:08:23.003 21:30:43 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:08:23.003 21:30:43 -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:23.003 21:30:43 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:08:23.003 21:30:43 -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:23.003 21:30:43 -- common/autotest_common.sh@10 -- # set +x 00:08:23.003 [2024-12-06 21:30:43.413870] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:08:23.003 [2024-12-06 21:30:43.414022] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61846 ] 00:08:23.263 [2024-12-06 21:30:43.592703] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:08:23.523 [2024-12-06 21:30:43.825383] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:23.523 [2024-12-06 21:30:43.825399] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:24.088 21:30:44 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:24.088 21:30:44 -- common/autotest_common.sh@862 -- # return 0 00:08:24.088 21:30:44 -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:08:24.347 Malloc0 00:08:24.347 21:30:44 -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:08:24.606 Malloc1 00:08:24.606 21:30:44 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:08:24.606 21:30:44 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:24.606 21:30:44 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:08:24.606 21:30:44 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:08:24.606 21:30:44 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:24.606 21:30:44 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:08:24.606 21:30:44 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:08:24.606 21:30:44 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:24.606 21:30:44 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:08:24.606 21:30:44 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:08:24.606 21:30:44 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:24.606 21:30:44 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:08:24.606 21:30:44 -- bdev/nbd_common.sh@12 -- # local i 00:08:24.606 21:30:44 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:08:24.606 21:30:44 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:08:24.606 21:30:44 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:08:24.864 /dev/nbd0 00:08:24.864 21:30:45 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:08:24.864 21:30:45 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:08:24.864 21:30:45 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:08:24.864 21:30:45 -- common/autotest_common.sh@867 -- # local i 00:08:24.864 21:30:45 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:08:24.864 21:30:45 -- 
common/autotest_common.sh@869 -- # (( i <= 20 )) 00:08:24.864 21:30:45 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:08:24.864 21:30:45 -- common/autotest_common.sh@871 -- # break 00:08:24.864 21:30:45 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:08:24.864 21:30:45 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:08:24.864 21:30:45 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:08:24.864 1+0 records in 00:08:24.864 1+0 records out 00:08:24.864 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000290064 s, 14.1 MB/s 00:08:24.864 21:30:45 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:08:24.865 21:30:45 -- common/autotest_common.sh@884 -- # size=4096 00:08:24.865 21:30:45 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:08:24.865 21:30:45 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:08:24.865 21:30:45 -- common/autotest_common.sh@887 -- # return 0 00:08:24.865 21:30:45 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:24.865 21:30:45 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:08:24.865 21:30:45 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:08:25.123 /dev/nbd1 00:08:25.123 21:30:45 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:08:25.123 21:30:45 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:08:25.123 21:30:45 -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:08:25.123 21:30:45 -- common/autotest_common.sh@867 -- # local i 00:08:25.123 21:30:45 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:08:25.123 21:30:45 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:08:25.123 21:30:45 -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:08:25.123 21:30:45 -- common/autotest_common.sh@871 -- # break 00:08:25.123 21:30:45 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:08:25.123 21:30:45 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:08:25.123 21:30:45 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:08:25.123 1+0 records in 00:08:25.123 1+0 records out 00:08:25.123 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000303685 s, 13.5 MB/s 00:08:25.123 21:30:45 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:08:25.123 21:30:45 -- common/autotest_common.sh@884 -- # size=4096 00:08:25.123 21:30:45 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:08:25.123 21:30:45 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:08:25.123 21:30:45 -- common/autotest_common.sh@887 -- # return 0 00:08:25.123 21:30:45 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:25.123 21:30:45 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:08:25.123 21:30:45 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:25.123 21:30:45 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:25.123 21:30:45 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:25.381 21:30:45 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:08:25.381 { 00:08:25.381 "nbd_device": "/dev/nbd0", 00:08:25.381 "bdev_name": "Malloc0" 00:08:25.381 }, 00:08:25.381 { 00:08:25.381 "nbd_device": "/dev/nbd1", 
00:08:25.381 "bdev_name": "Malloc1" 00:08:25.381 } 00:08:25.381 ]' 00:08:25.381 21:30:45 -- bdev/nbd_common.sh@64 -- # echo '[ 00:08:25.381 { 00:08:25.381 "nbd_device": "/dev/nbd0", 00:08:25.381 "bdev_name": "Malloc0" 00:08:25.381 }, 00:08:25.381 { 00:08:25.381 "nbd_device": "/dev/nbd1", 00:08:25.381 "bdev_name": "Malloc1" 00:08:25.381 } 00:08:25.381 ]' 00:08:25.381 21:30:45 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:25.381 21:30:45 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:08:25.381 /dev/nbd1' 00:08:25.381 21:30:45 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:08:25.381 /dev/nbd1' 00:08:25.381 21:30:45 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:25.381 21:30:45 -- bdev/nbd_common.sh@65 -- # count=2 00:08:25.381 21:30:45 -- bdev/nbd_common.sh@66 -- # echo 2 00:08:25.381 21:30:45 -- bdev/nbd_common.sh@95 -- # count=2 00:08:25.381 21:30:45 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:08:25.381 21:30:45 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:08:25.381 21:30:45 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:25.381 21:30:45 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:08:25.381 21:30:45 -- bdev/nbd_common.sh@71 -- # local operation=write 00:08:25.381 21:30:45 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:08:25.381 21:30:45 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:08:25.381 21:30:45 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:08:25.381 256+0 records in 00:08:25.381 256+0 records out 00:08:25.381 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0102535 s, 102 MB/s 00:08:25.381 21:30:45 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:25.381 21:30:45 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:08:25.381 256+0 records in 00:08:25.381 256+0 records out 00:08:25.381 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0273026 s, 38.4 MB/s 00:08:25.381 21:30:45 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:25.381 21:30:45 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:08:25.381 256+0 records in 00:08:25.381 256+0 records out 00:08:25.381 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0316903 s, 33.1 MB/s 00:08:25.381 21:30:45 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:08:25.381 21:30:45 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:25.381 21:30:45 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:08:25.381 21:30:45 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:08:25.381 21:30:45 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:08:25.381 21:30:45 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:08:25.381 21:30:45 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:08:25.381 21:30:45 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:25.381 21:30:45 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:08:25.381 21:30:45 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:25.381 21:30:45 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:08:25.381 21:30:45 -- bdev/nbd_common.sh@85 -- # rm 
/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:08:25.381 21:30:45 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:08:25.381 21:30:45 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:25.381 21:30:45 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:25.381 21:30:45 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:08:25.381 21:30:45 -- bdev/nbd_common.sh@51 -- # local i 00:08:25.381 21:30:45 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:25.381 21:30:45 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:08:25.639 21:30:46 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:08:25.639 21:30:46 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:08:25.639 21:30:46 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:08:25.639 21:30:46 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:25.639 21:30:46 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:25.639 21:30:46 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:08:25.639 21:30:46 -- bdev/nbd_common.sh@41 -- # break 00:08:25.639 21:30:46 -- bdev/nbd_common.sh@45 -- # return 0 00:08:25.639 21:30:46 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:25.639 21:30:46 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:08:25.897 21:30:46 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:08:25.897 21:30:46 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:08:25.897 21:30:46 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:08:25.897 21:30:46 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:25.897 21:30:46 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:25.897 21:30:46 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:08:25.897 21:30:46 -- bdev/nbd_common.sh@41 -- # break 00:08:25.897 21:30:46 -- bdev/nbd_common.sh@45 -- # return 0 00:08:25.897 21:30:46 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:25.897 21:30:46 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:25.897 21:30:46 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:26.208 21:30:46 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:08:26.208 21:30:46 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:08:26.208 21:30:46 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:26.208 21:30:46 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:08:26.208 21:30:46 -- bdev/nbd_common.sh@65 -- # echo '' 00:08:26.208 21:30:46 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:26.208 21:30:46 -- bdev/nbd_common.sh@65 -- # true 00:08:26.208 21:30:46 -- bdev/nbd_common.sh@65 -- # count=0 00:08:26.208 21:30:46 -- bdev/nbd_common.sh@66 -- # echo 0 00:08:26.208 21:30:46 -- bdev/nbd_common.sh@104 -- # count=0 00:08:26.208 21:30:46 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:08:26.208 21:30:46 -- bdev/nbd_common.sh@109 -- # return 0 00:08:26.208 21:30:46 -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:08:26.775 21:30:46 -- event/event.sh@35 -- # sleep 3 00:08:27.709 [2024-12-06 21:30:48.043415] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:08:27.968 [2024-12-06 21:30:48.214983] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:27.968 [2024-12-06 
21:30:48.214985] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:27.968 [2024-12-06 21:30:48.378169] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:08:27.968 [2024-12-06 21:30:48.378233] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:08:29.874 spdk_app_start Round 1 00:08:29.874 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:08:29.874 21:30:49 -- event/event.sh@23 -- # for i in {0..2} 00:08:29.874 21:30:49 -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:08:29.874 21:30:49 -- event/event.sh@25 -- # waitforlisten 61846 /var/tmp/spdk-nbd.sock 00:08:29.874 21:30:49 -- common/autotest_common.sh@829 -- # '[' -z 61846 ']' 00:08:29.874 21:30:49 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:08:29.874 21:30:49 -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:29.874 21:30:49 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:08:29.874 21:30:49 -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:29.874 21:30:49 -- common/autotest_common.sh@10 -- # set +x 00:08:29.874 21:30:50 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:29.874 21:30:50 -- common/autotest_common.sh@862 -- # return 0 00:08:29.874 21:30:50 -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:08:30.133 Malloc0 00:08:30.133 21:30:50 -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:08:30.392 Malloc1 00:08:30.392 21:30:50 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:08:30.392 21:30:50 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:30.392 21:30:50 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:08:30.392 21:30:50 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:08:30.392 21:30:50 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:30.392 21:30:50 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:08:30.392 21:30:50 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:08:30.392 21:30:50 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:30.392 21:30:50 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:08:30.392 21:30:50 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:08:30.392 21:30:50 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:30.392 21:30:50 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:08:30.392 21:30:50 -- bdev/nbd_common.sh@12 -- # local i 00:08:30.392 21:30:50 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:08:30.392 21:30:50 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:08:30.392 21:30:50 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:08:30.651 /dev/nbd0 00:08:30.651 21:30:51 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:08:30.651 21:30:51 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:08:30.651 21:30:51 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:08:30.651 21:30:51 -- common/autotest_common.sh@867 -- # local i 00:08:30.651 21:30:51 -- common/autotest_common.sh@869 -- # (( i = 
1 )) 00:08:30.651 21:30:51 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:08:30.651 21:30:51 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:08:30.651 21:30:51 -- common/autotest_common.sh@871 -- # break 00:08:30.651 21:30:51 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:08:30.651 21:30:51 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:08:30.651 21:30:51 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:08:30.651 1+0 records in 00:08:30.651 1+0 records out 00:08:30.651 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000735488 s, 5.6 MB/s 00:08:30.651 21:30:51 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:08:30.651 21:30:51 -- common/autotest_common.sh@884 -- # size=4096 00:08:30.651 21:30:51 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:08:30.651 21:30:51 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:08:30.651 21:30:51 -- common/autotest_common.sh@887 -- # return 0 00:08:30.651 21:30:51 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:30.651 21:30:51 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:08:30.651 21:30:51 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:08:30.910 /dev/nbd1 00:08:30.910 21:30:51 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:08:30.910 21:30:51 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:08:30.910 21:30:51 -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:08:30.910 21:30:51 -- common/autotest_common.sh@867 -- # local i 00:08:30.910 21:30:51 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:08:30.910 21:30:51 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:08:30.910 21:30:51 -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:08:30.910 21:30:51 -- common/autotest_common.sh@871 -- # break 00:08:30.910 21:30:51 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:08:30.910 21:30:51 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:08:30.910 21:30:51 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:08:30.910 1+0 records in 00:08:30.910 1+0 records out 00:08:30.910 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000245712 s, 16.7 MB/s 00:08:30.910 21:30:51 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:08:30.910 21:30:51 -- common/autotest_common.sh@884 -- # size=4096 00:08:30.910 21:30:51 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:08:30.910 21:30:51 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:08:30.910 21:30:51 -- common/autotest_common.sh@887 -- # return 0 00:08:30.910 21:30:51 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:30.910 21:30:51 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:08:30.910 21:30:51 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:30.910 21:30:51 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:30.910 21:30:51 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:31.168 21:30:51 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:08:31.168 { 00:08:31.168 "nbd_device": "/dev/nbd0", 00:08:31.168 "bdev_name": "Malloc0" 00:08:31.168 }, 00:08:31.168 { 00:08:31.168 
"nbd_device": "/dev/nbd1", 00:08:31.168 "bdev_name": "Malloc1" 00:08:31.168 } 00:08:31.168 ]' 00:08:31.168 21:30:51 -- bdev/nbd_common.sh@64 -- # echo '[ 00:08:31.168 { 00:08:31.168 "nbd_device": "/dev/nbd0", 00:08:31.168 "bdev_name": "Malloc0" 00:08:31.168 }, 00:08:31.168 { 00:08:31.168 "nbd_device": "/dev/nbd1", 00:08:31.168 "bdev_name": "Malloc1" 00:08:31.168 } 00:08:31.168 ]' 00:08:31.168 21:30:51 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:31.168 21:30:51 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:08:31.168 /dev/nbd1' 00:08:31.168 21:30:51 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:08:31.168 /dev/nbd1' 00:08:31.168 21:30:51 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:31.168 21:30:51 -- bdev/nbd_common.sh@65 -- # count=2 00:08:31.168 21:30:51 -- bdev/nbd_common.sh@66 -- # echo 2 00:08:31.168 21:30:51 -- bdev/nbd_common.sh@95 -- # count=2 00:08:31.168 21:30:51 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:08:31.168 21:30:51 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:08:31.168 21:30:51 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:31.168 21:30:51 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:08:31.168 21:30:51 -- bdev/nbd_common.sh@71 -- # local operation=write 00:08:31.168 21:30:51 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:08:31.168 21:30:51 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:08:31.168 21:30:51 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:08:31.168 256+0 records in 00:08:31.168 256+0 records out 00:08:31.168 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00554459 s, 189 MB/s 00:08:31.168 21:30:51 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:31.168 21:30:51 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:08:31.168 256+0 records in 00:08:31.168 256+0 records out 00:08:31.168 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.027661 s, 37.9 MB/s 00:08:31.168 21:30:51 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:31.168 21:30:51 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:08:31.168 256+0 records in 00:08:31.168 256+0 records out 00:08:31.168 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0333683 s, 31.4 MB/s 00:08:31.168 21:30:51 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:08:31.168 21:30:51 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:31.168 21:30:51 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:08:31.168 21:30:51 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:08:31.168 21:30:51 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:08:31.168 21:30:51 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:08:31.168 21:30:51 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:08:31.168 21:30:51 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:31.168 21:30:51 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:08:31.427 21:30:51 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:31.427 21:30:51 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:08:31.427 21:30:51 -- 
bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:08:31.427 21:30:51 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:08:31.427 21:30:51 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:31.427 21:30:51 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:31.427 21:30:51 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:08:31.427 21:30:51 -- bdev/nbd_common.sh@51 -- # local i 00:08:31.427 21:30:51 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:31.427 21:30:51 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:08:31.685 21:30:51 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:08:31.685 21:30:51 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:08:31.686 21:30:51 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:08:31.686 21:30:51 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:31.686 21:30:51 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:31.686 21:30:51 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:08:31.686 21:30:51 -- bdev/nbd_common.sh@41 -- # break 00:08:31.686 21:30:51 -- bdev/nbd_common.sh@45 -- # return 0 00:08:31.686 21:30:51 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:31.686 21:30:51 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:08:31.686 21:30:52 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:08:31.686 21:30:52 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:08:31.686 21:30:52 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:08:31.686 21:30:52 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:31.686 21:30:52 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:31.686 21:30:52 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:08:31.686 21:30:52 -- bdev/nbd_common.sh@41 -- # break 00:08:31.686 21:30:52 -- bdev/nbd_common.sh@45 -- # return 0 00:08:31.686 21:30:52 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:31.686 21:30:52 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:31.686 21:30:52 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:31.943 21:30:52 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:08:31.944 21:30:52 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:08:31.944 21:30:52 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:31.944 21:30:52 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:08:31.944 21:30:52 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:31.944 21:30:52 -- bdev/nbd_common.sh@65 -- # echo '' 00:08:31.944 21:30:52 -- bdev/nbd_common.sh@65 -- # true 00:08:31.944 21:30:52 -- bdev/nbd_common.sh@65 -- # count=0 00:08:31.944 21:30:52 -- bdev/nbd_common.sh@66 -- # echo 0 00:08:31.944 21:30:52 -- bdev/nbd_common.sh@104 -- # count=0 00:08:31.944 21:30:52 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:08:31.944 21:30:52 -- bdev/nbd_common.sh@109 -- # return 0 00:08:31.944 21:30:52 -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:08:32.523 21:30:52 -- event/event.sh@35 -- # sleep 3 00:08:33.898 [2024-12-06 21:30:54.108308] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:08:33.898 [2024-12-06 21:30:54.319057] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 
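
The round that just completed shows app_repeat's core data-integrity cycle: export each Malloc bdev over NBD, write a fixed amount of random data through the block device, read it back and compare, then detach. Below is a minimal sketch of that cycle, assuming an SPDK target is already serving RPCs on /var/tmp/spdk-nbd.sock and the kernel nbd module is loaded; the scratch-file handling and loop bounds are illustrative, not the harness's own helpers.

```bash
#!/usr/bin/env bash
# Sketch of the per-round NBD write/verify cycle seen in the trace above.
set -euo pipefail

RPC_SOCK=/var/tmp/spdk-nbd.sock
RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
TMP=$(mktemp /tmp/nbdrandtest.XXXXXX)   # illustrative scratch path

# Create two 64 MiB malloc bdevs (4 KiB blocks) and export them as NBD devices.
for i in 0 1; do
  name=$("$RPC" -s "$RPC_SOCK" bdev_malloc_create 64 4096)  # prints e.g. Malloc0
  "$RPC" -s "$RPC_SOCK" nbd_start_disk "$name" "/dev/nbd$i"
done

# Write 1 MiB of random data to each device, then compare it byte-for-byte.
dd if=/dev/urandom of="$TMP" bs=4096 count=256
for i in 0 1; do
  dd if="$TMP" of="/dev/nbd$i" bs=4096 count=256 oflag=direct
  cmp -b -n 1M "$TMP" "/dev/nbd$i"      # non-zero exit => data mismatch
done

# Tear down: detach the NBD devices and remove the scratch file.
for i in 0 1; do
  "$RPC" -s "$RPC_SOCK" nbd_stop_disk "/dev/nbd$i"
done
rm -f "$TMP"
```

The direct-I/O flags matter here: oflag=direct on the write (and iflag=direct on the per-block reads earlier in the trace) bypass the page cache, so the comparison exercises the bdev behind /dev/nbdN rather than cached pages.
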
00:08:33.898 [2024-12-06 21:30:54.319059] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:34.157 [2024-12-06 21:30:54.519420] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:08:34.157 [2024-12-06 21:30:54.519508] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:08:35.530 spdk_app_start Round 2 00:08:35.530 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:08:35.530 21:30:55 -- event/event.sh@23 -- # for i in {0..2} 00:08:35.530 21:30:55 -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:08:35.530 21:30:55 -- event/event.sh@25 -- # waitforlisten 61846 /var/tmp/spdk-nbd.sock 00:08:35.530 21:30:55 -- common/autotest_common.sh@829 -- # '[' -z 61846 ']' 00:08:35.530 21:30:55 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:08:35.530 21:30:55 -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:35.530 21:30:55 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:08:35.530 21:30:55 -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:35.530 21:30:55 -- common/autotest_common.sh@10 -- # set +x 00:08:35.787 21:30:56 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:35.787 21:30:56 -- common/autotest_common.sh@862 -- # return 0 00:08:35.787 21:30:56 -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:08:36.045 Malloc0 00:08:36.045 21:30:56 -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:08:36.303 Malloc1 00:08:36.303 21:30:56 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:08:36.303 21:30:56 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:36.303 21:30:56 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:08:36.303 21:30:56 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:08:36.303 21:30:56 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:36.303 21:30:56 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:08:36.303 21:30:56 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:08:36.303 21:30:56 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:36.303 21:30:56 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:08:36.303 21:30:56 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:08:36.303 21:30:56 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:36.303 21:30:56 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:08:36.303 21:30:56 -- bdev/nbd_common.sh@12 -- # local i 00:08:36.303 21:30:56 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:08:36.303 21:30:56 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:08:36.303 21:30:56 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:08:36.561 /dev/nbd0 00:08:36.561 21:30:56 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:08:36.561 21:30:56 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:08:36.561 21:30:56 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:08:36.561 21:30:56 -- common/autotest_common.sh@867 -- # local i 00:08:36.561 21:30:56 -- 
common/autotest_common.sh@869 -- # (( i = 1 )) 00:08:36.561 21:30:56 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:08:36.561 21:30:56 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:08:36.561 21:30:56 -- common/autotest_common.sh@871 -- # break 00:08:36.561 21:30:56 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:08:36.561 21:30:56 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:08:36.561 21:30:56 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:08:36.561 1+0 records in 00:08:36.561 1+0 records out 00:08:36.561 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00022994 s, 17.8 MB/s 00:08:36.561 21:30:56 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:08:36.561 21:30:56 -- common/autotest_common.sh@884 -- # size=4096 00:08:36.561 21:30:56 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:08:36.561 21:30:56 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:08:36.561 21:30:56 -- common/autotest_common.sh@887 -- # return 0 00:08:36.561 21:30:56 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:36.561 21:30:56 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:08:36.561 21:30:56 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:08:36.819 /dev/nbd1 00:08:36.819 21:30:57 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:08:36.819 21:30:57 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:08:36.819 21:30:57 -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:08:36.819 21:30:57 -- common/autotest_common.sh@867 -- # local i 00:08:36.819 21:30:57 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:08:36.819 21:30:57 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:08:36.819 21:30:57 -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:08:36.819 21:30:57 -- common/autotest_common.sh@871 -- # break 00:08:36.819 21:30:57 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:08:36.819 21:30:57 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:08:36.819 21:30:57 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:08:36.819 1+0 records in 00:08:36.819 1+0 records out 00:08:36.819 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000257638 s, 15.9 MB/s 00:08:36.819 21:30:57 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:08:36.819 21:30:57 -- common/autotest_common.sh@884 -- # size=4096 00:08:36.819 21:30:57 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:08:36.819 21:30:57 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:08:36.819 21:30:57 -- common/autotest_common.sh@887 -- # return 0 00:08:36.819 21:30:57 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:36.819 21:30:57 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:08:36.819 21:30:57 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:36.819 21:30:57 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:36.819 21:30:57 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:37.077 21:30:57 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:08:37.077 { 00:08:37.077 "nbd_device": "/dev/nbd0", 00:08:37.077 "bdev_name": "Malloc0" 
00:08:37.077 }, 00:08:37.077 { 00:08:37.077 "nbd_device": "/dev/nbd1", 00:08:37.077 "bdev_name": "Malloc1" 00:08:37.077 } 00:08:37.077 ]' 00:08:37.077 21:30:57 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:37.077 21:30:57 -- bdev/nbd_common.sh@64 -- # echo '[ 00:08:37.077 { 00:08:37.077 "nbd_device": "/dev/nbd0", 00:08:37.077 "bdev_name": "Malloc0" 00:08:37.077 }, 00:08:37.077 { 00:08:37.077 "nbd_device": "/dev/nbd1", 00:08:37.077 "bdev_name": "Malloc1" 00:08:37.077 } 00:08:37.077 ]' 00:08:37.077 21:30:57 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:08:37.077 /dev/nbd1' 00:08:37.077 21:30:57 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:08:37.077 /dev/nbd1' 00:08:37.077 21:30:57 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:37.077 21:30:57 -- bdev/nbd_common.sh@65 -- # count=2 00:08:37.077 21:30:57 -- bdev/nbd_common.sh@66 -- # echo 2 00:08:37.077 21:30:57 -- bdev/nbd_common.sh@95 -- # count=2 00:08:37.077 21:30:57 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:08:37.077 21:30:57 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:08:37.077 21:30:57 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:37.077 21:30:57 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:08:37.077 21:30:57 -- bdev/nbd_common.sh@71 -- # local operation=write 00:08:37.077 21:30:57 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:08:37.077 21:30:57 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:08:37.077 21:30:57 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:08:37.077 256+0 records in 00:08:37.077 256+0 records out 00:08:37.077 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00680831 s, 154 MB/s 00:08:37.077 21:30:57 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:37.077 21:30:57 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:08:37.077 256+0 records in 00:08:37.077 256+0 records out 00:08:37.077 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0293207 s, 35.8 MB/s 00:08:37.077 21:30:57 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:37.077 21:30:57 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:08:37.077 256+0 records in 00:08:37.077 256+0 records out 00:08:37.077 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0314164 s, 33.4 MB/s 00:08:37.335 21:30:57 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:08:37.335 21:30:57 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:37.335 21:30:57 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:08:37.335 21:30:57 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:08:37.335 21:30:57 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:08:37.335 21:30:57 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:08:37.335 21:30:57 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:08:37.335 21:30:57 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:37.335 21:30:57 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:08:37.335 21:30:57 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:37.335 21:30:57 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 
/dev/nbd1 00:08:37.335 21:30:57 -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:08:37.335 21:30:57 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:08:37.335 21:30:57 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:37.335 21:30:57 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:37.335 21:30:57 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:08:37.335 21:30:57 -- bdev/nbd_common.sh@51 -- # local i 00:08:37.335 21:30:57 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:37.335 21:30:57 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:08:37.594 21:30:57 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:08:37.594 21:30:57 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:08:37.594 21:30:57 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:08:37.594 21:30:57 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:37.594 21:30:57 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:37.594 21:30:57 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:08:37.594 21:30:57 -- bdev/nbd_common.sh@41 -- # break 00:08:37.594 21:30:57 -- bdev/nbd_common.sh@45 -- # return 0 00:08:37.594 21:30:57 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:37.594 21:30:57 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:08:37.851 21:30:58 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:08:37.851 21:30:58 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:08:37.851 21:30:58 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:08:37.851 21:30:58 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:37.851 21:30:58 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:37.851 21:30:58 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:08:37.851 21:30:58 -- bdev/nbd_common.sh@41 -- # break 00:08:37.851 21:30:58 -- bdev/nbd_common.sh@45 -- # return 0 00:08:37.852 21:30:58 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:37.852 21:30:58 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:37.852 21:30:58 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:38.109 21:30:58 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:08:38.109 21:30:58 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:08:38.109 21:30:58 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:38.110 21:30:58 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:08:38.110 21:30:58 -- bdev/nbd_common.sh@65 -- # echo '' 00:08:38.110 21:30:58 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:38.110 21:30:58 -- bdev/nbd_common.sh@65 -- # true 00:08:38.110 21:30:58 -- bdev/nbd_common.sh@65 -- # count=0 00:08:38.110 21:30:58 -- bdev/nbd_common.sh@66 -- # echo 0 00:08:38.110 21:30:58 -- bdev/nbd_common.sh@104 -- # count=0 00:08:38.110 21:30:58 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:08:38.110 21:30:58 -- bdev/nbd_common.sh@109 -- # return 0 00:08:38.110 21:30:58 -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:08:38.368 21:30:58 -- event/event.sh@35 -- # sleep 3 00:08:39.743 [2024-12-06 21:31:00.120352] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:08:40.001 [2024-12-06 21:31:00.320290] reactor.c: 937:reactor_run: 
*NOTICE*: Reactor started on core 1 00:08:40.001 [2024-12-06 21:31:00.320294] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:40.260 [2024-12-06 21:31:00.514807] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:08:40.260 [2024-12-06 21:31:00.514875] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:08:41.635 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:08:41.635 21:31:01 -- event/event.sh@38 -- # waitforlisten 61846 /var/tmp/spdk-nbd.sock 00:08:41.635 21:31:01 -- common/autotest_common.sh@829 -- # '[' -z 61846 ']' 00:08:41.635 21:31:01 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:08:41.635 21:31:01 -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:41.635 21:31:01 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:08:41.635 21:31:01 -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:41.635 21:31:01 -- common/autotest_common.sh@10 -- # set +x 00:08:41.635 21:31:02 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:41.635 21:31:02 -- common/autotest_common.sh@862 -- # return 0 00:08:41.635 21:31:02 -- event/event.sh@39 -- # killprocess 61846 00:08:41.635 21:31:02 -- common/autotest_common.sh@936 -- # '[' -z 61846 ']' 00:08:41.635 21:31:02 -- common/autotest_common.sh@940 -- # kill -0 61846 00:08:41.635 21:31:02 -- common/autotest_common.sh@941 -- # uname 00:08:41.635 21:31:02 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:08:41.635 21:31:02 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 61846 00:08:41.894 killing process with pid 61846 00:08:41.894 21:31:02 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:08:41.894 21:31:02 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:08:41.894 21:31:02 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 61846' 00:08:41.894 21:31:02 -- common/autotest_common.sh@955 -- # kill 61846 00:08:41.894 21:31:02 -- common/autotest_common.sh@960 -- # wait 61846 00:08:42.854 spdk_app_start is called in Round 0. 00:08:42.854 Shutdown signal received, stop current app iteration 00:08:42.854 Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 reinitialization... 00:08:42.854 spdk_app_start is called in Round 1. 00:08:42.854 Shutdown signal received, stop current app iteration 00:08:42.854 Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 reinitialization... 00:08:42.854 spdk_app_start is called in Round 2. 00:08:42.854 Shutdown signal received, stop current app iteration 00:08:42.854 Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 reinitialization... 00:08:42.854 spdk_app_start is called in Round 3. 
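
Each round above is bracketed by the same process-lifecycle helpers visible in the trace: waitforlisten polls (with max_retries=100) until the app process is alive and its UNIX-domain RPC socket is accepting, and killprocess verifies the pid and its command name before signalling it. A simplified sketch of both follows; the real versions live in test/common/autotest_common.sh and perform more validation than shown.

```bash
# Hedged approximations of the harness helpers; retry count, kill -0 probe,
# and the ps comm-name lookup mirror the trace, the bodies are simplified.
waitforlisten() {
  local pid=$1 sock=${2:-/var/tmp/spdk.sock}
  local i
  for ((i = 0; i < 100; i++)); do
    kill -0 "$pid" 2>/dev/null || return 1   # process died while starting
    [[ -S $sock ]] && return 0               # socket exists: app is up
    sleep 0.1
  done
  return 1
}

killprocess() {
  local pid=$1
  kill -0 "$pid" || return 1                 # refuse to act on a dead pid
  ps --no-headers -o comm= "$pid"            # record what is being killed
  kill "$pid" && wait "$pid" || true         # terminate and reap the child
}
```
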
00:08:42.854 Shutdown signal received, stop current app iteration 00:08:42.854 ************************************ 00:08:42.854 END TEST app_repeat 00:08:42.854 ************************************ 00:08:42.854 21:31:03 -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:08:42.854 21:31:03 -- event/event.sh@42 -- # return 0 00:08:42.854 00:08:42.854 real 0m19.816s 00:08:42.854 user 0m42.190s 00:08:42.854 sys 0m2.772s 00:08:42.854 21:31:03 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:42.854 21:31:03 -- common/autotest_common.sh@10 -- # set +x 00:08:42.854 21:31:03 -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:08:42.854 21:31:03 -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:08:42.854 21:31:03 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:42.854 21:31:03 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:42.854 21:31:03 -- common/autotest_common.sh@10 -- # set +x 00:08:42.854 ************************************ 00:08:42.854 START TEST cpu_locks 00:08:42.854 ************************************ 00:08:42.854 21:31:03 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:08:42.854 * Looking for test storage... 00:08:42.854 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:08:42.854 21:31:03 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:08:42.854 21:31:03 -- common/autotest_common.sh@1690 -- # lcov --version 00:08:42.854 21:31:03 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:08:43.112 21:31:03 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:08:43.112 21:31:03 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:08:43.112 21:31:03 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:08:43.112 21:31:03 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:08:43.112 21:31:03 -- scripts/common.sh@335 -- # IFS=.-: 00:08:43.112 21:31:03 -- scripts/common.sh@335 -- # read -ra ver1 00:08:43.112 21:31:03 -- scripts/common.sh@336 -- # IFS=.-: 00:08:43.112 21:31:03 -- scripts/common.sh@336 -- # read -ra ver2 00:08:43.112 21:31:03 -- scripts/common.sh@337 -- # local 'op=<' 00:08:43.112 21:31:03 -- scripts/common.sh@339 -- # ver1_l=2 00:08:43.112 21:31:03 -- scripts/common.sh@340 -- # ver2_l=1 00:08:43.113 21:31:03 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:08:43.113 21:31:03 -- scripts/common.sh@343 -- # case "$op" in 00:08:43.113 21:31:03 -- scripts/common.sh@344 -- # : 1 00:08:43.113 21:31:03 -- scripts/common.sh@363 -- # (( v = 0 )) 00:08:43.113 21:31:03 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:43.113 21:31:03 -- scripts/common.sh@364 -- # decimal 1 00:08:43.113 21:31:03 -- scripts/common.sh@352 -- # local d=1 00:08:43.113 21:31:03 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:43.113 21:31:03 -- scripts/common.sh@354 -- # echo 1 00:08:43.113 21:31:03 -- scripts/common.sh@364 -- # ver1[v]=1 00:08:43.113 21:31:03 -- scripts/common.sh@365 -- # decimal 2 00:08:43.113 21:31:03 -- scripts/common.sh@352 -- # local d=2 00:08:43.113 21:31:03 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:43.113 21:31:03 -- scripts/common.sh@354 -- # echo 2 00:08:43.113 21:31:03 -- scripts/common.sh@365 -- # ver2[v]=2 00:08:43.113 21:31:03 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:08:43.113 21:31:03 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:08:43.113 21:31:03 -- scripts/common.sh@367 -- # return 0 00:08:43.113 21:31:03 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:43.113 21:31:03 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:08:43.113 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:43.113 --rc genhtml_branch_coverage=1 00:08:43.113 --rc genhtml_function_coverage=1 00:08:43.113 --rc genhtml_legend=1 00:08:43.113 --rc geninfo_all_blocks=1 00:08:43.113 --rc geninfo_unexecuted_blocks=1 00:08:43.113 00:08:43.113 ' 00:08:43.113 21:31:03 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:08:43.113 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:43.113 --rc genhtml_branch_coverage=1 00:08:43.113 --rc genhtml_function_coverage=1 00:08:43.113 --rc genhtml_legend=1 00:08:43.113 --rc geninfo_all_blocks=1 00:08:43.113 --rc geninfo_unexecuted_blocks=1 00:08:43.113 00:08:43.113 ' 00:08:43.113 21:31:03 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:08:43.113 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:43.113 --rc genhtml_branch_coverage=1 00:08:43.113 --rc genhtml_function_coverage=1 00:08:43.113 --rc genhtml_legend=1 00:08:43.113 --rc geninfo_all_blocks=1 00:08:43.113 --rc geninfo_unexecuted_blocks=1 00:08:43.113 00:08:43.113 ' 00:08:43.113 21:31:03 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:08:43.113 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:43.113 --rc genhtml_branch_coverage=1 00:08:43.113 --rc genhtml_function_coverage=1 00:08:43.113 --rc genhtml_legend=1 00:08:43.113 --rc geninfo_all_blocks=1 00:08:43.113 --rc geninfo_unexecuted_blocks=1 00:08:43.113 00:08:43.113 ' 00:08:43.113 21:31:03 -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:08:43.113 21:31:03 -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:08:43.113 21:31:03 -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:08:43.113 21:31:03 -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:08:43.113 21:31:03 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:43.113 21:31:03 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:43.113 21:31:03 -- common/autotest_common.sh@10 -- # set +x 00:08:43.113 ************************************ 00:08:43.113 START TEST default_locks 00:08:43.113 ************************************ 00:08:43.113 21:31:03 -- common/autotest_common.sh@1114 -- # default_locks 00:08:43.113 21:31:03 -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=62347 00:08:43.113 21:31:03 -- event/cpu_locks.sh@47 -- # waitforlisten 62347 00:08:43.113 21:31:03 -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 
-m 0x1 00:08:43.113 21:31:03 -- common/autotest_common.sh@829 -- # '[' -z 62347 ']' 00:08:43.113 21:31:03 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:43.113 21:31:03 -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:43.113 21:31:03 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:43.113 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:43.113 21:31:03 -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:43.113 21:31:03 -- common/autotest_common.sh@10 -- # set +x 00:08:43.113 [2024-12-06 21:31:03.477750] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:08:43.113 [2024-12-06 21:31:03.477925] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62347 ] 00:08:43.371 [2024-12-06 21:31:03.650616] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:43.371 [2024-12-06 21:31:03.811484] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:08:43.371 [2024-12-06 21:31:03.811754] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:44.776 21:31:05 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:44.776 21:31:05 -- common/autotest_common.sh@862 -- # return 0 00:08:44.776 21:31:05 -- event/cpu_locks.sh@49 -- # locks_exist 62347 00:08:44.776 21:31:05 -- event/cpu_locks.sh@22 -- # lslocks -p 62347 00:08:44.776 21:31:05 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:08:45.038 21:31:05 -- event/cpu_locks.sh@50 -- # killprocess 62347 00:08:45.038 21:31:05 -- common/autotest_common.sh@936 -- # '[' -z 62347 ']' 00:08:45.038 21:31:05 -- common/autotest_common.sh@940 -- # kill -0 62347 00:08:45.038 21:31:05 -- common/autotest_common.sh@941 -- # uname 00:08:45.038 21:31:05 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:08:45.038 21:31:05 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 62347 00:08:45.038 killing process with pid 62347 00:08:45.038 21:31:05 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:08:45.038 21:31:05 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:08:45.038 21:31:05 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 62347' 00:08:45.038 21:31:05 -- common/autotest_common.sh@955 -- # kill 62347 00:08:45.038 21:31:05 -- common/autotest_common.sh@960 -- # wait 62347 00:08:46.939 21:31:07 -- event/cpu_locks.sh@52 -- # NOT waitforlisten 62347 00:08:46.939 21:31:07 -- common/autotest_common.sh@650 -- # local es=0 00:08:46.939 21:31:07 -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 62347 00:08:46.939 21:31:07 -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:08:46.939 21:31:07 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:46.939 21:31:07 -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:08:46.939 21:31:07 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:46.939 21:31:07 -- common/autotest_common.sh@653 -- # waitforlisten 62347 00:08:46.939 21:31:07 -- common/autotest_common.sh@829 -- # '[' -z 62347 ']' 00:08:46.939 21:31:07 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:46.939 21:31:07 -- 
common/autotest_common.sh@834 -- # local max_retries=100 00:08:46.939 21:31:07 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:46.939 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:46.939 21:31:07 -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:46.939 21:31:07 -- common/autotest_common.sh@10 -- # set +x 00:08:46.939 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 844: kill: (62347) - No such process 00:08:46.939 ERROR: process (pid: 62347) is no longer running 00:08:46.939 21:31:07 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:46.939 21:31:07 -- common/autotest_common.sh@862 -- # return 1 00:08:46.939 21:31:07 -- common/autotest_common.sh@653 -- # es=1 00:08:46.939 21:31:07 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:46.939 21:31:07 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:46.939 21:31:07 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:46.939 21:31:07 -- event/cpu_locks.sh@54 -- # no_locks 00:08:46.939 21:31:07 -- event/cpu_locks.sh@26 -- # lock_files=() 00:08:46.939 21:31:07 -- event/cpu_locks.sh@26 -- # local lock_files 00:08:46.939 21:31:07 -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:08:46.939 00:08:46.939 real 0m3.925s 00:08:46.939 user 0m4.083s 00:08:46.939 sys 0m0.621s 00:08:46.939 21:31:07 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:46.939 21:31:07 -- common/autotest_common.sh@10 -- # set +x 00:08:46.939 ************************************ 00:08:46.939 END TEST default_locks 00:08:46.939 ************************************ 00:08:46.939 21:31:07 -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:08:46.939 21:31:07 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:46.939 21:31:07 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:46.939 21:31:07 -- common/autotest_common.sh@10 -- # set +x 00:08:46.939 ************************************ 00:08:46.939 START TEST default_locks_via_rpc 00:08:46.939 ************************************ 00:08:46.939 21:31:07 -- common/autotest_common.sh@1114 -- # default_locks_via_rpc 00:08:46.939 21:31:07 -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=62419 00:08:46.939 21:31:07 -- event/cpu_locks.sh@63 -- # waitforlisten 62419 00:08:46.939 21:31:07 -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:08:46.939 21:31:07 -- common/autotest_common.sh@829 -- # '[' -z 62419 ']' 00:08:46.939 21:31:07 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:46.939 21:31:07 -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:46.939 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:46.939 21:31:07 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:46.939 21:31:07 -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:46.939 21:31:07 -- common/autotest_common.sh@10 -- # set +x 00:08:47.198 [2024-12-06 21:31:07.451712] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:08:47.198 [2024-12-06 21:31:07.451901] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62419 ] 00:08:47.198 [2024-12-06 21:31:07.623964] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:47.457 [2024-12-06 21:31:07.798777] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:08:47.457 [2024-12-06 21:31:07.799008] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:48.853 21:31:09 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:48.853 21:31:09 -- common/autotest_common.sh@862 -- # return 0 00:08:48.853 21:31:09 -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:08:48.853 21:31:09 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:48.853 21:31:09 -- common/autotest_common.sh@10 -- # set +x 00:08:48.853 21:31:09 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:48.853 21:31:09 -- event/cpu_locks.sh@67 -- # no_locks 00:08:48.853 21:31:09 -- event/cpu_locks.sh@26 -- # lock_files=() 00:08:48.853 21:31:09 -- event/cpu_locks.sh@26 -- # local lock_files 00:08:48.853 21:31:09 -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:08:48.853 21:31:09 -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:08:48.853 21:31:09 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:48.853 21:31:09 -- common/autotest_common.sh@10 -- # set +x 00:08:48.853 21:31:09 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:48.853 21:31:09 -- event/cpu_locks.sh@71 -- # locks_exist 62419 00:08:48.853 21:31:09 -- event/cpu_locks.sh@22 -- # lslocks -p 62419 00:08:48.853 21:31:09 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:08:49.112 21:31:09 -- event/cpu_locks.sh@73 -- # killprocess 62419 00:08:49.112 21:31:09 -- common/autotest_common.sh@936 -- # '[' -z 62419 ']' 00:08:49.112 21:31:09 -- common/autotest_common.sh@940 -- # kill -0 62419 00:08:49.112 21:31:09 -- common/autotest_common.sh@941 -- # uname 00:08:49.112 21:31:09 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:08:49.112 21:31:09 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 62419 00:08:49.370 21:31:09 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:08:49.370 21:31:09 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:08:49.370 killing process with pid 62419 00:08:49.370 21:31:09 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 62419' 00:08:49.370 21:31:09 -- common/autotest_common.sh@955 -- # kill 62419 00:08:49.370 21:31:09 -- common/autotest_common.sh@960 -- # wait 62419 00:08:51.273 00:08:51.273 real 0m4.321s 00:08:51.273 user 0m4.544s 00:08:51.273 sys 0m0.717s 00:08:51.273 21:31:11 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:51.273 21:31:11 -- common/autotest_common.sh@10 -- # set +x 00:08:51.273 ************************************ 00:08:51.273 END TEST default_locks_via_rpc 00:08:51.273 ************************************ 00:08:51.273 21:31:11 -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:08:51.273 21:31:11 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:51.273 21:31:11 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:51.273 21:31:11 -- common/autotest_common.sh@10 -- # set +x 00:08:51.273 
************************************ 00:08:51.273 START TEST non_locking_app_on_locked_coremask 00:08:51.273 ************************************ 00:08:51.273 21:31:11 -- common/autotest_common.sh@1114 -- # non_locking_app_on_locked_coremask 00:08:51.273 21:31:11 -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=62495 00:08:51.273 21:31:11 -- event/cpu_locks.sh@81 -- # waitforlisten 62495 /var/tmp/spdk.sock 00:08:51.273 21:31:11 -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:08:51.273 21:31:11 -- common/autotest_common.sh@829 -- # '[' -z 62495 ']' 00:08:51.273 21:31:11 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:51.273 21:31:11 -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:51.273 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:51.273 21:31:11 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:51.273 21:31:11 -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:51.273 21:31:11 -- common/autotest_common.sh@10 -- # set +x 00:08:51.531 [2024-12-06 21:31:11.828852] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:08:51.531 [2024-12-06 21:31:11.829069] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62495 ] 00:08:51.531 [2024-12-06 21:31:11.997228] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:51.789 [2024-12-06 21:31:12.200985] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:08:51.789 [2024-12-06 21:31:12.201247] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:53.163 21:31:13 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:53.163 21:31:13 -- common/autotest_common.sh@862 -- # return 0 00:08:53.163 21:31:13 -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:08:53.163 21:31:13 -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=62524 00:08:53.163 21:31:13 -- event/cpu_locks.sh@85 -- # waitforlisten 62524 /var/tmp/spdk2.sock 00:08:53.163 21:31:13 -- common/autotest_common.sh@829 -- # '[' -z 62524 ']' 00:08:53.163 21:31:13 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:08:53.163 21:31:13 -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:53.163 21:31:13 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:08:53.163 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:08:53.163 21:31:13 -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:53.163 21:31:13 -- common/autotest_common.sh@10 -- # set +x 00:08:53.163 [2024-12-06 21:31:13.571341] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:08:53.163 [2024-12-06 21:31:13.571521] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62524 ] 00:08:53.421 [2024-12-06 21:31:13.737011] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:08:53.421 [2024-12-06 21:31:13.737083] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:53.679 [2024-12-06 21:31:14.083557] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:08:53.679 [2024-12-06 21:31:14.083804] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:55.581 21:31:15 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:55.581 21:31:15 -- common/autotest_common.sh@862 -- # return 0 00:08:55.581 21:31:15 -- event/cpu_locks.sh@87 -- # locks_exist 62495 00:08:55.581 21:31:15 -- event/cpu_locks.sh@22 -- # lslocks -p 62495 00:08:55.581 21:31:15 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:08:56.518 21:31:16 -- event/cpu_locks.sh@89 -- # killprocess 62495 00:08:56.518 21:31:16 -- common/autotest_common.sh@936 -- # '[' -z 62495 ']' 00:08:56.518 21:31:16 -- common/autotest_common.sh@940 -- # kill -0 62495 00:08:56.518 21:31:16 -- common/autotest_common.sh@941 -- # uname 00:08:56.518 21:31:16 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:08:56.518 21:31:16 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 62495 00:08:56.518 21:31:16 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:08:56.518 21:31:16 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:08:56.518 21:31:16 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 62495' 00:08:56.518 killing process with pid 62495 00:08:56.518 21:31:16 -- common/autotest_common.sh@955 -- # kill 62495 00:08:56.518 21:31:16 -- common/autotest_common.sh@960 -- # wait 62495 00:09:00.742 21:31:20 -- event/cpu_locks.sh@90 -- # killprocess 62524 00:09:00.742 21:31:20 -- common/autotest_common.sh@936 -- # '[' -z 62524 ']' 00:09:00.742 21:31:20 -- common/autotest_common.sh@940 -- # kill -0 62524 00:09:00.742 21:31:20 -- common/autotest_common.sh@941 -- # uname 00:09:00.742 21:31:20 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:09:00.742 21:31:20 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 62524 00:09:00.742 21:31:20 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:09:00.742 21:31:20 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:09:00.742 21:31:20 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 62524' 00:09:00.742 killing process with pid 62524 00:09:00.742 21:31:20 -- common/autotest_common.sh@955 -- # kill 62524 00:09:00.742 21:31:20 -- common/autotest_common.sh@960 -- # wait 62524 00:09:02.119 00:09:02.119 real 0m10.847s 00:09:02.119 user 0m11.755s 00:09:02.119 sys 0m1.414s 00:09:02.119 21:31:22 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:02.119 ************************************ 00:09:02.119 21:31:22 -- common/autotest_common.sh@10 -- # set +x 00:09:02.119 END TEST non_locking_app_on_locked_coremask 00:09:02.119 ************************************ 00:09:02.390 21:31:22 -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:09:02.390 21:31:22 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:09:02.390 21:31:22 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:02.390 21:31:22 -- common/autotest_common.sh@10 -- # set +x 00:09:02.390 ************************************ 00:09:02.390 START TEST locking_app_on_unlocked_coremask 00:09:02.390 ************************************ 00:09:02.390 21:31:22 -- common/autotest_common.sh@1114 -- # locking_app_on_unlocked_coremask 00:09:02.390 21:31:22 -- 
event/cpu_locks.sh@98 -- # spdk_tgt_pid=62658 00:09:02.390 21:31:22 -- event/cpu_locks.sh@99 -- # waitforlisten 62658 /var/tmp/spdk.sock 00:09:02.390 21:31:22 -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:09:02.390 21:31:22 -- common/autotest_common.sh@829 -- # '[' -z 62658 ']' 00:09:02.390 21:31:22 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:02.390 21:31:22 -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:02.390 21:31:22 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:02.390 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:02.390 21:31:22 -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:02.390 21:31:22 -- common/autotest_common.sh@10 -- # set +x 00:09:02.390 [2024-12-06 21:31:22.733018] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:09:02.390 [2024-12-06 21:31:22.733213] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62658 ] 00:09:02.648 [2024-12-06 21:31:22.906586] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:09:02.648 [2024-12-06 21:31:22.906652] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:02.648 [2024-12-06 21:31:23.070848] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:09:02.648 [2024-12-06 21:31:23.071132] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:04.027 21:31:24 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:04.027 21:31:24 -- common/autotest_common.sh@862 -- # return 0 00:09:04.027 21:31:24 -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:09:04.027 21:31:24 -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=62687 00:09:04.027 21:31:24 -- event/cpu_locks.sh@103 -- # waitforlisten 62687 /var/tmp/spdk2.sock 00:09:04.027 21:31:24 -- common/autotest_common.sh@829 -- # '[' -z 62687 ']' 00:09:04.027 21:31:24 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:09:04.027 21:31:24 -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:04.027 21:31:24 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:09:04.027 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:09:04.027 21:31:24 -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:04.027 21:31:24 -- common/autotest_common.sh@10 -- # set +x 00:09:04.027 [2024-12-06 21:31:24.423307] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:09:04.027 [2024-12-06 21:31:24.423495] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62687 ] 00:09:04.284 [2024-12-06 21:31:24.598711] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:04.543 [2024-12-06 21:31:24.929238] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:09:04.543 [2024-12-06 21:31:24.932520] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:06.445 21:31:26 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:06.445 21:31:26 -- common/autotest_common.sh@862 -- # return 0 00:09:06.445 21:31:26 -- event/cpu_locks.sh@105 -- # locks_exist 62687 00:09:06.445 21:31:26 -- event/cpu_locks.sh@22 -- # lslocks -p 62687 00:09:06.445 21:31:26 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:09:07.382 21:31:27 -- event/cpu_locks.sh@107 -- # killprocess 62658 00:09:07.382 21:31:27 -- common/autotest_common.sh@936 -- # '[' -z 62658 ']' 00:09:07.382 21:31:27 -- common/autotest_common.sh@940 -- # kill -0 62658 00:09:07.382 21:31:27 -- common/autotest_common.sh@941 -- # uname 00:09:07.382 21:31:27 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:09:07.382 21:31:27 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 62658 00:09:07.382 21:31:27 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:09:07.382 killing process with pid 62658 00:09:07.382 21:31:27 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:09:07.382 21:31:27 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 62658' 00:09:07.382 21:31:27 -- common/autotest_common.sh@955 -- # kill 62658 00:09:07.382 21:31:27 -- common/autotest_common.sh@960 -- # wait 62658 00:09:11.568 21:31:31 -- event/cpu_locks.sh@108 -- # killprocess 62687 00:09:11.568 21:31:31 -- common/autotest_common.sh@936 -- # '[' -z 62687 ']' 00:09:11.568 21:31:31 -- common/autotest_common.sh@940 -- # kill -0 62687 00:09:11.568 21:31:31 -- common/autotest_common.sh@941 -- # uname 00:09:11.568 21:31:31 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:09:11.568 21:31:31 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 62687 00:09:11.568 21:31:31 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:09:11.568 21:31:31 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:09:11.568 21:31:31 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 62687' 00:09:11.568 killing process with pid 62687 00:09:11.568 21:31:31 -- common/autotest_common.sh@955 -- # kill 62687 00:09:11.568 21:31:31 -- common/autotest_common.sh@960 -- # wait 62687 00:09:13.469 00:09:13.469 real 0m11.281s 00:09:13.469 user 0m12.146s 00:09:13.469 sys 0m1.388s 00:09:13.469 21:31:33 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:13.469 21:31:33 -- common/autotest_common.sh@10 -- # set +x 00:09:13.469 ************************************ 00:09:13.469 END TEST locking_app_on_unlocked_coremask 00:09:13.469 ************************************ 00:09:13.729 21:31:33 -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:09:13.729 21:31:33 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:09:13.729 21:31:33 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:13.729 21:31:33 -- common/autotest_common.sh@10 -- # set 
+x 00:09:13.729 ************************************ 00:09:13.729 START TEST locking_app_on_locked_coremask 00:09:13.729 ************************************ 00:09:13.729 21:31:33 -- common/autotest_common.sh@1114 -- # locking_app_on_locked_coremask 00:09:13.729 21:31:33 -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=62826 00:09:13.729 21:31:33 -- event/cpu_locks.sh@116 -- # waitforlisten 62826 /var/tmp/spdk.sock 00:09:13.729 21:31:33 -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:09:13.729 21:31:33 -- common/autotest_common.sh@829 -- # '[' -z 62826 ']' 00:09:13.729 21:31:33 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:13.729 21:31:33 -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:13.729 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:13.729 21:31:33 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:13.729 21:31:33 -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:13.729 21:31:33 -- common/autotest_common.sh@10 -- # set +x 00:09:13.729 [2024-12-06 21:31:34.087731] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:09:13.729 [2024-12-06 21:31:34.087894] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62826 ] 00:09:13.988 [2024-12-06 21:31:34.260248] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:13.988 [2024-12-06 21:31:34.453164] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:09:13.988 [2024-12-06 21:31:34.453472] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:15.364 21:31:35 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:15.364 21:31:35 -- common/autotest_common.sh@862 -- # return 0 00:09:15.364 21:31:35 -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=62850 00:09:15.364 21:31:35 -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:09:15.364 21:31:35 -- event/cpu_locks.sh@120 -- # NOT waitforlisten 62850 /var/tmp/spdk2.sock 00:09:15.364 21:31:35 -- common/autotest_common.sh@650 -- # local es=0 00:09:15.364 21:31:35 -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 62850 /var/tmp/spdk2.sock 00:09:15.364 21:31:35 -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:09:15.364 21:31:35 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:15.364 21:31:35 -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:09:15.364 21:31:35 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:15.364 21:31:35 -- common/autotest_common.sh@653 -- # waitforlisten 62850 /var/tmp/spdk2.sock 00:09:15.364 21:31:35 -- common/autotest_common.sh@829 -- # '[' -z 62850 ']' 00:09:15.364 21:31:35 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:09:15.364 21:31:35 -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:15.364 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:09:15.364 21:31:35 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
00:09:15.364 21:31:35 -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:15.364 21:31:35 -- common/autotest_common.sh@10 -- # set +x 00:09:15.364 [2024-12-06 21:31:35.774632] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:09:15.364 [2024-12-06 21:31:35.774797] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62850 ] 00:09:15.623 [2024-12-06 21:31:35.944863] app.c: 665:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 62826 has claimed it. 00:09:15.623 [2024-12-06 21:31:35.944953] app.c: 791:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:09:16.190 ERROR: process (pid: 62850) is no longer running 00:09:16.190 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 844: kill: (62850) - No such process 00:09:16.190 21:31:36 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:16.190 21:31:36 -- common/autotest_common.sh@862 -- # return 1 00:09:16.190 21:31:36 -- common/autotest_common.sh@653 -- # es=1 00:09:16.190 21:31:36 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:09:16.190 21:31:36 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:09:16.190 21:31:36 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:09:16.190 21:31:36 -- event/cpu_locks.sh@122 -- # locks_exist 62826 00:09:16.190 21:31:36 -- event/cpu_locks.sh@22 -- # lslocks -p 62826 00:09:16.190 21:31:36 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:09:16.449 21:31:36 -- event/cpu_locks.sh@124 -- # killprocess 62826 00:09:16.449 21:31:36 -- common/autotest_common.sh@936 -- # '[' -z 62826 ']' 00:09:16.449 21:31:36 -- common/autotest_common.sh@940 -- # kill -0 62826 00:09:16.449 21:31:36 -- common/autotest_common.sh@941 -- # uname 00:09:16.449 21:31:36 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:09:16.449 21:31:36 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 62826 00:09:16.707 21:31:36 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:09:16.707 21:31:36 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:09:16.707 killing process with pid 62826 00:09:16.707 21:31:36 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 62826' 00:09:16.707 21:31:36 -- common/autotest_common.sh@955 -- # kill 62826 00:09:16.707 21:31:36 -- common/autotest_common.sh@960 -- # wait 62826 00:09:18.611 00:09:18.611 real 0m5.036s 00:09:18.611 user 0m5.511s 00:09:18.611 sys 0m0.830s 00:09:18.611 21:31:39 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:18.611 ************************************ 00:09:18.611 END TEST locking_app_on_locked_coremask 00:09:18.611 ************************************ 00:09:18.611 21:31:39 -- common/autotest_common.sh@10 -- # set +x 00:09:18.611 21:31:39 -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:09:18.611 21:31:39 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:09:18.611 21:31:39 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:18.611 21:31:39 -- common/autotest_common.sh@10 -- # set +x 00:09:18.611 ************************************ 00:09:18.611 START TEST locking_overlapped_coremask 00:09:18.611 ************************************ 00:09:18.611 21:31:39 -- common/autotest_common.sh@1114 -- # locking_overlapped_coremask 00:09:18.611 21:31:39 
-- event/cpu_locks.sh@132 -- # spdk_tgt_pid=62914 00:09:18.611 21:31:39 -- event/cpu_locks.sh@133 -- # waitforlisten 62914 /var/tmp/spdk.sock 00:09:18.611 21:31:39 -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:09:18.611 21:31:39 -- common/autotest_common.sh@829 -- # '[' -z 62914 ']' 00:09:18.611 21:31:39 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:18.611 21:31:39 -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:18.611 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:18.611 21:31:39 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:18.611 21:31:39 -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:18.611 21:31:39 -- common/autotest_common.sh@10 -- # set +x 00:09:18.870 [2024-12-06 21:31:39.154889] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:09:18.870 [2024-12-06 21:31:39.155054] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62914 ] 00:09:18.870 [2024-12-06 21:31:39.325670] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:19.129 [2024-12-06 21:31:39.523040] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:09:19.129 [2024-12-06 21:31:39.523420] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:19.129 [2024-12-06 21:31:39.524138] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:09:19.129 [2024-12-06 21:31:39.524164] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:20.505 21:31:40 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:20.505 21:31:40 -- common/autotest_common.sh@862 -- # return 0 00:09:20.505 21:31:40 -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:09:20.505 21:31:40 -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=62945 00:09:20.505 21:31:40 -- event/cpu_locks.sh@137 -- # NOT waitforlisten 62945 /var/tmp/spdk2.sock 00:09:20.505 21:31:40 -- common/autotest_common.sh@650 -- # local es=0 00:09:20.505 21:31:40 -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 62945 /var/tmp/spdk2.sock 00:09:20.505 21:31:40 -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:09:20.505 21:31:40 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:20.505 21:31:40 -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:09:20.505 21:31:40 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:20.505 21:31:40 -- common/autotest_common.sh@653 -- # waitforlisten 62945 /var/tmp/spdk2.sock 00:09:20.505 21:31:40 -- common/autotest_common.sh@829 -- # '[' -z 62945 ']' 00:09:20.505 21:31:40 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:09:20.505 21:31:40 -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:20.505 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:09:20.505 21:31:40 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
00:09:20.505 21:31:40 -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:20.505 21:31:40 -- common/autotest_common.sh@10 -- # set +x 00:09:20.505 [2024-12-06 21:31:40.890329] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:09:20.505 [2024-12-06 21:31:40.890471] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62945 ] 00:09:20.812 [2024-12-06 21:31:41.063328] app.c: 665:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 62914 has claimed it. 00:09:20.812 [2024-12-06 21:31:41.063423] app.c: 791:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:09:21.069 ERROR: process (pid: 62945) is no longer running 00:09:21.069 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 844: kill: (62945) - No such process 00:09:21.069 21:31:41 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:21.069 21:31:41 -- common/autotest_common.sh@862 -- # return 1 00:09:21.069 21:31:41 -- common/autotest_common.sh@653 -- # es=1 00:09:21.069 21:31:41 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:09:21.069 21:31:41 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:09:21.069 21:31:41 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:09:21.069 21:31:41 -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:09:21.069 21:31:41 -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:09:21.069 21:31:41 -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:09:21.069 21:31:41 -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:09:21.069 21:31:41 -- event/cpu_locks.sh@141 -- # killprocess 62914 00:09:21.069 21:31:41 -- common/autotest_common.sh@936 -- # '[' -z 62914 ']' 00:09:21.069 21:31:41 -- common/autotest_common.sh@940 -- # kill -0 62914 00:09:21.069 21:31:41 -- common/autotest_common.sh@941 -- # uname 00:09:21.069 21:31:41 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:09:21.326 21:31:41 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 62914 00:09:21.326 21:31:41 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:09:21.326 21:31:41 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:09:21.326 killing process with pid 62914 00:09:21.326 21:31:41 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 62914' 00:09:21.326 21:31:41 -- common/autotest_common.sh@955 -- # kill 62914 00:09:21.326 21:31:41 -- common/autotest_common.sh@960 -- # wait 62914 00:09:23.256 00:09:23.256 real 0m4.667s 00:09:23.256 user 0m12.660s 00:09:23.256 sys 0m0.593s 00:09:23.256 21:31:43 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:23.256 21:31:43 -- common/autotest_common.sh@10 -- # set +x 00:09:23.256 ************************************ 00:09:23.256 END TEST locking_overlapped_coremask 00:09:23.256 ************************************ 00:09:23.516 21:31:43 -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:09:23.516 21:31:43 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:09:23.516 21:31:43 -- 
common/autotest_common.sh@1093 -- # xtrace_disable 00:09:23.516 21:31:43 -- common/autotest_common.sh@10 -- # set +x 00:09:23.516 ************************************ 00:09:23.516 START TEST locking_overlapped_coremask_via_rpc 00:09:23.516 ************************************ 00:09:23.516 21:31:43 -- common/autotest_common.sh@1114 -- # locking_overlapped_coremask_via_rpc 00:09:23.516 21:31:43 -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=63009 00:09:23.516 21:31:43 -- event/cpu_locks.sh@149 -- # waitforlisten 63009 /var/tmp/spdk.sock 00:09:23.516 21:31:43 -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:09:23.516 21:31:43 -- common/autotest_common.sh@829 -- # '[' -z 63009 ']' 00:09:23.516 21:31:43 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:23.516 21:31:43 -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:23.516 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:23.516 21:31:43 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:23.516 21:31:43 -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:23.516 21:31:43 -- common/autotest_common.sh@10 -- # set +x 00:09:23.516 [2024-12-06 21:31:43.877753] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:09:23.516 [2024-12-06 21:31:43.877919] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63009 ] 00:09:23.775 [2024-12-06 21:31:44.048245] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:09:23.775 [2024-12-06 21:31:44.048300] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:23.775 [2024-12-06 21:31:44.230702] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:09:23.775 [2024-12-06 21:31:44.231098] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:23.775 [2024-12-06 21:31:44.231909] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:09:23.775 [2024-12-06 21:31:44.231937] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:25.151 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:09:25.151 21:31:45 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:25.151 21:31:45 -- common/autotest_common.sh@862 -- # return 0 00:09:25.151 21:31:45 -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=63029 00:09:25.151 21:31:45 -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:09:25.151 21:31:45 -- event/cpu_locks.sh@153 -- # waitforlisten 63029 /var/tmp/spdk2.sock 00:09:25.151 21:31:45 -- common/autotest_common.sh@829 -- # '[' -z 63029 ']' 00:09:25.151 21:31:45 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:09:25.151 21:31:45 -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:25.151 21:31:45 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
00:09:25.151 21:31:45 -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:25.151 21:31:45 -- common/autotest_common.sh@10 -- # set +x 00:09:25.151 [2024-12-06 21:31:45.604057] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:09:25.151 [2024-12-06 21:31:45.604186] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63029 ] 00:09:25.409 [2024-12-06 21:31:45.776889] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:09:25.409 [2024-12-06 21:31:45.776949] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:25.668 [2024-12-06 21:31:46.162167] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:09:25.668 [2024-12-06 21:31:46.162694] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:09:25.668 [2024-12-06 21:31:46.163041] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:09:25.668 [2024-12-06 21:31:46.163070] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:09:27.569 21:31:48 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:27.569 21:31:48 -- common/autotest_common.sh@862 -- # return 0 00:09:27.569 21:31:48 -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:09:27.569 21:31:48 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:27.569 21:31:48 -- common/autotest_common.sh@10 -- # set +x 00:09:27.569 21:31:48 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:27.569 21:31:48 -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:09:27.569 21:31:48 -- common/autotest_common.sh@650 -- # local es=0 00:09:27.569 21:31:48 -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:09:27.569 21:31:48 -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:09:27.828 21:31:48 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:27.828 21:31:48 -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:09:27.828 21:31:48 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:27.828 21:31:48 -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:09:27.828 21:31:48 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:27.828 21:31:48 -- common/autotest_common.sh@10 -- # set +x 00:09:27.828 [2024-12-06 21:31:48.071677] app.c: 665:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 63009 has claimed it. 00:09:27.828 request: 00:09:27.828 { 00:09:27.828 "method": "framework_enable_cpumask_locks", 00:09:27.828 "req_id": 1 00:09:27.828 } 00:09:27.828 Got JSON-RPC error response 00:09:27.828 response: 00:09:27.828 { 00:09:27.828 "code": -32603, 00:09:27.828 "message": "Failed to claim CPU core: 2" 00:09:27.828 } 00:09:27.828 21:31:48 -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:09:27.828 21:31:48 -- common/autotest_common.sh@653 -- # es=1 00:09:27.828 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:09:27.828 21:31:48 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:09:27.828 21:31:48 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:09:27.828 21:31:48 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:09:27.828 21:31:48 -- event/cpu_locks.sh@158 -- # waitforlisten 63009 /var/tmp/spdk.sock 00:09:27.828 21:31:48 -- common/autotest_common.sh@829 -- # '[' -z 63009 ']' 00:09:27.828 21:31:48 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:27.828 21:31:48 -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:27.828 21:31:48 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:27.828 21:31:48 -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:27.828 21:31:48 -- common/autotest_common.sh@10 -- # set +x 00:09:28.087 21:31:48 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:28.087 21:31:48 -- common/autotest_common.sh@862 -- # return 0 00:09:28.087 21:31:48 -- event/cpu_locks.sh@159 -- # waitforlisten 63029 /var/tmp/spdk2.sock 00:09:28.087 21:31:48 -- common/autotest_common.sh@829 -- # '[' -z 63029 ']' 00:09:28.087 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:09:28.087 21:31:48 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:09:28.087 21:31:48 -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:28.087 21:31:48 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:09:28.087 21:31:48 -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:28.087 21:31:48 -- common/autotest_common.sh@10 -- # set +x 00:09:28.087 ************************************ 00:09:28.087 END TEST locking_overlapped_coremask_via_rpc 00:09:28.087 ************************************ 00:09:28.087 21:31:48 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:28.087 21:31:48 -- common/autotest_common.sh@862 -- # return 0 00:09:28.087 21:31:48 -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:09:28.087 21:31:48 -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:09:28.087 21:31:48 -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:09:28.087 21:31:48 -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:09:28.087 00:09:28.087 real 0m4.757s 00:09:28.087 user 0m1.938s 00:09:28.087 sys 0m0.265s 00:09:28.087 21:31:48 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:28.087 21:31:48 -- common/autotest_common.sh@10 -- # set +x 00:09:28.346 21:31:48 -- event/cpu_locks.sh@174 -- # cleanup 00:09:28.346 21:31:48 -- event/cpu_locks.sh@15 -- # [[ -z 63009 ]] 00:09:28.346 21:31:48 -- event/cpu_locks.sh@15 -- # killprocess 63009 00:09:28.346 21:31:48 -- common/autotest_common.sh@936 -- # '[' -z 63009 ']' 00:09:28.346 21:31:48 -- common/autotest_common.sh@940 -- # kill -0 63009 00:09:28.346 21:31:48 -- common/autotest_common.sh@941 -- # uname 00:09:28.346 21:31:48 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:09:28.346 21:31:48 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 63009 00:09:28.346 killing process with pid 63009 00:09:28.346 21:31:48 -- common/autotest_common.sh@942 -- # 
process_name=reactor_0 00:09:28.346 21:31:48 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:09:28.346 21:31:48 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 63009' 00:09:28.346 21:31:48 -- common/autotest_common.sh@955 -- # kill 63009 00:09:28.346 21:31:48 -- common/autotest_common.sh@960 -- # wait 63009 00:09:30.247 21:31:50 -- event/cpu_locks.sh@16 -- # [[ -z 63029 ]] 00:09:30.247 21:31:50 -- event/cpu_locks.sh@16 -- # killprocess 63029 00:09:30.247 21:31:50 -- common/autotest_common.sh@936 -- # '[' -z 63029 ']' 00:09:30.247 21:31:50 -- common/autotest_common.sh@940 -- # kill -0 63029 00:09:30.247 21:31:50 -- common/autotest_common.sh@941 -- # uname 00:09:30.506 21:31:50 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:09:30.506 21:31:50 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 63029 00:09:30.506 killing process with pid 63029 00:09:30.506 21:31:50 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:09:30.506 21:31:50 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:09:30.506 21:31:50 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 63029' 00:09:30.506 21:31:50 -- common/autotest_common.sh@955 -- # kill 63029 00:09:30.506 21:31:50 -- common/autotest_common.sh@960 -- # wait 63029 00:09:32.407 21:31:52 -- event/cpu_locks.sh@18 -- # rm -f 00:09:32.408 21:31:52 -- event/cpu_locks.sh@1 -- # cleanup 00:09:32.408 21:31:52 -- event/cpu_locks.sh@15 -- # [[ -z 63009 ]] 00:09:32.408 21:31:52 -- event/cpu_locks.sh@15 -- # killprocess 63009 00:09:32.408 21:31:52 -- common/autotest_common.sh@936 -- # '[' -z 63009 ']' 00:09:32.408 21:31:52 -- common/autotest_common.sh@940 -- # kill -0 63009 00:09:32.408 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 940: kill: (63009) - No such process 00:09:32.408 Process with pid 63009 is not found 00:09:32.408 21:31:52 -- common/autotest_common.sh@963 -- # echo 'Process with pid 63009 is not found' 00:09:32.408 21:31:52 -- event/cpu_locks.sh@16 -- # [[ -z 63029 ]] 00:09:32.408 21:31:52 -- event/cpu_locks.sh@16 -- # killprocess 63029 00:09:32.408 21:31:52 -- common/autotest_common.sh@936 -- # '[' -z 63029 ']' 00:09:32.408 21:31:52 -- common/autotest_common.sh@940 -- # kill -0 63029 00:09:32.408 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 940: kill: (63029) - No such process 00:09:32.408 Process with pid 63029 is not found 00:09:32.408 21:31:52 -- common/autotest_common.sh@963 -- # echo 'Process with pid 63029 is not found' 00:09:32.408 21:31:52 -- event/cpu_locks.sh@18 -- # rm -f 00:09:32.408 ************************************ 00:09:32.408 END TEST cpu_locks 00:09:32.408 ************************************ 00:09:32.408 00:09:32.408 real 0m49.530s 00:09:32.408 user 1m26.005s 00:09:32.408 sys 0m6.923s 00:09:32.408 21:31:52 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:32.408 21:31:52 -- common/autotest_common.sh@10 -- # set +x 00:09:32.408 ************************************ 00:09:32.408 END TEST event 00:09:32.408 ************************************ 00:09:32.408 00:09:32.408 real 1m20.701s 00:09:32.408 user 2m25.086s 00:09:32.408 sys 0m10.835s 00:09:32.408 21:31:52 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:32.408 21:31:52 -- common/autotest_common.sh@10 -- # set +x 00:09:32.408 21:31:52 -- spdk/autotest.sh@175 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:09:32.408 21:31:52 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 
00:09:32.408 21:31:52 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:32.408 21:31:52 -- common/autotest_common.sh@10 -- # set +x 00:09:32.408 ************************************ 00:09:32.408 START TEST thread 00:09:32.408 ************************************ 00:09:32.408 21:31:52 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:09:32.666 * Looking for test storage... 00:09:32.667 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:09:32.667 21:31:52 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:09:32.667 21:31:52 -- common/autotest_common.sh@1690 -- # lcov --version 00:09:32.667 21:31:52 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:09:32.667 21:31:53 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:09:32.667 21:31:53 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:09:32.667 21:31:53 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:09:32.667 21:31:53 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:09:32.667 21:31:53 -- scripts/common.sh@335 -- # IFS=.-: 00:09:32.667 21:31:53 -- scripts/common.sh@335 -- # read -ra ver1 00:09:32.667 21:31:53 -- scripts/common.sh@336 -- # IFS=.-: 00:09:32.667 21:31:53 -- scripts/common.sh@336 -- # read -ra ver2 00:09:32.667 21:31:53 -- scripts/common.sh@337 -- # local 'op=<' 00:09:32.667 21:31:53 -- scripts/common.sh@339 -- # ver1_l=2 00:09:32.667 21:31:53 -- scripts/common.sh@340 -- # ver2_l=1 00:09:32.667 21:31:53 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:09:32.667 21:31:53 -- scripts/common.sh@343 -- # case "$op" in 00:09:32.667 21:31:53 -- scripts/common.sh@344 -- # : 1 00:09:32.667 21:31:53 -- scripts/common.sh@363 -- # (( v = 0 )) 00:09:32.667 21:31:53 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:32.667 21:31:53 -- scripts/common.sh@364 -- # decimal 1 00:09:32.667 21:31:53 -- scripts/common.sh@352 -- # local d=1 00:09:32.667 21:31:53 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:32.667 21:31:53 -- scripts/common.sh@354 -- # echo 1 00:09:32.667 21:31:53 -- scripts/common.sh@364 -- # ver1[v]=1 00:09:32.667 21:31:53 -- scripts/common.sh@365 -- # decimal 2 00:09:32.667 21:31:53 -- scripts/common.sh@352 -- # local d=2 00:09:32.667 21:31:53 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:32.667 21:31:53 -- scripts/common.sh@354 -- # echo 2 00:09:32.667 21:31:53 -- scripts/common.sh@365 -- # ver2[v]=2 00:09:32.667 21:31:53 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:09:32.667 21:31:53 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:09:32.667 21:31:53 -- scripts/common.sh@367 -- # return 0 00:09:32.667 21:31:53 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:32.667 21:31:53 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:09:32.667 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:32.667 --rc genhtml_branch_coverage=1 00:09:32.667 --rc genhtml_function_coverage=1 00:09:32.667 --rc genhtml_legend=1 00:09:32.667 --rc geninfo_all_blocks=1 00:09:32.667 --rc geninfo_unexecuted_blocks=1 00:09:32.667 00:09:32.667 ' 00:09:32.667 21:31:53 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:09:32.667 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:32.667 --rc genhtml_branch_coverage=1 00:09:32.667 --rc genhtml_function_coverage=1 00:09:32.667 --rc genhtml_legend=1 00:09:32.667 --rc geninfo_all_blocks=1 00:09:32.667 --rc geninfo_unexecuted_blocks=1 00:09:32.667 00:09:32.667 ' 00:09:32.667 21:31:53 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:09:32.667 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:32.667 --rc genhtml_branch_coverage=1 00:09:32.667 --rc genhtml_function_coverage=1 00:09:32.667 --rc genhtml_legend=1 00:09:32.667 --rc geninfo_all_blocks=1 00:09:32.667 --rc geninfo_unexecuted_blocks=1 00:09:32.667 00:09:32.667 ' 00:09:32.667 21:31:53 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:09:32.667 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:32.667 --rc genhtml_branch_coverage=1 00:09:32.667 --rc genhtml_function_coverage=1 00:09:32.667 --rc genhtml_legend=1 00:09:32.667 --rc geninfo_all_blocks=1 00:09:32.667 --rc geninfo_unexecuted_blocks=1 00:09:32.667 00:09:32.667 ' 00:09:32.667 21:31:53 -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:09:32.667 21:31:53 -- common/autotest_common.sh@1087 -- # '[' 8 -le 1 ']' 00:09:32.667 21:31:53 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:32.667 21:31:53 -- common/autotest_common.sh@10 -- # set +x 00:09:32.667 ************************************ 00:09:32.667 START TEST thread_poller_perf 00:09:32.667 ************************************ 00:09:32.667 21:31:53 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:09:32.667 [2024-12-06 21:31:53.079076] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:09:32.667 [2024-12-06 21:31:53.079250] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63220 ] 00:09:32.925 [2024-12-06 21:31:53.249066] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:33.184 [2024-12-06 21:31:53.465654] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:33.184 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:09:34.559 [2024-12-06T21:31:55.056Z] ====================================== 00:09:34.559 [2024-12-06T21:31:55.056Z] busy:2210958782 (cyc) 00:09:34.559 [2024-12-06T21:31:55.056Z] total_run_count: 327000 00:09:34.559 [2024-12-06T21:31:55.056Z] tsc_hz: 2200000000 (cyc) 00:09:34.559 [2024-12-06T21:31:55.056Z] ====================================== 00:09:34.559 [2024-12-06T21:31:55.056Z] poller_cost: 6761 (cyc), 3073 (nsec) 00:09:34.559 ************************************ 00:09:34.559 END TEST thread_poller_perf 00:09:34.559 ************************************ 00:09:34.559 00:09:34.559 real 0m1.788s 00:09:34.559 user 0m1.582s 00:09:34.559 sys 0m0.104s 00:09:34.559 21:31:54 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:34.559 21:31:54 -- common/autotest_common.sh@10 -- # set +x 00:09:34.559 21:31:54 -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:09:34.559 21:31:54 -- common/autotest_common.sh@1087 -- # '[' 8 -le 1 ']' 00:09:34.559 21:31:54 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:34.559 21:31:54 -- common/autotest_common.sh@10 -- # set +x 00:09:34.559 ************************************ 00:09:34.559 START TEST thread_poller_perf 00:09:34.559 ************************************ 00:09:34.559 21:31:54 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:09:34.559 [2024-12-06 21:31:54.915838] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:09:34.559 [2024-12-06 21:31:54.915957] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63262 ] 00:09:34.817 [2024-12-06 21:31:55.070806] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:34.818 [2024-12-06 21:31:55.235937] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:34.818 Running 1000 pollers for 1 seconds with 0 microseconds period. 
00:09:36.192 [2024-12-06T21:31:56.689Z] ======================================
00:09:36.192 [2024-12-06T21:31:56.689Z] busy:2204678140 (cyc)
00:09:36.192 [2024-12-06T21:31:56.689Z] total_run_count: 4296000
00:09:36.192 [2024-12-06T21:31:56.689Z] tsc_hz: 2200000000 (cyc)
00:09:36.192 [2024-12-06T21:31:56.689Z] ======================================
00:09:36.192 [2024-12-06T21:31:56.689Z] poller_cost: 513 (cyc), 233 (nsec)
00:09:36.192
00:09:36.192 real 0m1.698s
00:09:36.192 user 0m1.515s
00:09:36.192 sys 0m0.083s
00:09:36.192 21:31:56 -- common/autotest_common.sh@1115 -- # xtrace_disable
00:09:36.192 21:31:56 -- common/autotest_common.sh@10 -- # set +x
00:09:36.192 ************************************
00:09:36.192 END TEST thread_poller_perf
00:09:36.192 ************************************
00:09:36.192 21:31:56 -- thread/thread.sh@17 -- # [[ n != \y ]]
00:09:36.192 21:31:56 -- thread/thread.sh@18 -- # run_test thread_spdk_lock /home/vagrant/spdk_repo/spdk/test/thread/lock/spdk_lock
00:09:36.192 21:31:56 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:09:36.192 21:31:56 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:09:36.192 21:31:56 -- common/autotest_common.sh@10 -- # set +x
00:09:36.192 ************************************
00:09:36.192 START TEST thread_spdk_lock
00:09:36.192 ************************************
00:09:36.192 21:31:56 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/thread/lock/spdk_lock
00:09:36.192 [2024-12-06 21:31:56.669886] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:09:36.192 [2024-12-06 21:31:56.670005] [ DPDK EAL parameters: spdk_lock_test --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63298 ]
00:09:36.450 [2024-12-06 21:31:56.824852] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2
00:09:36.708 [2024-12-06 21:31:56.995753] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:09:36.708 [2024-12-06 21:31:56.995764] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:09:37.275 [2024-12-06 21:31:57.526208] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c: 957:thread_execute_poller: *ERROR*: unrecoverable spinlock error 7: Lock(s) held while SPDK thread going off CPU (thread->lock_count == 0)
00:09:37.275 [2024-12-06 21:31:57.526354] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3064:spdk_spin_lock: *ERROR*: unrecoverable spinlock error 2: Deadlock detected (thread != sspin->thread)
00:09:37.275 [2024-12-06 21:31:57.526378] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3019:sspin_stacks_print: *ERROR*: spinlock 0x6114371b75c0
00:09:37.275 [2024-12-06 21:31:57.534557] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c: 852:msg_queue_run_batch: *ERROR*: unrecoverable spinlock error 7: Lock(s) held while SPDK thread going off CPU (thread->lock_count == 0)
00:09:37.275 [2024-12-06 21:31:57.534694] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:1018:thread_execute_timed_poller: *ERROR*: unrecoverable spinlock error 7: Lock(s) held while SPDK thread going off CPU (thread->lock_count == 0)
00:09:37.275 [2024-12-06 21:31:57.534730] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c: 852:msg_queue_run_batch: *ERROR*: unrecoverable spinlock error 7: Lock(s) held while SPDK thread going off CPU (thread->lock_count == 0)
00:09:37.533 Starting test contend
00:09:37.533     Worker    Delay  Wait us  Hold us  Total us
00:09:37.533          0        3   124032   198170    322203
00:09:37.533          1        5    57408   300672    358080
00:09:37.533 PASS test contend
00:09:37.533 Starting test hold_by_poller
00:09:37.533 PASS test hold_by_poller
00:09:37.533 Starting test hold_by_message
00:09:37.533 PASS test hold_by_message
00:09:37.533 /home/vagrant/spdk_repo/spdk/test/thread/lock/spdk_lock summary:
00:09:37.533 100014 assertions passed
00:09:37.533 0 assertions failed
00:09:37.533
00:09:37.533 real 0m1.279s
00:09:37.533 user 0m1.619s
00:09:37.533 sys 0m0.100s
00:09:37.533 21:31:57 -- common/autotest_common.sh@1115 -- # xtrace_disable
00:09:37.533 21:31:57 -- common/autotest_common.sh@10 -- # set +x
00:09:37.533 ************************************
00:09:37.533 END TEST thread_spdk_lock
00:09:37.533 ************************************
00:09:37.533
00:09:37.533 real 0m5.112s
00:09:37.533 user 0m4.869s
00:09:37.533 sys 0m0.473s
00:09:37.533 21:31:57 -- common/autotest_common.sh@1115 -- # xtrace_disable
00:09:37.533 21:31:57 -- common/autotest_common.sh@10 -- # set +x
00:09:37.533 ************************************
00:09:37.533 END TEST thread
00:09:37.533 ************************************
00:09:37.533 21:31:58 -- spdk/autotest.sh@176 -- # run_test accel /home/vagrant/spdk_repo/spdk/test/accel/accel.sh
00:09:37.533 21:31:58 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:09:37.533 21:31:58 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:09:37.533 21:31:58 -- common/autotest_common.sh@10 -- # set +x
00:09:37.533 ************************************
00:09:37.533 START TEST accel
00:09:37.533 ************************************
00:09:37.533 21:31:58 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/accel/accel.sh
00:09:37.792 * Looking for test storage...
00:09:37.792 * Found test storage at /home/vagrant/spdk_repo/spdk/test/accel
00:09:37.792 21:31:58 -- common/autotest_common.sh@1689 -- # [[ y == y ]]
00:09:37.792 21:31:58 -- common/autotest_common.sh@1690 -- # lcov --version
00:09:37.792 21:31:58 -- common/autotest_common.sh@1690 -- # awk '{print $NF}'
00:09:37.792 21:31:58 -- common/autotest_common.sh@1690 -- # lt 1.15 2
00:09:37.792 21:31:58 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2
00:09:37.792 21:31:58 -- scripts/common.sh@332 -- # local ver1 ver1_l
00:09:37.792 21:31:58 -- scripts/common.sh@333 -- # local ver2 ver2_l
00:09:37.792 21:31:58 -- scripts/common.sh@335 -- # IFS=.-:
00:09:37.792 21:31:58 -- scripts/common.sh@335 -- # read -ra ver1
00:09:37.792 21:31:58 -- scripts/common.sh@336 -- # IFS=.-:
00:09:37.792 21:31:58 -- scripts/common.sh@336 -- # read -ra ver2
00:09:37.792 21:31:58 -- scripts/common.sh@337 -- # local 'op=<'
00:09:37.792 21:31:58 -- scripts/common.sh@339 -- # ver1_l=2
00:09:37.792 21:31:58 -- scripts/common.sh@340 -- # ver2_l=1
00:09:37.792 21:31:58 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v
00:09:37.792 21:31:58 -- scripts/common.sh@343 -- # case "$op" in
00:09:37.792 21:31:58 -- scripts/common.sh@344 -- # : 1
00:09:37.792 21:31:58 -- scripts/common.sh@363 -- # (( v = 0 ))
00:09:37.792 21:31:58 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ?
ver1_l : ver2_l) )) 00:09:37.792 21:31:58 -- scripts/common.sh@364 -- # decimal 1 00:09:37.792 21:31:58 -- scripts/common.sh@352 -- # local d=1 00:09:37.792 21:31:58 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:37.792 21:31:58 -- scripts/common.sh@354 -- # echo 1 00:09:37.792 21:31:58 -- scripts/common.sh@364 -- # ver1[v]=1 00:09:37.792 21:31:58 -- scripts/common.sh@365 -- # decimal 2 00:09:37.792 21:31:58 -- scripts/common.sh@352 -- # local d=2 00:09:37.792 21:31:58 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:37.792 21:31:58 -- scripts/common.sh@354 -- # echo 2 00:09:37.792 21:31:58 -- scripts/common.sh@365 -- # ver2[v]=2 00:09:37.792 21:31:58 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:09:37.792 21:31:58 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:09:37.792 21:31:58 -- scripts/common.sh@367 -- # return 0 00:09:37.792 21:31:58 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:37.792 21:31:58 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:09:37.792 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:37.792 --rc genhtml_branch_coverage=1 00:09:37.792 --rc genhtml_function_coverage=1 00:09:37.792 --rc genhtml_legend=1 00:09:37.792 --rc geninfo_all_blocks=1 00:09:37.792 --rc geninfo_unexecuted_blocks=1 00:09:37.792 00:09:37.792 ' 00:09:37.792 21:31:58 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:09:37.792 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:37.792 --rc genhtml_branch_coverage=1 00:09:37.792 --rc genhtml_function_coverage=1 00:09:37.792 --rc genhtml_legend=1 00:09:37.792 --rc geninfo_all_blocks=1 00:09:37.792 --rc geninfo_unexecuted_blocks=1 00:09:37.792 00:09:37.792 ' 00:09:37.792 21:31:58 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:09:37.792 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:37.792 --rc genhtml_branch_coverage=1 00:09:37.792 --rc genhtml_function_coverage=1 00:09:37.792 --rc genhtml_legend=1 00:09:37.792 --rc geninfo_all_blocks=1 00:09:37.792 --rc geninfo_unexecuted_blocks=1 00:09:37.792 00:09:37.792 ' 00:09:37.792 21:31:58 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:09:37.792 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:37.792 --rc genhtml_branch_coverage=1 00:09:37.792 --rc genhtml_function_coverage=1 00:09:37.792 --rc genhtml_legend=1 00:09:37.792 --rc geninfo_all_blocks=1 00:09:37.792 --rc geninfo_unexecuted_blocks=1 00:09:37.792 00:09:37.792 ' 00:09:37.792 21:31:58 -- accel/accel.sh@73 -- # declare -A expected_opcs 00:09:37.792 21:31:58 -- accel/accel.sh@74 -- # get_expected_opcs 00:09:37.792 21:31:58 -- accel/accel.sh@57 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:09:37.792 21:31:58 -- accel/accel.sh@59 -- # spdk_tgt_pid=63382 00:09:37.792 21:31:58 -- accel/accel.sh@60 -- # waitforlisten 63382 00:09:37.792 21:31:58 -- common/autotest_common.sh@829 -- # '[' -z 63382 ']' 00:09:37.792 21:31:58 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:37.792 21:31:58 -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:37.792 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:37.792 21:31:58 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
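The cmp_versions trace a few entries up (the lt 1.15 2 check that picks the lcov flags) splits each version string on IFS=.-: and compares the pieces numerically. A minimal standalone sketch of the same split-and-compare idea, assuming nothing beyond plain bash; the helper name version_lt and the sample inputs are illustrative, not the suite's own:

    # Sketch of the split-and-compare idea from scripts/common.sh's cmp_versions:
    # split both versions on . - : and compare component by component.
    version_lt() {
        local IFS='.-:'
        local -a a b
        read -ra a <<< "$1"
        read -ra b <<< "$2"
        local i x y
        for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
            x=${a[i]:-0} y=${b[i]:-0}
            ((x < y)) && return 0   # first differing component decides
            ((x > y)) && return 1
        done
        return 1                    # equal is not less-than
    }
    version_lt 1.15 2 && echo "1.15 < 2"   # same verdict as the 'lt 1.15 2' check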
00:09:37.792 21:31:58 -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:37.792 21:31:58 -- common/autotest_common.sh@10 -- # set +x 00:09:37.792 21:31:58 -- accel/accel.sh@58 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:09:37.792 21:31:58 -- accel/accel.sh@58 -- # build_accel_config 00:09:37.792 21:31:58 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:09:37.792 21:31:58 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:09:37.792 21:31:58 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:09:37.792 21:31:58 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:09:37.792 21:31:58 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:09:37.792 21:31:58 -- accel/accel.sh@41 -- # local IFS=, 00:09:37.792 21:31:58 -- accel/accel.sh@42 -- # jq -r . 00:09:37.792 [2024-12-06 21:31:58.281893] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:09:37.792 [2024-12-06 21:31:58.282074] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63382 ] 00:09:38.051 [2024-12-06 21:31:58.452169] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:38.309 [2024-12-06 21:31:58.624265] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:09:38.309 [2024-12-06 21:31:58.624582] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:39.684 21:31:59 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:39.684 21:31:59 -- common/autotest_common.sh@862 -- # return 0 00:09:39.684 21:31:59 -- accel/accel.sh@62 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:09:39.684 21:31:59 -- accel/accel.sh@62 -- # rpc_cmd accel_get_opc_assignments 00:09:39.684 21:31:59 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:39.684 21:31:59 -- common/autotest_common.sh@10 -- # set +x 00:09:39.684 21:31:59 -- accel/accel.sh@62 -- # jq -r '. 
| to_entries | map("\(.key)=\(.value)") | .[]' 00:09:39.684 21:31:59 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:39.684 21:31:59 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:09:39.684 21:31:59 -- accel/accel.sh@64 -- # IFS== 00:09:39.684 21:31:59 -- accel/accel.sh@64 -- # read -r opc module 00:09:39.684 21:31:59 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:09:39.684 21:31:59 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:09:39.684 21:31:59 -- accel/accel.sh@64 -- # IFS== 00:09:39.684 21:31:59 -- accel/accel.sh@64 -- # read -r opc module 00:09:39.684 21:31:59 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:09:39.684 21:31:59 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:09:39.684 21:31:59 -- accel/accel.sh@64 -- # IFS== 00:09:39.684 21:31:59 -- accel/accel.sh@64 -- # read -r opc module 00:09:39.684 21:31:59 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:09:39.684 21:31:59 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:09:39.684 21:31:59 -- accel/accel.sh@64 -- # IFS== 00:09:39.684 21:31:59 -- accel/accel.sh@64 -- # read -r opc module 00:09:39.684 21:31:59 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:09:39.684 21:31:59 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:09:39.684 21:31:59 -- accel/accel.sh@64 -- # IFS== 00:09:39.684 21:31:59 -- accel/accel.sh@64 -- # read -r opc module 00:09:39.684 21:31:59 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:09:39.684 21:31:59 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:09:39.684 21:31:59 -- accel/accel.sh@64 -- # IFS== 00:09:39.684 21:31:59 -- accel/accel.sh@64 -- # read -r opc module 00:09:39.684 21:31:59 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:09:39.684 21:31:59 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:09:39.684 21:31:59 -- accel/accel.sh@64 -- # IFS== 00:09:39.684 21:31:59 -- accel/accel.sh@64 -- # read -r opc module 00:09:39.684 21:31:59 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:09:39.684 21:31:59 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:09:39.684 21:31:59 -- accel/accel.sh@64 -- # IFS== 00:09:39.684 21:31:59 -- accel/accel.sh@64 -- # read -r opc module 00:09:39.684 21:31:59 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:09:39.684 21:31:59 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:09:39.684 21:31:59 -- accel/accel.sh@64 -- # IFS== 00:09:39.684 21:31:59 -- accel/accel.sh@64 -- # read -r opc module 00:09:39.684 21:31:59 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:09:39.684 21:31:59 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:09:39.684 21:31:59 -- accel/accel.sh@64 -- # IFS== 00:09:39.684 21:31:59 -- accel/accel.sh@64 -- # read -r opc module 00:09:39.684 21:31:59 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:09:39.684 21:31:59 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:09:39.684 21:31:59 -- accel/accel.sh@64 -- # IFS== 00:09:39.684 21:31:59 -- accel/accel.sh@64 -- # read -r opc module 00:09:39.684 21:31:59 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:09:39.684 21:31:59 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:09:39.684 21:31:59 -- accel/accel.sh@64 -- # IFS== 00:09:39.684 21:31:59 -- accel/accel.sh@64 -- # read -r opc module 00:09:39.684 21:31:59 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:09:39.684 21:31:59 -- accel/accel.sh@63 -- # for opc_opt in 
"${exp_opcs[@]}" 00:09:39.684 21:31:59 -- accel/accel.sh@64 -- # IFS== 00:09:39.684 21:31:59 -- accel/accel.sh@64 -- # read -r opc module 00:09:39.684 21:31:59 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:09:39.684 21:31:59 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:09:39.684 21:31:59 -- accel/accel.sh@64 -- # IFS== 00:09:39.684 21:31:59 -- accel/accel.sh@64 -- # read -r opc module 00:09:39.684 21:31:59 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:09:39.684 21:31:59 -- accel/accel.sh@67 -- # killprocess 63382 00:09:39.684 21:31:59 -- common/autotest_common.sh@936 -- # '[' -z 63382 ']' 00:09:39.684 21:31:59 -- common/autotest_common.sh@940 -- # kill -0 63382 00:09:39.684 21:31:59 -- common/autotest_common.sh@941 -- # uname 00:09:39.684 21:31:59 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:09:39.684 21:31:59 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 63382 00:09:39.684 21:31:59 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:09:39.684 21:31:59 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:09:39.684 killing process with pid 63382 00:09:39.684 21:31:59 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 63382' 00:09:39.684 21:31:59 -- common/autotest_common.sh@955 -- # kill 63382 00:09:39.684 21:31:59 -- common/autotest_common.sh@960 -- # wait 63382 00:09:41.587 21:32:01 -- accel/accel.sh@68 -- # trap - ERR 00:09:41.587 21:32:01 -- accel/accel.sh@81 -- # run_test accel_help accel_perf -h 00:09:41.587 21:32:01 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:09:41.587 21:32:01 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:41.587 21:32:01 -- common/autotest_common.sh@10 -- # set +x 00:09:41.587 21:32:01 -- common/autotest_common.sh@1114 -- # accel_perf -h 00:09:41.587 21:32:01 -- accel/accel.sh@12 -- # build_accel_config 00:09:41.587 21:32:01 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:09:41.587 21:32:01 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:09:41.587 21:32:01 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:09:41.587 21:32:01 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:09:41.587 21:32:01 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:09:41.587 21:32:01 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:09:41.587 21:32:01 -- accel/accel.sh@41 -- # local IFS=, 00:09:41.587 21:32:01 -- accel/accel.sh@42 -- # jq -r . 
00:09:41.587 21:32:02 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:41.587 21:32:02 -- common/autotest_common.sh@10 -- # set +x 00:09:41.587 21:32:02 -- accel/accel.sh@83 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:09:41.587 21:32:02 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:09:41.587 21:32:02 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:41.587 21:32:02 -- common/autotest_common.sh@10 -- # set +x 00:09:41.587 ************************************ 00:09:41.587 START TEST accel_missing_filename 00:09:41.587 ************************************ 00:09:41.587 21:32:02 -- common/autotest_common.sh@1114 -- # NOT accel_perf -t 1 -w compress 00:09:41.587 21:32:02 -- common/autotest_common.sh@650 -- # local es=0 00:09:41.587 21:32:02 -- common/autotest_common.sh@652 -- # valid_exec_arg accel_perf -t 1 -w compress 00:09:41.587 21:32:02 -- common/autotest_common.sh@638 -- # local arg=accel_perf 00:09:41.587 21:32:02 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:41.587 21:32:02 -- common/autotest_common.sh@642 -- # type -t accel_perf 00:09:41.587 21:32:02 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:41.587 21:32:02 -- common/autotest_common.sh@653 -- # accel_perf -t 1 -w compress 00:09:41.587 21:32:02 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:09:41.587 21:32:02 -- accel/accel.sh@12 -- # build_accel_config 00:09:41.587 21:32:02 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:09:41.587 21:32:02 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:09:41.587 21:32:02 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:09:41.587 21:32:02 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:09:41.587 21:32:02 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:09:41.587 21:32:02 -- accel/accel.sh@41 -- # local IFS=, 00:09:41.587 21:32:02 -- accel/accel.sh@42 -- # jq -r . 00:09:41.845 [2024-12-06 21:32:02.098688] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:09:41.845 [2024-12-06 21:32:02.098895] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63464 ] 00:09:41.845 [2024-12-06 21:32:02.270326] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:42.121 [2024-12-06 21:32:02.452344] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:42.396 [2024-12-06 21:32:02.616025] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:09:42.669 [2024-12-06 21:32:03.044267] accel_perf.c:1385:main: *ERROR*: ERROR starting application 00:09:42.928 A filename is required. 
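That abort is the expected outcome: a compress workload has no input unless -l names an uncompressed file, and the NOT wrapper only requires a non-zero exit. Roughly, the failing call (minus the -c config-fd plumbing) and a counterpart with -l inferred from the usage text, not run here:

    # Illustrative pair; the first invocation is the one under test.
    /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w compress   # aborts: "A filename is required."
    /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w compress \
        -l /home/vagrant/spdk_repo/spdk/test/accel/bib                        # would supply the required input file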
00:09:43.187 21:32:03 -- common/autotest_common.sh@653 -- # es=234 00:09:43.187 21:32:03 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:09:43.187 21:32:03 -- common/autotest_common.sh@662 -- # es=106 00:09:43.187 21:32:03 -- common/autotest_common.sh@663 -- # case "$es" in 00:09:43.187 21:32:03 -- common/autotest_common.sh@670 -- # es=1 00:09:43.187 21:32:03 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:09:43.187 00:09:43.187 real 0m1.378s 00:09:43.187 user 0m1.114s 00:09:43.187 sys 0m0.172s 00:09:43.187 21:32:03 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:43.187 21:32:03 -- common/autotest_common.sh@10 -- # set +x 00:09:43.187 ************************************ 00:09:43.187 END TEST accel_missing_filename 00:09:43.187 ************************************ 00:09:43.187 21:32:03 -- accel/accel.sh@85 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:09:43.187 21:32:03 -- common/autotest_common.sh@1087 -- # '[' 10 -le 1 ']' 00:09:43.187 21:32:03 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:43.187 21:32:03 -- common/autotest_common.sh@10 -- # set +x 00:09:43.187 ************************************ 00:09:43.187 START TEST accel_compress_verify 00:09:43.187 ************************************ 00:09:43.187 21:32:03 -- common/autotest_common.sh@1114 -- # NOT accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:09:43.187 21:32:03 -- common/autotest_common.sh@650 -- # local es=0 00:09:43.187 21:32:03 -- common/autotest_common.sh@652 -- # valid_exec_arg accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:09:43.187 21:32:03 -- common/autotest_common.sh@638 -- # local arg=accel_perf 00:09:43.187 21:32:03 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:43.187 21:32:03 -- common/autotest_common.sh@642 -- # type -t accel_perf 00:09:43.187 21:32:03 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:43.187 21:32:03 -- common/autotest_common.sh@653 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:09:43.187 21:32:03 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:09:43.187 21:32:03 -- accel/accel.sh@12 -- # build_accel_config 00:09:43.187 21:32:03 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:09:43.187 21:32:03 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:09:43.187 21:32:03 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:09:43.187 21:32:03 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:09:43.187 21:32:03 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:09:43.187 21:32:03 -- accel/accel.sh@41 -- # local IFS=, 00:09:43.187 21:32:03 -- accel/accel.sh@42 -- # jq -r . 00:09:43.187 [2024-12-06 21:32:03.523140] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:09:43.187 [2024-12-06 21:32:03.523318] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63496 ] 00:09:43.446 [2024-12-06 21:32:03.694998] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:43.704 [2024-12-06 21:32:03.956522] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:43.704 [2024-12-06 21:32:04.127543] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:09:44.272 [2024-12-06 21:32:04.548478] accel_perf.c:1385:main: *ERROR*: ERROR starting application 00:09:44.532 00:09:44.532 Compression does not support the verify option, aborting. 00:09:44.532 21:32:04 -- common/autotest_common.sh@653 -- # es=161 00:09:44.532 21:32:04 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:09:44.532 21:32:04 -- common/autotest_common.sh@662 -- # es=33 00:09:44.532 21:32:04 -- common/autotest_common.sh@663 -- # case "$es" in 00:09:44.532 21:32:04 -- common/autotest_common.sh@670 -- # es=1 00:09:44.532 21:32:04 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:09:44.532 00:09:44.532 real 0m1.413s 00:09:44.532 user 0m1.157s 00:09:44.532 sys 0m0.164s 00:09:44.532 21:32:04 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:44.532 ************************************ 00:09:44.532 21:32:04 -- common/autotest_common.sh@10 -- # set +x 00:09:44.532 END TEST accel_compress_verify 00:09:44.532 ************************************ 00:09:44.532 21:32:04 -- accel/accel.sh@87 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:09:44.532 21:32:04 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:09:44.532 21:32:04 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:44.532 21:32:04 -- common/autotest_common.sh@10 -- # set +x 00:09:44.532 ************************************ 00:09:44.532 START TEST accel_wrong_workload 00:09:44.532 ************************************ 00:09:44.532 21:32:04 -- common/autotest_common.sh@1114 -- # NOT accel_perf -t 1 -w foobar 00:09:44.532 21:32:04 -- common/autotest_common.sh@650 -- # local es=0 00:09:44.532 21:32:04 -- common/autotest_common.sh@652 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:09:44.532 21:32:04 -- common/autotest_common.sh@638 -- # local arg=accel_perf 00:09:44.532 21:32:04 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:44.532 21:32:04 -- common/autotest_common.sh@642 -- # type -t accel_perf 00:09:44.532 21:32:04 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:44.532 21:32:04 -- common/autotest_common.sh@653 -- # accel_perf -t 1 -w foobar 00:09:44.532 21:32:04 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:09:44.532 21:32:04 -- accel/accel.sh@12 -- # build_accel_config 00:09:44.532 21:32:04 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:09:44.532 21:32:04 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:09:44.532 21:32:04 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:09:44.532 21:32:04 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:09:44.532 21:32:04 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:09:44.532 21:32:04 -- accel/accel.sh@41 -- # local IFS=, 00:09:44.532 21:32:04 -- accel/accel.sh@42 -- # jq -r . 
00:09:44.532 Unsupported workload type: foobar 00:09:44.532 [2024-12-06 21:32:04.985582] app.c:1292:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:09:44.532 accel_perf options: 00:09:44.532 [-h help message] 00:09:44.532 [-q queue depth per core] 00:09:44.532 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:09:44.532 [-T number of threads per core 00:09:44.532 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:09:44.532 [-t time in seconds] 00:09:44.532 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:09:44.532 [ dif_verify, , dif_generate, dif_generate_copy 00:09:44.532 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:09:44.532 [-l for compress/decompress workloads, name of uncompressed input file 00:09:44.532 [-S for crc32c workload, use this seed value (default 0) 00:09:44.532 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:09:44.532 [-f for fill workload, use this BYTE value (default 255) 00:09:44.532 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:09:44.532 [-y verify result if this switch is on] 00:09:44.532 [-a tasks to allocate per core (default: same value as -q)] 00:09:44.532 Can be used to spread operations across a wider range of memory. 00:09:44.532 21:32:05 -- common/autotest_common.sh@653 -- # es=1 00:09:44.532 21:32:05 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:09:44.532 21:32:05 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:09:44.532 21:32:05 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:09:44.532 00:09:44.532 real 0m0.063s 00:09:44.532 user 0m0.038s 00:09:44.532 sys 0m0.034s 00:09:44.532 21:32:05 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:44.532 21:32:05 -- common/autotest_common.sh@10 -- # set +x 00:09:44.532 ************************************ 00:09:44.532 END TEST accel_wrong_workload 00:09:44.532 ************************************ 00:09:44.800 21:32:05 -- accel/accel.sh@89 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:09:44.800 21:32:05 -- common/autotest_common.sh@1087 -- # '[' 10 -le 1 ']' 00:09:44.800 21:32:05 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:44.800 21:32:05 -- common/autotest_common.sh@10 -- # set +x 00:09:44.800 ************************************ 00:09:44.800 START TEST accel_negative_buffers 00:09:44.800 ************************************ 00:09:44.800 21:32:05 -- common/autotest_common.sh@1114 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:09:44.800 21:32:05 -- common/autotest_common.sh@650 -- # local es=0 00:09:44.800 21:32:05 -- common/autotest_common.sh@652 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:09:44.800 21:32:05 -- common/autotest_common.sh@638 -- # local arg=accel_perf 00:09:44.800 21:32:05 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:44.800 21:32:05 -- common/autotest_common.sh@642 -- # type -t accel_perf 00:09:44.800 21:32:05 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:44.800 21:32:05 -- common/autotest_common.sh@653 -- # accel_perf -t 1 -w xor -y -x -1 00:09:44.801 21:32:05 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x -1 00:09:44.801 21:32:05 -- accel/accel.sh@12 -- # 
build_accel_config 00:09:44.801 21:32:05 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:09:44.801 21:32:05 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:09:44.801 21:32:05 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:09:44.801 21:32:05 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:09:44.801 21:32:05 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:09:44.801 21:32:05 -- accel/accel.sh@41 -- # local IFS=, 00:09:44.801 21:32:05 -- accel/accel.sh@42 -- # jq -r . 00:09:44.801 -x option must be non-negative. 00:09:44.801 [2024-12-06 21:32:05.096429] app.c:1292:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:09:44.801 accel_perf options: 00:09:44.801 [-h help message] 00:09:44.801 [-q queue depth per core] 00:09:44.801 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:09:44.801 [-T number of threads per core 00:09:44.801 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:09:44.801 [-t time in seconds] 00:09:44.801 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:09:44.801 [ dif_verify, , dif_generate, dif_generate_copy 00:09:44.801 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:09:44.801 [-l for compress/decompress workloads, name of uncompressed input file 00:09:44.801 [-S for crc32c workload, use this seed value (default 0) 00:09:44.801 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:09:44.801 [-f for fill workload, use this BYTE value (default 255) 00:09:44.801 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:09:44.801 [-y verify result if this switch is on] 00:09:44.801 [-a tasks to allocate per core (default: same value as -q)] 00:09:44.801 Can be used to spread operations across a wider range of memory. 
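Same pattern as the foobar workload above: the usage text says -x sets the number of xor source buffers (minimum 2), so -x -1 is rejected during argument parsing and accel_perf exits non-zero, which is all the NOT wrapper checks. For contrast, the -x 3 variant below is illustrative, built only from flags shown in that usage text:

    # The rejected call from this test, and a variant the parser would accept.
    /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w xor -y -x -1   # rejected: -x must be non-negative
    /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w xor -y -x 3    # xor across 3 source buffers, verified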
00:09:44.801 21:32:05 -- common/autotest_common.sh@653 -- # es=1 00:09:44.801 21:32:05 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:09:44.802 21:32:05 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:09:44.802 21:32:05 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:09:44.802 00:09:44.802 real 0m0.064s 00:09:44.802 user 0m0.033s 00:09:44.802 sys 0m0.039s 00:09:44.802 21:32:05 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:44.802 ************************************ 00:09:44.802 END TEST accel_negative_buffers 00:09:44.802 21:32:05 -- common/autotest_common.sh@10 -- # set +x 00:09:44.802 ************************************ 00:09:44.802 21:32:05 -- accel/accel.sh@93 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:09:44.802 21:32:05 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:09:44.802 21:32:05 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:44.802 21:32:05 -- common/autotest_common.sh@10 -- # set +x 00:09:44.802 ************************************ 00:09:44.802 START TEST accel_crc32c 00:09:44.802 ************************************ 00:09:44.802 21:32:05 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w crc32c -S 32 -y 00:09:44.802 21:32:05 -- accel/accel.sh@16 -- # local accel_opc 00:09:44.802 21:32:05 -- accel/accel.sh@17 -- # local accel_module 00:09:44.802 21:32:05 -- accel/accel.sh@18 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:09:44.802 21:32:05 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:09:44.802 21:32:05 -- accel/accel.sh@12 -- # build_accel_config 00:09:44.802 21:32:05 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:09:44.802 21:32:05 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:09:44.802 21:32:05 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:09:44.802 21:32:05 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:09:44.802 21:32:05 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:09:44.802 21:32:05 -- accel/accel.sh@41 -- # local IFS=, 00:09:44.802 21:32:05 -- accel/accel.sh@42 -- # jq -r . 00:09:44.802 [2024-12-06 21:32:05.207035] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:09:44.802 [2024-12-06 21:32:05.207149] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63569 ] 00:09:45.062 [2024-12-06 21:32:05.362171] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:45.062 [2024-12-06 21:32:05.529619] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:46.981 21:32:07 -- accel/accel.sh@18 -- # out=' 00:09:46.981 SPDK Configuration: 00:09:46.981 Core mask: 0x1 00:09:46.981 00:09:46.981 Accel Perf Configuration: 00:09:46.981 Workload Type: crc32c 00:09:46.981 CRC-32C seed: 32 00:09:46.981 Transfer size: 4096 bytes 00:09:46.981 Vector count 1 00:09:46.981 Module: software 00:09:46.981 Queue depth: 32 00:09:46.981 Allocate depth: 32 00:09:46.981 # threads/core: 1 00:09:46.981 Run time: 1 seconds 00:09:46.981 Verify: Yes 00:09:46.981 00:09:46.981 Running for 1 seconds... 
00:09:46.981 00:09:46.981 Core,Thread Transfers Bandwidth Failed Miscompares 00:09:46.981 ------------------------------------------------------------------------------------ 00:09:46.981 0,0 449568/s 1756 MiB/s 0 0 00:09:46.981 ==================================================================================== 00:09:46.981 Total 449568/s 1756 MiB/s 0 0' 00:09:46.981 21:32:07 -- accel/accel.sh@20 -- # IFS=: 00:09:46.981 21:32:07 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:09:46.981 21:32:07 -- accel/accel.sh@20 -- # read -r var val 00:09:46.981 21:32:07 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:09:46.981 21:32:07 -- accel/accel.sh@12 -- # build_accel_config 00:09:46.981 21:32:07 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:09:46.981 21:32:07 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:09:46.981 21:32:07 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:09:46.981 21:32:07 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:09:46.981 21:32:07 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:09:46.981 21:32:07 -- accel/accel.sh@41 -- # local IFS=, 00:09:46.981 21:32:07 -- accel/accel.sh@42 -- # jq -r . 00:09:47.240 [2024-12-06 21:32:07.484903] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:09:47.240 [2024-12-06 21:32:07.485044] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63595 ] 00:09:47.240 [2024-12-06 21:32:07.657701] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:47.498 [2024-12-06 21:32:07.825173] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:47.498 21:32:07 -- accel/accel.sh@21 -- # val= 00:09:47.498 21:32:07 -- accel/accel.sh@22 -- # case "$var" in 00:09:47.498 21:32:07 -- accel/accel.sh@20 -- # IFS=: 00:09:47.498 21:32:07 -- accel/accel.sh@20 -- # read -r var val 00:09:47.498 21:32:07 -- accel/accel.sh@21 -- # val= 00:09:47.498 21:32:07 -- accel/accel.sh@22 -- # case "$var" in 00:09:47.498 21:32:07 -- accel/accel.sh@20 -- # IFS=: 00:09:47.498 21:32:07 -- accel/accel.sh@20 -- # read -r var val 00:09:47.498 21:32:07 -- accel/accel.sh@21 -- # val=0x1 00:09:47.498 21:32:07 -- accel/accel.sh@22 -- # case "$var" in 00:09:47.498 21:32:07 -- accel/accel.sh@20 -- # IFS=: 00:09:47.498 21:32:07 -- accel/accel.sh@20 -- # read -r var val 00:09:47.498 21:32:07 -- accel/accel.sh@21 -- # val= 00:09:47.498 21:32:07 -- accel/accel.sh@22 -- # case "$var" in 00:09:47.498 21:32:07 -- accel/accel.sh@20 -- # IFS=: 00:09:47.498 21:32:07 -- accel/accel.sh@20 -- # read -r var val 00:09:47.498 21:32:07 -- accel/accel.sh@21 -- # val= 00:09:47.498 21:32:07 -- accel/accel.sh@22 -- # case "$var" in 00:09:47.498 21:32:07 -- accel/accel.sh@20 -- # IFS=: 00:09:47.498 21:32:07 -- accel/accel.sh@20 -- # read -r var val 00:09:47.498 21:32:07 -- accel/accel.sh@21 -- # val=crc32c 00:09:47.498 21:32:07 -- accel/accel.sh@22 -- # case "$var" in 00:09:47.498 21:32:07 -- accel/accel.sh@24 -- # accel_opc=crc32c 00:09:47.498 21:32:07 -- accel/accel.sh@20 -- # IFS=: 00:09:47.498 21:32:07 -- accel/accel.sh@20 -- # read -r var val 00:09:47.498 21:32:07 -- accel/accel.sh@21 -- # val=32 00:09:47.498 21:32:07 -- accel/accel.sh@22 -- # case "$var" in 00:09:47.498 21:32:07 -- accel/accel.sh@20 -- # IFS=: 00:09:47.498 21:32:07 -- accel/accel.sh@20 -- # read -r var val 00:09:47.498 21:32:07 -- 
accel/accel.sh@21 -- # val='4096 bytes' 00:09:47.499 21:32:07 -- accel/accel.sh@22 -- # case "$var" in 00:09:47.499 21:32:07 -- accel/accel.sh@20 -- # IFS=: 00:09:47.499 21:32:07 -- accel/accel.sh@20 -- # read -r var val 00:09:47.499 21:32:07 -- accel/accel.sh@21 -- # val= 00:09:47.499 21:32:07 -- accel/accel.sh@22 -- # case "$var" in 00:09:47.499 21:32:07 -- accel/accel.sh@20 -- # IFS=: 00:09:47.499 21:32:07 -- accel/accel.sh@20 -- # read -r var val 00:09:47.499 21:32:07 -- accel/accel.sh@21 -- # val=software 00:09:47.499 21:32:07 -- accel/accel.sh@22 -- # case "$var" in 00:09:47.499 21:32:07 -- accel/accel.sh@23 -- # accel_module=software 00:09:47.499 21:32:07 -- accel/accel.sh@20 -- # IFS=: 00:09:47.499 21:32:07 -- accel/accel.sh@20 -- # read -r var val 00:09:47.499 21:32:07 -- accel/accel.sh@21 -- # val=32 00:09:47.499 21:32:07 -- accel/accel.sh@22 -- # case "$var" in 00:09:47.499 21:32:07 -- accel/accel.sh@20 -- # IFS=: 00:09:47.499 21:32:07 -- accel/accel.sh@20 -- # read -r var val 00:09:47.499 21:32:07 -- accel/accel.sh@21 -- # val=32 00:09:47.499 21:32:07 -- accel/accel.sh@22 -- # case "$var" in 00:09:47.499 21:32:07 -- accel/accel.sh@20 -- # IFS=: 00:09:47.499 21:32:07 -- accel/accel.sh@20 -- # read -r var val 00:09:47.499 21:32:07 -- accel/accel.sh@21 -- # val=1 00:09:47.499 21:32:07 -- accel/accel.sh@22 -- # case "$var" in 00:09:47.499 21:32:07 -- accel/accel.sh@20 -- # IFS=: 00:09:47.499 21:32:07 -- accel/accel.sh@20 -- # read -r var val 00:09:47.499 21:32:07 -- accel/accel.sh@21 -- # val='1 seconds' 00:09:47.499 21:32:07 -- accel/accel.sh@22 -- # case "$var" in 00:09:47.499 21:32:07 -- accel/accel.sh@20 -- # IFS=: 00:09:47.499 21:32:07 -- accel/accel.sh@20 -- # read -r var val 00:09:47.499 21:32:07 -- accel/accel.sh@21 -- # val=Yes 00:09:47.499 21:32:07 -- accel/accel.sh@22 -- # case "$var" in 00:09:47.499 21:32:07 -- accel/accel.sh@20 -- # IFS=: 00:09:47.499 21:32:07 -- accel/accel.sh@20 -- # read -r var val 00:09:47.499 21:32:07 -- accel/accel.sh@21 -- # val= 00:09:47.756 21:32:07 -- accel/accel.sh@22 -- # case "$var" in 00:09:47.756 21:32:07 -- accel/accel.sh@20 -- # IFS=: 00:09:47.756 21:32:07 -- accel/accel.sh@20 -- # read -r var val 00:09:47.756 21:32:07 -- accel/accel.sh@21 -- # val= 00:09:47.756 21:32:07 -- accel/accel.sh@22 -- # case "$var" in 00:09:47.756 21:32:07 -- accel/accel.sh@20 -- # IFS=: 00:09:47.756 21:32:07 -- accel/accel.sh@20 -- # read -r var val 00:09:49.654 21:32:09 -- accel/accel.sh@21 -- # val= 00:09:49.654 21:32:09 -- accel/accel.sh@22 -- # case "$var" in 00:09:49.654 21:32:09 -- accel/accel.sh@20 -- # IFS=: 00:09:49.654 21:32:09 -- accel/accel.sh@20 -- # read -r var val 00:09:49.654 21:32:09 -- accel/accel.sh@21 -- # val= 00:09:49.654 21:32:09 -- accel/accel.sh@22 -- # case "$var" in 00:09:49.654 21:32:09 -- accel/accel.sh@20 -- # IFS=: 00:09:49.654 21:32:09 -- accel/accel.sh@20 -- # read -r var val 00:09:49.654 21:32:09 -- accel/accel.sh@21 -- # val= 00:09:49.654 21:32:09 -- accel/accel.sh@22 -- # case "$var" in 00:09:49.654 21:32:09 -- accel/accel.sh@20 -- # IFS=: 00:09:49.654 21:32:09 -- accel/accel.sh@20 -- # read -r var val 00:09:49.654 21:32:09 -- accel/accel.sh@21 -- # val= 00:09:49.654 21:32:09 -- accel/accel.sh@22 -- # case "$var" in 00:09:49.654 21:32:09 -- accel/accel.sh@20 -- # IFS=: 00:09:49.655 21:32:09 -- accel/accel.sh@20 -- # read -r var val 00:09:49.655 21:32:09 -- accel/accel.sh@21 -- # val= 00:09:49.655 21:32:09 -- accel/accel.sh@22 -- # case "$var" in 00:09:49.655 21:32:09 -- accel/accel.sh@20 -- # IFS=: 00:09:49.655 21:32:09 -- 
accel/accel.sh@20 -- # read -r var val 00:09:49.655 21:32:09 -- accel/accel.sh@21 -- # val= 00:09:49.655 21:32:09 -- accel/accel.sh@22 -- # case "$var" in 00:09:49.655 21:32:09 -- accel/accel.sh@20 -- # IFS=: 00:09:49.655 21:32:09 -- accel/accel.sh@20 -- # read -r var val 00:09:49.655 21:32:09 -- accel/accel.sh@28 -- # [[ -n software ]] 00:09:49.655 21:32:09 -- accel/accel.sh@28 -- # [[ -n crc32c ]] 00:09:49.655 21:32:09 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:09:49.655 00:09:49.655 real 0m4.587s 00:09:49.655 user 0m4.099s 00:09:49.655 sys 0m0.301s 00:09:49.655 21:32:09 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:49.655 21:32:09 -- common/autotest_common.sh@10 -- # set +x 00:09:49.655 ************************************ 00:09:49.655 END TEST accel_crc32c 00:09:49.655 ************************************ 00:09:49.655 21:32:09 -- accel/accel.sh@94 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:09:49.655 21:32:09 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:09:49.655 21:32:09 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:49.655 21:32:09 -- common/autotest_common.sh@10 -- # set +x 00:09:49.655 ************************************ 00:09:49.655 START TEST accel_crc32c_C2 00:09:49.655 ************************************ 00:09:49.655 21:32:09 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w crc32c -y -C 2 00:09:49.655 21:32:09 -- accel/accel.sh@16 -- # local accel_opc 00:09:49.655 21:32:09 -- accel/accel.sh@17 -- # local accel_module 00:09:49.655 21:32:09 -- accel/accel.sh@18 -- # accel_perf -t 1 -w crc32c -y -C 2 00:09:49.655 21:32:09 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:09:49.655 21:32:09 -- accel/accel.sh@12 -- # build_accel_config 00:09:49.655 21:32:09 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:09:49.655 21:32:09 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:09:49.655 21:32:09 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:09:49.655 21:32:09 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:09:49.655 21:32:09 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:09:49.655 21:32:09 -- accel/accel.sh@41 -- # local IFS=, 00:09:49.655 21:32:09 -- accel/accel.sh@42 -- # jq -r . 00:09:49.655 [2024-12-06 21:32:09.860706] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:09:49.655 [2024-12-06 21:32:09.860866] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63641 ] 00:09:49.655 [2024-12-06 21:32:10.029913] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:49.912 [2024-12-06 21:32:10.215517] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:51.811 21:32:12 -- accel/accel.sh@18 -- # out=' 00:09:51.811 SPDK Configuration: 00:09:51.811 Core mask: 0x1 00:09:51.811 00:09:51.811 Accel Perf Configuration: 00:09:51.811 Workload Type: crc32c 00:09:51.811 CRC-32C seed: 0 00:09:51.811 Transfer size: 4096 bytes 00:09:51.811 Vector count 2 00:09:51.811 Module: software 00:09:51.811 Queue depth: 32 00:09:51.811 Allocate depth: 32 00:09:51.811 # threads/core: 1 00:09:51.811 Run time: 1 seconds 00:09:51.811 Verify: Yes 00:09:51.811 00:09:51.811 Running for 1 seconds... 
00:09:51.811 00:09:51.811 Core,Thread Transfers Bandwidth Failed Miscompares 00:09:51.811 ------------------------------------------------------------------------------------ 00:09:51.811 0,0 333600/s 2606 MiB/s 0 0 00:09:51.811 ==================================================================================== 00:09:51.811 Total 333600/s 1303 MiB/s 0 0' 00:09:51.811 21:32:12 -- accel/accel.sh@20 -- # IFS=: 00:09:51.811 21:32:12 -- accel/accel.sh@20 -- # read -r var val 00:09:51.811 21:32:12 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:09:51.811 21:32:12 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:09:51.811 21:32:12 -- accel/accel.sh@12 -- # build_accel_config 00:09:51.811 21:32:12 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:09:51.811 21:32:12 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:09:51.811 21:32:12 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:09:51.811 21:32:12 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:09:51.811 21:32:12 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:09:51.811 21:32:12 -- accel/accel.sh@41 -- # local IFS=, 00:09:51.811 21:32:12 -- accel/accel.sh@42 -- # jq -r . 00:09:51.811 [2024-12-06 21:32:12.269110] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:09:51.811 [2024-12-06 21:32:12.269308] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63673 ] 00:09:52.069 [2024-12-06 21:32:12.438316] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:52.328 [2024-12-06 21:32:12.609114] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:52.328 21:32:12 -- accel/accel.sh@21 -- # val= 00:09:52.328 21:32:12 -- accel/accel.sh@22 -- # case "$var" in 00:09:52.328 21:32:12 -- accel/accel.sh@20 -- # IFS=: 00:09:52.328 21:32:12 -- accel/accel.sh@20 -- # read -r var val 00:09:52.328 21:32:12 -- accel/accel.sh@21 -- # val= 00:09:52.328 21:32:12 -- accel/accel.sh@22 -- # case "$var" in 00:09:52.328 21:32:12 -- accel/accel.sh@20 -- # IFS=: 00:09:52.328 21:32:12 -- accel/accel.sh@20 -- # read -r var val 00:09:52.328 21:32:12 -- accel/accel.sh@21 -- # val=0x1 00:09:52.328 21:32:12 -- accel/accel.sh@22 -- # case "$var" in 00:09:52.328 21:32:12 -- accel/accel.sh@20 -- # IFS=: 00:09:52.328 21:32:12 -- accel/accel.sh@20 -- # read -r var val 00:09:52.328 21:32:12 -- accel/accel.sh@21 -- # val= 00:09:52.328 21:32:12 -- accel/accel.sh@22 -- # case "$var" in 00:09:52.328 21:32:12 -- accel/accel.sh@20 -- # IFS=: 00:09:52.328 21:32:12 -- accel/accel.sh@20 -- # read -r var val 00:09:52.328 21:32:12 -- accel/accel.sh@21 -- # val= 00:09:52.328 21:32:12 -- accel/accel.sh@22 -- # case "$var" in 00:09:52.328 21:32:12 -- accel/accel.sh@20 -- # IFS=: 00:09:52.328 21:32:12 -- accel/accel.sh@20 -- # read -r var val 00:09:52.328 21:32:12 -- accel/accel.sh@21 -- # val=crc32c 00:09:52.328 21:32:12 -- accel/accel.sh@22 -- # case "$var" in 00:09:52.328 21:32:12 -- accel/accel.sh@24 -- # accel_opc=crc32c 00:09:52.328 21:32:12 -- accel/accel.sh@20 -- # IFS=: 00:09:52.328 21:32:12 -- accel/accel.sh@20 -- # read -r var val 00:09:52.328 21:32:12 -- accel/accel.sh@21 -- # val=0 00:09:52.328 21:32:12 -- accel/accel.sh@22 -- # case "$var" in 00:09:52.328 21:32:12 -- accel/accel.sh@20 -- # IFS=: 00:09:52.328 21:32:12 -- accel/accel.sh@20 -- # read -r var val 00:09:52.328 21:32:12 -- 
accel/accel.sh@21 -- # val='4096 bytes' 00:09:52.328 21:32:12 -- accel/accel.sh@22 -- # case "$var" in 00:09:52.328 21:32:12 -- accel/accel.sh@20 -- # IFS=: 00:09:52.328 21:32:12 -- accel/accel.sh@20 -- # read -r var val 00:09:52.328 21:32:12 -- accel/accel.sh@21 -- # val= 00:09:52.328 21:32:12 -- accel/accel.sh@22 -- # case "$var" in 00:09:52.328 21:32:12 -- accel/accel.sh@20 -- # IFS=: 00:09:52.328 21:32:12 -- accel/accel.sh@20 -- # read -r var val 00:09:52.328 21:32:12 -- accel/accel.sh@21 -- # val=software 00:09:52.328 21:32:12 -- accel/accel.sh@22 -- # case "$var" in 00:09:52.328 21:32:12 -- accel/accel.sh@23 -- # accel_module=software 00:09:52.328 21:32:12 -- accel/accel.sh@20 -- # IFS=: 00:09:52.328 21:32:12 -- accel/accel.sh@20 -- # read -r var val 00:09:52.328 21:32:12 -- accel/accel.sh@21 -- # val=32 00:09:52.328 21:32:12 -- accel/accel.sh@22 -- # case "$var" in 00:09:52.328 21:32:12 -- accel/accel.sh@20 -- # IFS=: 00:09:52.328 21:32:12 -- accel/accel.sh@20 -- # read -r var val 00:09:52.328 21:32:12 -- accel/accel.sh@21 -- # val=32 00:09:52.328 21:32:12 -- accel/accel.sh@22 -- # case "$var" in 00:09:52.328 21:32:12 -- accel/accel.sh@20 -- # IFS=: 00:09:52.328 21:32:12 -- accel/accel.sh@20 -- # read -r var val 00:09:52.328 21:32:12 -- accel/accel.sh@21 -- # val=1 00:09:52.328 21:32:12 -- accel/accel.sh@22 -- # case "$var" in 00:09:52.328 21:32:12 -- accel/accel.sh@20 -- # IFS=: 00:09:52.328 21:32:12 -- accel/accel.sh@20 -- # read -r var val 00:09:52.328 21:32:12 -- accel/accel.sh@21 -- # val='1 seconds' 00:09:52.328 21:32:12 -- accel/accel.sh@22 -- # case "$var" in 00:09:52.328 21:32:12 -- accel/accel.sh@20 -- # IFS=: 00:09:52.328 21:32:12 -- accel/accel.sh@20 -- # read -r var val 00:09:52.328 21:32:12 -- accel/accel.sh@21 -- # val=Yes 00:09:52.328 21:32:12 -- accel/accel.sh@22 -- # case "$var" in 00:09:52.328 21:32:12 -- accel/accel.sh@20 -- # IFS=: 00:09:52.328 21:32:12 -- accel/accel.sh@20 -- # read -r var val 00:09:52.328 21:32:12 -- accel/accel.sh@21 -- # val= 00:09:52.328 21:32:12 -- accel/accel.sh@22 -- # case "$var" in 00:09:52.328 21:32:12 -- accel/accel.sh@20 -- # IFS=: 00:09:52.328 21:32:12 -- accel/accel.sh@20 -- # read -r var val 00:09:52.328 21:32:12 -- accel/accel.sh@21 -- # val= 00:09:52.328 21:32:12 -- accel/accel.sh@22 -- # case "$var" in 00:09:52.328 21:32:12 -- accel/accel.sh@20 -- # IFS=: 00:09:52.328 21:32:12 -- accel/accel.sh@20 -- # read -r var val 00:09:54.251 21:32:14 -- accel/accel.sh@21 -- # val= 00:09:54.251 21:32:14 -- accel/accel.sh@22 -- # case "$var" in 00:09:54.251 21:32:14 -- accel/accel.sh@20 -- # IFS=: 00:09:54.251 21:32:14 -- accel/accel.sh@20 -- # read -r var val 00:09:54.251 21:32:14 -- accel/accel.sh@21 -- # val= 00:09:54.251 21:32:14 -- accel/accel.sh@22 -- # case "$var" in 00:09:54.251 21:32:14 -- accel/accel.sh@20 -- # IFS=: 00:09:54.251 21:32:14 -- accel/accel.sh@20 -- # read -r var val 00:09:54.251 21:32:14 -- accel/accel.sh@21 -- # val= 00:09:54.251 21:32:14 -- accel/accel.sh@22 -- # case "$var" in 00:09:54.251 21:32:14 -- accel/accel.sh@20 -- # IFS=: 00:09:54.251 21:32:14 -- accel/accel.sh@20 -- # read -r var val 00:09:54.251 21:32:14 -- accel/accel.sh@21 -- # val= 00:09:54.251 21:32:14 -- accel/accel.sh@22 -- # case "$var" in 00:09:54.251 21:32:14 -- accel/accel.sh@20 -- # IFS=: 00:09:54.251 21:32:14 -- accel/accel.sh@20 -- # read -r var val 00:09:54.251 21:32:14 -- accel/accel.sh@21 -- # val= 00:09:54.251 21:32:14 -- accel/accel.sh@22 -- # case "$var" in 00:09:54.251 21:32:14 -- accel/accel.sh@20 -- # IFS=: 00:09:54.251 21:32:14 -- 
accel/accel.sh@20 -- # read -r var val 00:09:54.251 21:32:14 -- accel/accel.sh@21 -- # val= 00:09:54.251 21:32:14 -- accel/accel.sh@22 -- # case "$var" in 00:09:54.251 21:32:14 -- accel/accel.sh@20 -- # IFS=: 00:09:54.251 21:32:14 -- accel/accel.sh@20 -- # read -r var val 00:09:54.251 21:32:14 -- accel/accel.sh@28 -- # [[ -n software ]] 00:09:54.251 21:32:14 -- accel/accel.sh@28 -- # [[ -n crc32c ]] 00:09:54.251 21:32:14 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:09:54.251 00:09:54.251 real 0m4.781s 00:09:54.251 user 0m4.269s 00:09:54.251 sys 0m0.328s 00:09:54.251 21:32:14 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:54.251 ************************************ 00:09:54.251 END TEST accel_crc32c_C2 00:09:54.251 ************************************ 00:09:54.251 21:32:14 -- common/autotest_common.sh@10 -- # set +x 00:09:54.251 21:32:14 -- accel/accel.sh@95 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:09:54.251 21:32:14 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:09:54.251 21:32:14 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:54.251 21:32:14 -- common/autotest_common.sh@10 -- # set +x 00:09:54.251 ************************************ 00:09:54.251 START TEST accel_copy 00:09:54.251 ************************************ 00:09:54.251 21:32:14 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w copy -y 00:09:54.251 21:32:14 -- accel/accel.sh@16 -- # local accel_opc 00:09:54.251 21:32:14 -- accel/accel.sh@17 -- # local accel_module 00:09:54.251 21:32:14 -- accel/accel.sh@18 -- # accel_perf -t 1 -w copy -y 00:09:54.251 21:32:14 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:09:54.251 21:32:14 -- accel/accel.sh@12 -- # build_accel_config 00:09:54.251 21:32:14 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:09:54.251 21:32:14 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:09:54.251 21:32:14 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:09:54.251 21:32:14 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:09:54.251 21:32:14 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:09:54.251 21:32:14 -- accel/accel.sh@41 -- # local IFS=, 00:09:54.251 21:32:14 -- accel/accel.sh@42 -- # jq -r . 00:09:54.251 [2024-12-06 21:32:14.687360] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:09:54.251 [2024-12-06 21:32:14.687535] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63714 ] 00:09:54.510 [2024-12-06 21:32:14.858928] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:54.782 [2024-12-06 21:32:15.082922] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:56.683 21:32:17 -- accel/accel.sh@18 -- # out=' 00:09:56.683 SPDK Configuration: 00:09:56.683 Core mask: 0x1 00:09:56.683 00:09:56.683 Accel Perf Configuration: 00:09:56.683 Workload Type: copy 00:09:56.683 Transfer size: 4096 bytes 00:09:56.683 Vector count 1 00:09:56.683 Module: software 00:09:56.683 Queue depth: 32 00:09:56.683 Allocate depth: 32 00:09:56.683 # threads/core: 1 00:09:56.683 Run time: 1 seconds 00:09:56.683 Verify: Yes 00:09:56.683 00:09:56.683 Running for 1 seconds... 
00:09:56.683 00:09:56.683 Core,Thread Transfers Bandwidth Failed Miscompares 00:09:56.683 ------------------------------------------------------------------------------------ 00:09:56.683 0,0 242816/s 948 MiB/s 0 0 00:09:56.683 ==================================================================================== 00:09:56.683 Total 242816/s 948 MiB/s 0 0' 00:09:56.683 21:32:17 -- accel/accel.sh@20 -- # IFS=: 00:09:56.683 21:32:17 -- accel/accel.sh@20 -- # read -r var val 00:09:56.683 21:32:17 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:09:56.683 21:32:17 -- accel/accel.sh@12 -- # build_accel_config 00:09:56.683 21:32:17 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:09:56.683 21:32:17 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:09:56.683 21:32:17 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:09:56.683 21:32:17 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:09:56.683 21:32:17 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:09:56.683 21:32:17 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:09:56.683 21:32:17 -- accel/accel.sh@41 -- # local IFS=, 00:09:56.683 21:32:17 -- accel/accel.sh@42 -- # jq -r . 00:09:56.683 [2024-12-06 21:32:17.108188] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:09:56.683 [2024-12-06 21:32:17.108356] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63740 ] 00:09:56.942 [2024-12-06 21:32:17.278524] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:57.200 [2024-12-06 21:32:17.462631] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:57.200 21:32:17 -- accel/accel.sh@21 -- # val= 00:09:57.200 21:32:17 -- accel/accel.sh@22 -- # case "$var" in 00:09:57.200 21:32:17 -- accel/accel.sh@20 -- # IFS=: 00:09:57.200 21:32:17 -- accel/accel.sh@20 -- # read -r var val 00:09:57.200 21:32:17 -- accel/accel.sh@21 -- # val= 00:09:57.200 21:32:17 -- accel/accel.sh@22 -- # case "$var" in 00:09:57.200 21:32:17 -- accel/accel.sh@20 -- # IFS=: 00:09:57.200 21:32:17 -- accel/accel.sh@20 -- # read -r var val 00:09:57.200 21:32:17 -- accel/accel.sh@21 -- # val=0x1 00:09:57.200 21:32:17 -- accel/accel.sh@22 -- # case "$var" in 00:09:57.200 21:32:17 -- accel/accel.sh@20 -- # IFS=: 00:09:57.200 21:32:17 -- accel/accel.sh@20 -- # read -r var val 00:09:57.200 21:32:17 -- accel/accel.sh@21 -- # val= 00:09:57.200 21:32:17 -- accel/accel.sh@22 -- # case "$var" in 00:09:57.200 21:32:17 -- accel/accel.sh@20 -- # IFS=: 00:09:57.200 21:32:17 -- accel/accel.sh@20 -- # read -r var val 00:09:57.200 21:32:17 -- accel/accel.sh@21 -- # val= 00:09:57.200 21:32:17 -- accel/accel.sh@22 -- # case "$var" in 00:09:57.200 21:32:17 -- accel/accel.sh@20 -- # IFS=: 00:09:57.200 21:32:17 -- accel/accel.sh@20 -- # read -r var val 00:09:57.200 21:32:17 -- accel/accel.sh@21 -- # val=copy 00:09:57.200 21:32:17 -- accel/accel.sh@22 -- # case "$var" in 00:09:57.200 21:32:17 -- accel/accel.sh@24 -- # accel_opc=copy 00:09:57.200 21:32:17 -- accel/accel.sh@20 -- # IFS=: 00:09:57.200 21:32:17 -- accel/accel.sh@20 -- # read -r var val 00:09:57.200 21:32:17 -- accel/accel.sh@21 -- # val='4096 bytes' 00:09:57.200 21:32:17 -- accel/accel.sh@22 -- # case "$var" in 00:09:57.200 21:32:17 -- accel/accel.sh@20 -- # IFS=: 00:09:57.200 21:32:17 -- accel/accel.sh@20 -- # read -r var val 00:09:57.200 21:32:17 -- 
accel/accel.sh@21 -- # val= 00:09:57.200 21:32:17 -- accel/accel.sh@22 -- # case "$var" in 00:09:57.200 21:32:17 -- accel/accel.sh@20 -- # IFS=: 00:09:57.200 21:32:17 -- accel/accel.sh@20 -- # read -r var val 00:09:57.200 21:32:17 -- accel/accel.sh@21 -- # val=software 00:09:57.200 21:32:17 -- accel/accel.sh@22 -- # case "$var" in 00:09:57.200 21:32:17 -- accel/accel.sh@23 -- # accel_module=software 00:09:57.200 21:32:17 -- accel/accel.sh@20 -- # IFS=: 00:09:57.200 21:32:17 -- accel/accel.sh@20 -- # read -r var val 00:09:57.200 21:32:17 -- accel/accel.sh@21 -- # val=32 00:09:57.200 21:32:17 -- accel/accel.sh@22 -- # case "$var" in 00:09:57.200 21:32:17 -- accel/accel.sh@20 -- # IFS=: 00:09:57.200 21:32:17 -- accel/accel.sh@20 -- # read -r var val 00:09:57.200 21:32:17 -- accel/accel.sh@21 -- # val=32 00:09:57.200 21:32:17 -- accel/accel.sh@22 -- # case "$var" in 00:09:57.200 21:32:17 -- accel/accel.sh@20 -- # IFS=: 00:09:57.200 21:32:17 -- accel/accel.sh@20 -- # read -r var val 00:09:57.200 21:32:17 -- accel/accel.sh@21 -- # val=1 00:09:57.200 21:32:17 -- accel/accel.sh@22 -- # case "$var" in 00:09:57.200 21:32:17 -- accel/accel.sh@20 -- # IFS=: 00:09:57.200 21:32:17 -- accel/accel.sh@20 -- # read -r var val 00:09:57.200 21:32:17 -- accel/accel.sh@21 -- # val='1 seconds' 00:09:57.200 21:32:17 -- accel/accel.sh@22 -- # case "$var" in 00:09:57.200 21:32:17 -- accel/accel.sh@20 -- # IFS=: 00:09:57.200 21:32:17 -- accel/accel.sh@20 -- # read -r var val 00:09:57.200 21:32:17 -- accel/accel.sh@21 -- # val=Yes 00:09:57.200 21:32:17 -- accel/accel.sh@22 -- # case "$var" in 00:09:57.200 21:32:17 -- accel/accel.sh@20 -- # IFS=: 00:09:57.200 21:32:17 -- accel/accel.sh@20 -- # read -r var val 00:09:57.200 21:32:17 -- accel/accel.sh@21 -- # val= 00:09:57.200 21:32:17 -- accel/accel.sh@22 -- # case "$var" in 00:09:57.200 21:32:17 -- accel/accel.sh@20 -- # IFS=: 00:09:57.200 21:32:17 -- accel/accel.sh@20 -- # read -r var val 00:09:57.200 21:32:17 -- accel/accel.sh@21 -- # val= 00:09:57.200 21:32:17 -- accel/accel.sh@22 -- # case "$var" in 00:09:57.200 21:32:17 -- accel/accel.sh@20 -- # IFS=: 00:09:57.200 21:32:17 -- accel/accel.sh@20 -- # read -r var val 00:09:59.100 21:32:19 -- accel/accel.sh@21 -- # val= 00:09:59.100 21:32:19 -- accel/accel.sh@22 -- # case "$var" in 00:09:59.100 21:32:19 -- accel/accel.sh@20 -- # IFS=: 00:09:59.100 21:32:19 -- accel/accel.sh@20 -- # read -r var val 00:09:59.100 21:32:19 -- accel/accel.sh@21 -- # val= 00:09:59.100 21:32:19 -- accel/accel.sh@22 -- # case "$var" in 00:09:59.100 21:32:19 -- accel/accel.sh@20 -- # IFS=: 00:09:59.100 21:32:19 -- accel/accel.sh@20 -- # read -r var val 00:09:59.100 21:32:19 -- accel/accel.sh@21 -- # val= 00:09:59.100 21:32:19 -- accel/accel.sh@22 -- # case "$var" in 00:09:59.100 21:32:19 -- accel/accel.sh@20 -- # IFS=: 00:09:59.100 21:32:19 -- accel/accel.sh@20 -- # read -r var val 00:09:59.100 21:32:19 -- accel/accel.sh@21 -- # val= 00:09:59.100 21:32:19 -- accel/accel.sh@22 -- # case "$var" in 00:09:59.100 21:32:19 -- accel/accel.sh@20 -- # IFS=: 00:09:59.100 21:32:19 -- accel/accel.sh@20 -- # read -r var val 00:09:59.100 21:32:19 -- accel/accel.sh@21 -- # val= 00:09:59.100 21:32:19 -- accel/accel.sh@22 -- # case "$var" in 00:09:59.100 21:32:19 -- accel/accel.sh@20 -- # IFS=: 00:09:59.100 21:32:19 -- accel/accel.sh@20 -- # read -r var val 00:09:59.100 21:32:19 -- accel/accel.sh@21 -- # val= 00:09:59.100 21:32:19 -- accel/accel.sh@22 -- # case "$var" in 00:09:59.100 21:32:19 -- accel/accel.sh@20 -- # IFS=: 00:09:59.100 21:32:19 -- 
accel/accel.sh@20 -- # read -r var val 00:09:59.100 ************************************ 00:09:59.100 END TEST accel_copy 00:09:59.100 ************************************ 00:09:59.100 21:32:19 -- accel/accel.sh@28 -- # [[ -n software ]] 00:09:59.100 21:32:19 -- accel/accel.sh@28 -- # [[ -n copy ]] 00:09:59.100 21:32:19 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:09:59.100 00:09:59.100 real 0m4.813s 00:09:59.100 user 0m4.296s 00:09:59.100 sys 0m0.331s 00:09:59.100 21:32:19 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:59.100 21:32:19 -- common/autotest_common.sh@10 -- # set +x 00:09:59.100 21:32:19 -- accel/accel.sh@96 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:09:59.100 21:32:19 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:09:59.100 21:32:19 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:59.100 21:32:19 -- common/autotest_common.sh@10 -- # set +x 00:09:59.100 ************************************ 00:09:59.100 START TEST accel_fill 00:09:59.100 ************************************ 00:09:59.100 21:32:19 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:09:59.100 21:32:19 -- accel/accel.sh@16 -- # local accel_opc 00:09:59.100 21:32:19 -- accel/accel.sh@17 -- # local accel_module 00:09:59.100 21:32:19 -- accel/accel.sh@18 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:09:59.100 21:32:19 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:09:59.100 21:32:19 -- accel/accel.sh@12 -- # build_accel_config 00:09:59.100 21:32:19 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:09:59.100 21:32:19 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:09:59.100 21:32:19 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:09:59.100 21:32:19 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:09:59.100 21:32:19 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:09:59.100 21:32:19 -- accel/accel.sh@41 -- # local IFS=, 00:09:59.100 21:32:19 -- accel/accel.sh@42 -- # jq -r . 00:09:59.100 [2024-12-06 21:32:19.555882] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:09:59.101 [2024-12-06 21:32:19.556505] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63787 ] 00:09:59.358 [2024-12-06 21:32:19.728010] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:59.615 [2024-12-06 21:32:19.906123] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:01.578 21:32:21 -- accel/accel.sh@18 -- # out=' 00:10:01.578 SPDK Configuration: 00:10:01.578 Core mask: 0x1 00:10:01.578 00:10:01.578 Accel Perf Configuration: 00:10:01.578 Workload Type: fill 00:10:01.578 Fill pattern: 0x80 00:10:01.578 Transfer size: 4096 bytes 00:10:01.578 Vector count 1 00:10:01.578 Module: software 00:10:01.578 Queue depth: 64 00:10:01.578 Allocate depth: 64 00:10:01.578 # threads/core: 1 00:10:01.578 Run time: 1 seconds 00:10:01.578 Verify: Yes 00:10:01.578 00:10:01.578 Running for 1 seconds... 
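Every test in this stretch runs under the run_test wrapper, which is what produces the asterisk-framed START/END banners and the real/user/sys triplet (real 0m4.813s for accel_copy just above). A rough reconstruction of its shape, inferred only from the banners and timing lines visible in this log rather than from the actual autotest_common.sh source:

  run_test() {
      local name=$1; shift
      echo '************************'
      echo "START TEST $name"
      echo '************************'
      time "$@"                # the real/user/sys lines in the log come from this
      local rc=$?
      echo '************************'
      echo "END TEST $name"
      echo '************************'
      return "$rc"
  }

The fill run that follows is launched the same way, run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y, where -f 128 sets the fill byte (0x80 in the configuration below) and -q/-a set queue depth and allocate depth to 64.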
00:10:01.578 00:10:01.578 Core,Thread Transfers Bandwidth Failed Miscompares 00:10:01.578 ------------------------------------------------------------------------------------ 00:10:01.578 0,0 388864/s 1519 MiB/s 0 0 00:10:01.578 ==================================================================================== 00:10:01.578 Total 388864/s 1519 MiB/s 0 0' 00:10:01.578 21:32:21 -- accel/accel.sh@20 -- # IFS=: 00:10:01.578 21:32:21 -- accel/accel.sh@20 -- # read -r var val 00:10:01.578 21:32:21 -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:10:01.578 21:32:21 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:10:01.578 21:32:21 -- accel/accel.sh@12 -- # build_accel_config 00:10:01.578 21:32:21 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:01.578 21:32:21 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:01.578 21:32:21 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:01.579 21:32:21 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:01.579 21:32:21 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:01.579 21:32:21 -- accel/accel.sh@41 -- # local IFS=, 00:10:01.579 21:32:21 -- accel/accel.sh@42 -- # jq -r . 00:10:01.579 [2024-12-06 21:32:21.944015] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:10:01.579 [2024-12-06 21:32:21.944184] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63818 ] 00:10:01.837 [2024-12-06 21:32:22.114081] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:01.837 [2024-12-06 21:32:22.280339] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:02.096 21:32:22 -- accel/accel.sh@21 -- # val= 00:10:02.096 21:32:22 -- accel/accel.sh@22 -- # case "$var" in 00:10:02.096 21:32:22 -- accel/accel.sh@20 -- # IFS=: 00:10:02.096 21:32:22 -- accel/accel.sh@20 -- # read -r var val 00:10:02.096 21:32:22 -- accel/accel.sh@21 -- # val= 00:10:02.096 21:32:22 -- accel/accel.sh@22 -- # case "$var" in 00:10:02.096 21:32:22 -- accel/accel.sh@20 -- # IFS=: 00:10:02.096 21:32:22 -- accel/accel.sh@20 -- # read -r var val 00:10:02.096 21:32:22 -- accel/accel.sh@21 -- # val=0x1 00:10:02.096 21:32:22 -- accel/accel.sh@22 -- # case "$var" in 00:10:02.096 21:32:22 -- accel/accel.sh@20 -- # IFS=: 00:10:02.096 21:32:22 -- accel/accel.sh@20 -- # read -r var val 00:10:02.096 21:32:22 -- accel/accel.sh@21 -- # val= 00:10:02.096 21:32:22 -- accel/accel.sh@22 -- # case "$var" in 00:10:02.096 21:32:22 -- accel/accel.sh@20 -- # IFS=: 00:10:02.096 21:32:22 -- accel/accel.sh@20 -- # read -r var val 00:10:02.096 21:32:22 -- accel/accel.sh@21 -- # val= 00:10:02.096 21:32:22 -- accel/accel.sh@22 -- # case "$var" in 00:10:02.096 21:32:22 -- accel/accel.sh@20 -- # IFS=: 00:10:02.096 21:32:22 -- accel/accel.sh@20 -- # read -r var val 00:10:02.096 21:32:22 -- accel/accel.sh@21 -- # val=fill 00:10:02.096 21:32:22 -- accel/accel.sh@22 -- # case "$var" in 00:10:02.096 21:32:22 -- accel/accel.sh@24 -- # accel_opc=fill 00:10:02.096 21:32:22 -- accel/accel.sh@20 -- # IFS=: 00:10:02.096 21:32:22 -- accel/accel.sh@20 -- # read -r var val 00:10:02.096 21:32:22 -- accel/accel.sh@21 -- # val=0x80 00:10:02.096 21:32:22 -- accel/accel.sh@22 -- # case "$var" in 00:10:02.096 21:32:22 -- accel/accel.sh@20 -- # IFS=: 00:10:02.096 21:32:22 -- accel/accel.sh@20 -- # read -r var val 
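The fill numbers above are exact: 388864 transfers/s at 4096 bytes per transfer is precisely 1519 MiB/s. Recomputed from the values in the table:

  awk 'BEGIN { printf "%.1f MiB/s\n", 388864 * 4096 / 1048576 }'   # -> 1519.0 MiB/s

That is roughly 1.6x the copy rate, consistent with fill being write-only while copy both reads and writes each buffer.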
00:10:02.096 21:32:22 -- accel/accel.sh@21 -- # val='4096 bytes' 00:10:02.096 21:32:22 -- accel/accel.sh@22 -- # case "$var" in 00:10:02.096 21:32:22 -- accel/accel.sh@20 -- # IFS=: 00:10:02.096 21:32:22 -- accel/accel.sh@20 -- # read -r var val 00:10:02.096 21:32:22 -- accel/accel.sh@21 -- # val= 00:10:02.096 21:32:22 -- accel/accel.sh@22 -- # case "$var" in 00:10:02.096 21:32:22 -- accel/accel.sh@20 -- # IFS=: 00:10:02.096 21:32:22 -- accel/accel.sh@20 -- # read -r var val 00:10:02.096 21:32:22 -- accel/accel.sh@21 -- # val=software 00:10:02.096 21:32:22 -- accel/accel.sh@22 -- # case "$var" in 00:10:02.096 21:32:22 -- accel/accel.sh@23 -- # accel_module=software 00:10:02.096 21:32:22 -- accel/accel.sh@20 -- # IFS=: 00:10:02.096 21:32:22 -- accel/accel.sh@20 -- # read -r var val 00:10:02.096 21:32:22 -- accel/accel.sh@21 -- # val=64 00:10:02.096 21:32:22 -- accel/accel.sh@22 -- # case "$var" in 00:10:02.096 21:32:22 -- accel/accel.sh@20 -- # IFS=: 00:10:02.096 21:32:22 -- accel/accel.sh@20 -- # read -r var val 00:10:02.096 21:32:22 -- accel/accel.sh@21 -- # val=64 00:10:02.096 21:32:22 -- accel/accel.sh@22 -- # case "$var" in 00:10:02.096 21:32:22 -- accel/accel.sh@20 -- # IFS=: 00:10:02.096 21:32:22 -- accel/accel.sh@20 -- # read -r var val 00:10:02.096 21:32:22 -- accel/accel.sh@21 -- # val=1 00:10:02.096 21:32:22 -- accel/accel.sh@22 -- # case "$var" in 00:10:02.096 21:32:22 -- accel/accel.sh@20 -- # IFS=: 00:10:02.096 21:32:22 -- accel/accel.sh@20 -- # read -r var val 00:10:02.096 21:32:22 -- accel/accel.sh@21 -- # val='1 seconds' 00:10:02.096 21:32:22 -- accel/accel.sh@22 -- # case "$var" in 00:10:02.096 21:32:22 -- accel/accel.sh@20 -- # IFS=: 00:10:02.096 21:32:22 -- accel/accel.sh@20 -- # read -r var val 00:10:02.096 21:32:22 -- accel/accel.sh@21 -- # val=Yes 00:10:02.096 21:32:22 -- accel/accel.sh@22 -- # case "$var" in 00:10:02.096 21:32:22 -- accel/accel.sh@20 -- # IFS=: 00:10:02.096 21:32:22 -- accel/accel.sh@20 -- # read -r var val 00:10:02.096 21:32:22 -- accel/accel.sh@21 -- # val= 00:10:02.096 21:32:22 -- accel/accel.sh@22 -- # case "$var" in 00:10:02.096 21:32:22 -- accel/accel.sh@20 -- # IFS=: 00:10:02.096 21:32:22 -- accel/accel.sh@20 -- # read -r var val 00:10:02.096 21:32:22 -- accel/accel.sh@21 -- # val= 00:10:02.096 21:32:22 -- accel/accel.sh@22 -- # case "$var" in 00:10:02.096 21:32:22 -- accel/accel.sh@20 -- # IFS=: 00:10:02.096 21:32:22 -- accel/accel.sh@20 -- # read -r var val 00:10:03.999 21:32:24 -- accel/accel.sh@21 -- # val= 00:10:03.999 21:32:24 -- accel/accel.sh@22 -- # case "$var" in 00:10:03.999 21:32:24 -- accel/accel.sh@20 -- # IFS=: 00:10:03.999 21:32:24 -- accel/accel.sh@20 -- # read -r var val 00:10:03.999 21:32:24 -- accel/accel.sh@21 -- # val= 00:10:03.999 21:32:24 -- accel/accel.sh@22 -- # case "$var" in 00:10:03.999 21:32:24 -- accel/accel.sh@20 -- # IFS=: 00:10:03.999 21:32:24 -- accel/accel.sh@20 -- # read -r var val 00:10:03.999 21:32:24 -- accel/accel.sh@21 -- # val= 00:10:03.999 21:32:24 -- accel/accel.sh@22 -- # case "$var" in 00:10:03.999 21:32:24 -- accel/accel.sh@20 -- # IFS=: 00:10:03.999 21:32:24 -- accel/accel.sh@20 -- # read -r var val 00:10:03.999 21:32:24 -- accel/accel.sh@21 -- # val= 00:10:03.999 21:32:24 -- accel/accel.sh@22 -- # case "$var" in 00:10:03.999 21:32:24 -- accel/accel.sh@20 -- # IFS=: 00:10:03.999 21:32:24 -- accel/accel.sh@20 -- # read -r var val 00:10:03.999 21:32:24 -- accel/accel.sh@21 -- # val= 00:10:03.999 21:32:24 -- accel/accel.sh@22 -- # case "$var" in 00:10:03.999 21:32:24 -- accel/accel.sh@20 -- # IFS=: 
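The dense accel.sh@20-22 runs above and below (IFS=:, read -r var val, case "$var" in, followed by val=... entries) are xtrace output from a loop that walks the captured accel_perf report line by line and dispatches on the field before the colon. A plausible reconstruction from the trace alone; the case arms are illustrative, though accel_opc and accel_module are the names the trace itself shows being set:

  while IFS=: read -r var val; do          # accel.sh@20 in the trace
      case "$var" in                       # accel.sh@22
          *'Workload Type'*) accel_opc=${val# } ;;     # e.g. accel_opc=fill (accel.sh@24)
          *Module*)          accel_module=${val# } ;;  # e.g. accel_module=software (accel.sh@23)
      esac
  done <<< "$out"                          # $out holds the captured report; ${val# } drops the leading space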
00:10:03.999 21:32:24 -- accel/accel.sh@20 -- # read -r var val 00:10:03.999 21:32:24 -- accel/accel.sh@21 -- # val= 00:10:03.999 21:32:24 -- accel/accel.sh@22 -- # case "$var" in 00:10:03.999 21:32:24 -- accel/accel.sh@20 -- # IFS=: 00:10:03.999 21:32:24 -- accel/accel.sh@20 -- # read -r var val 00:10:03.999 21:32:24 -- accel/accel.sh@28 -- # [[ -n software ]] 00:10:03.999 21:32:24 -- accel/accel.sh@28 -- # [[ -n fill ]] 00:10:03.999 21:32:24 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:10:03.999 00:10:03.999 real 0m4.785s 00:10:03.999 user 0m4.253s 00:10:03.999 sys 0m0.348s 00:10:03.999 21:32:24 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:10:03.999 21:32:24 -- common/autotest_common.sh@10 -- # set +x 00:10:03.999 ************************************ 00:10:03.999 END TEST accel_fill 00:10:03.999 ************************************ 00:10:03.999 21:32:24 -- accel/accel.sh@97 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:10:03.999 21:32:24 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:10:03.999 21:32:24 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:03.999 21:32:24 -- common/autotest_common.sh@10 -- # set +x 00:10:03.999 ************************************ 00:10:03.999 START TEST accel_copy_crc32c 00:10:03.999 ************************************ 00:10:03.999 21:32:24 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w copy_crc32c -y 00:10:03.999 21:32:24 -- accel/accel.sh@16 -- # local accel_opc 00:10:03.999 21:32:24 -- accel/accel.sh@17 -- # local accel_module 00:10:03.999 21:32:24 -- accel/accel.sh@18 -- # accel_perf -t 1 -w copy_crc32c -y 00:10:03.999 21:32:24 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:10:03.999 21:32:24 -- accel/accel.sh@12 -- # build_accel_config 00:10:03.999 21:32:24 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:03.999 21:32:24 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:03.999 21:32:24 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:03.999 21:32:24 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:03.999 21:32:24 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:03.999 21:32:24 -- accel/accel.sh@41 -- # local IFS=, 00:10:03.999 21:32:24 -- accel/accel.sh@42 -- # jq -r . 00:10:03.999 [2024-12-06 21:32:24.381074] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:10:03.999 [2024-12-06 21:32:24.381237] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63865 ] 00:10:04.258 [2024-12-06 21:32:24.547310] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:04.258 [2024-12-06 21:32:24.732857] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:06.787 21:32:26 -- accel/accel.sh@18 -- # out=' 00:10:06.787 SPDK Configuration: 00:10:06.787 Core mask: 0x1 00:10:06.787 00:10:06.787 Accel Perf Configuration: 00:10:06.787 Workload Type: copy_crc32c 00:10:06.787 CRC-32C seed: 0 00:10:06.787 Vector size: 4096 bytes 00:10:06.787 Transfer size: 4096 bytes 00:10:06.787 Vector count 1 00:10:06.787 Module: software 00:10:06.787 Queue depth: 32 00:10:06.787 Allocate depth: 32 00:10:06.787 # threads/core: 1 00:10:06.787 Run time: 1 seconds 00:10:06.787 Verify: Yes 00:10:06.787 00:10:06.787 Running for 1 seconds... 
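copy_crc32c copies each 4096-byte buffer and computes CRC-32C over it (seed 0, per the configuration above) in one operation, so its software rate lands below plain copy's. A tiny helper for converting the transfers/s column in the table that follows into MiB/s; the helper name is invented here, it is not part of the harness:

  mibps() { awk -v t="$1" -v s="$2" 'BEGIN { printf "%.1f MiB/s\n", t * s / 1048576 }'; }
  mibps 197088 4096    # first copy_crc32c run below -> 769.9 MiB/s; the table truncates to 769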
00:10:06.787 00:10:06.787 Core,Thread Transfers Bandwidth Failed Miscompares 00:10:06.787 ------------------------------------------------------------------------------------ 00:10:06.787 0,0 197088/s 769 MiB/s 0 0 00:10:06.787 ==================================================================================== 00:10:06.787 Total 197088/s 769 MiB/s 0 0' 00:10:06.787 21:32:26 -- accel/accel.sh@20 -- # IFS=: 00:10:06.787 21:32:26 -- accel/accel.sh@20 -- # read -r var val 00:10:06.787 21:32:26 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:10:06.787 21:32:26 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:10:06.787 21:32:26 -- accel/accel.sh@12 -- # build_accel_config 00:10:06.787 21:32:26 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:06.787 21:32:26 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:06.787 21:32:26 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:06.787 21:32:26 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:06.787 21:32:26 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:06.787 21:32:26 -- accel/accel.sh@41 -- # local IFS=, 00:10:06.787 21:32:26 -- accel/accel.sh@42 -- # jq -r . 00:10:06.787 [2024-12-06 21:32:26.844180] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:10:06.787 [2024-12-06 21:32:26.844351] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63891 ] 00:10:06.787 [2024-12-06 21:32:27.016518] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:06.787 [2024-12-06 21:32:27.207697] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:07.045 21:32:27 -- accel/accel.sh@21 -- # val= 00:10:07.045 21:32:27 -- accel/accel.sh@22 -- # case "$var" in 00:10:07.045 21:32:27 -- accel/accel.sh@20 -- # IFS=: 00:10:07.045 21:32:27 -- accel/accel.sh@20 -- # read -r var val 00:10:07.045 21:32:27 -- accel/accel.sh@21 -- # val= 00:10:07.045 21:32:27 -- accel/accel.sh@22 -- # case "$var" in 00:10:07.045 21:32:27 -- accel/accel.sh@20 -- # IFS=: 00:10:07.045 21:32:27 -- accel/accel.sh@20 -- # read -r var val 00:10:07.045 21:32:27 -- accel/accel.sh@21 -- # val=0x1 00:10:07.045 21:32:27 -- accel/accel.sh@22 -- # case "$var" in 00:10:07.045 21:32:27 -- accel/accel.sh@20 -- # IFS=: 00:10:07.045 21:32:27 -- accel/accel.sh@20 -- # read -r var val 00:10:07.045 21:32:27 -- accel/accel.sh@21 -- # val= 00:10:07.045 21:32:27 -- accel/accel.sh@22 -- # case "$var" in 00:10:07.045 21:32:27 -- accel/accel.sh@20 -- # IFS=: 00:10:07.045 21:32:27 -- accel/accel.sh@20 -- # read -r var val 00:10:07.045 21:32:27 -- accel/accel.sh@21 -- # val= 00:10:07.045 21:32:27 -- accel/accel.sh@22 -- # case "$var" in 00:10:07.045 21:32:27 -- accel/accel.sh@20 -- # IFS=: 00:10:07.045 21:32:27 -- accel/accel.sh@20 -- # read -r var val 00:10:07.045 21:32:27 -- accel/accel.sh@21 -- # val=copy_crc32c 00:10:07.045 21:32:27 -- accel/accel.sh@22 -- # case "$var" in 00:10:07.045 21:32:27 -- accel/accel.sh@24 -- # accel_opc=copy_crc32c 00:10:07.045 21:32:27 -- accel/accel.sh@20 -- # IFS=: 00:10:07.045 21:32:27 -- accel/accel.sh@20 -- # read -r var val 00:10:07.045 21:32:27 -- accel/accel.sh@21 -- # val=0 00:10:07.045 21:32:27 -- accel/accel.sh@22 -- # case "$var" in 00:10:07.045 21:32:27 -- accel/accel.sh@20 -- # IFS=: 00:10:07.045 21:32:27 -- accel/accel.sh@20 -- # read -r var val 00:10:07.045 
21:32:27 -- accel/accel.sh@21 -- # val='4096 bytes' 00:10:07.045 21:32:27 -- accel/accel.sh@22 -- # case "$var" in 00:10:07.045 21:32:27 -- accel/accel.sh@20 -- # IFS=: 00:10:07.045 21:32:27 -- accel/accel.sh@20 -- # read -r var val 00:10:07.045 21:32:27 -- accel/accel.sh@21 -- # val='4096 bytes' 00:10:07.045 21:32:27 -- accel/accel.sh@22 -- # case "$var" in 00:10:07.045 21:32:27 -- accel/accel.sh@20 -- # IFS=: 00:10:07.045 21:32:27 -- accel/accel.sh@20 -- # read -r var val 00:10:07.045 21:32:27 -- accel/accel.sh@21 -- # val= 00:10:07.045 21:32:27 -- accel/accel.sh@22 -- # case "$var" in 00:10:07.045 21:32:27 -- accel/accel.sh@20 -- # IFS=: 00:10:07.045 21:32:27 -- accel/accel.sh@20 -- # read -r var val 00:10:07.045 21:32:27 -- accel/accel.sh@21 -- # val=software 00:10:07.045 21:32:27 -- accel/accel.sh@22 -- # case "$var" in 00:10:07.045 21:32:27 -- accel/accel.sh@23 -- # accel_module=software 00:10:07.045 21:32:27 -- accel/accel.sh@20 -- # IFS=: 00:10:07.045 21:32:27 -- accel/accel.sh@20 -- # read -r var val 00:10:07.045 21:32:27 -- accel/accel.sh@21 -- # val=32 00:10:07.045 21:32:27 -- accel/accel.sh@22 -- # case "$var" in 00:10:07.045 21:32:27 -- accel/accel.sh@20 -- # IFS=: 00:10:07.045 21:32:27 -- accel/accel.sh@20 -- # read -r var val 00:10:07.045 21:32:27 -- accel/accel.sh@21 -- # val=32 00:10:07.045 21:32:27 -- accel/accel.sh@22 -- # case "$var" in 00:10:07.045 21:32:27 -- accel/accel.sh@20 -- # IFS=: 00:10:07.045 21:32:27 -- accel/accel.sh@20 -- # read -r var val 00:10:07.045 21:32:27 -- accel/accel.sh@21 -- # val=1 00:10:07.045 21:32:27 -- accel/accel.sh@22 -- # case "$var" in 00:10:07.045 21:32:27 -- accel/accel.sh@20 -- # IFS=: 00:10:07.045 21:32:27 -- accel/accel.sh@20 -- # read -r var val 00:10:07.045 21:32:27 -- accel/accel.sh@21 -- # val='1 seconds' 00:10:07.045 21:32:27 -- accel/accel.sh@22 -- # case "$var" in 00:10:07.045 21:32:27 -- accel/accel.sh@20 -- # IFS=: 00:10:07.045 21:32:27 -- accel/accel.sh@20 -- # read -r var val 00:10:07.045 21:32:27 -- accel/accel.sh@21 -- # val=Yes 00:10:07.045 21:32:27 -- accel/accel.sh@22 -- # case "$var" in 00:10:07.045 21:32:27 -- accel/accel.sh@20 -- # IFS=: 00:10:07.045 21:32:27 -- accel/accel.sh@20 -- # read -r var val 00:10:07.045 21:32:27 -- accel/accel.sh@21 -- # val= 00:10:07.045 21:32:27 -- accel/accel.sh@22 -- # case "$var" in 00:10:07.045 21:32:27 -- accel/accel.sh@20 -- # IFS=: 00:10:07.045 21:32:27 -- accel/accel.sh@20 -- # read -r var val 00:10:07.045 21:32:27 -- accel/accel.sh@21 -- # val= 00:10:07.045 21:32:27 -- accel/accel.sh@22 -- # case "$var" in 00:10:07.045 21:32:27 -- accel/accel.sh@20 -- # IFS=: 00:10:07.045 21:32:27 -- accel/accel.sh@20 -- # read -r var val 00:10:08.941 21:32:29 -- accel/accel.sh@21 -- # val= 00:10:08.941 21:32:29 -- accel/accel.sh@22 -- # case "$var" in 00:10:08.941 21:32:29 -- accel/accel.sh@20 -- # IFS=: 00:10:08.941 21:32:29 -- accel/accel.sh@20 -- # read -r var val 00:10:08.941 21:32:29 -- accel/accel.sh@21 -- # val= 00:10:08.941 21:32:29 -- accel/accel.sh@22 -- # case "$var" in 00:10:08.941 21:32:29 -- accel/accel.sh@20 -- # IFS=: 00:10:08.941 21:32:29 -- accel/accel.sh@20 -- # read -r var val 00:10:08.941 21:32:29 -- accel/accel.sh@21 -- # val= 00:10:08.941 21:32:29 -- accel/accel.sh@22 -- # case "$var" in 00:10:08.941 21:32:29 -- accel/accel.sh@20 -- # IFS=: 00:10:08.941 21:32:29 -- accel/accel.sh@20 -- # read -r var val 00:10:08.941 21:32:29 -- accel/accel.sh@21 -- # val= 00:10:08.941 21:32:29 -- accel/accel.sh@22 -- # case "$var" in 00:10:08.941 21:32:29 -- accel/accel.sh@20 -- # IFS=: 
00:10:08.941 21:32:29 -- accel/accel.sh@20 -- # read -r var val 00:10:08.941 21:32:29 -- accel/accel.sh@21 -- # val= 00:10:08.941 21:32:29 -- accel/accel.sh@22 -- # case "$var" in 00:10:08.941 21:32:29 -- accel/accel.sh@20 -- # IFS=: 00:10:08.941 21:32:29 -- accel/accel.sh@20 -- # read -r var val 00:10:08.941 21:32:29 -- accel/accel.sh@21 -- # val= 00:10:08.941 21:32:29 -- accel/accel.sh@22 -- # case "$var" in 00:10:08.941 21:32:29 -- accel/accel.sh@20 -- # IFS=: 00:10:08.941 21:32:29 -- accel/accel.sh@20 -- # read -r var val 00:10:08.941 21:32:29 -- accel/accel.sh@28 -- # [[ -n software ]] 00:10:08.941 21:32:29 -- accel/accel.sh@28 -- # [[ -n copy_crc32c ]] 00:10:08.941 21:32:29 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:10:08.941 00:10:08.941 real 0m4.930s 00:10:08.941 user 0m4.390s 00:10:08.941 sys 0m0.353s 00:10:08.941 21:32:29 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:10:08.941 ************************************ 00:10:08.941 END TEST accel_copy_crc32c 00:10:08.941 ************************************ 00:10:08.941 21:32:29 -- common/autotest_common.sh@10 -- # set +x 00:10:08.941 21:32:29 -- accel/accel.sh@98 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:10:08.941 21:32:29 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:10:08.941 21:32:29 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:08.942 21:32:29 -- common/autotest_common.sh@10 -- # set +x 00:10:08.942 ************************************ 00:10:08.942 START TEST accel_copy_crc32c_C2 00:10:08.942 ************************************ 00:10:08.942 21:32:29 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:10:08.942 21:32:29 -- accel/accel.sh@16 -- # local accel_opc 00:10:08.942 21:32:29 -- accel/accel.sh@17 -- # local accel_module 00:10:08.942 21:32:29 -- accel/accel.sh@18 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:10:08.942 21:32:29 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:10:08.942 21:32:29 -- accel/accel.sh@12 -- # build_accel_config 00:10:08.942 21:32:29 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:08.942 21:32:29 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:08.942 21:32:29 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:08.942 21:32:29 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:08.942 21:32:29 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:08.942 21:32:29 -- accel/accel.sh@41 -- # local IFS=, 00:10:08.942 21:32:29 -- accel/accel.sh@42 -- # jq -r . 00:10:08.942 [2024-12-06 21:32:29.359741] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:10:08.942 [2024-12-06 21:32:29.359899] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63932 ] 00:10:09.199 [2024-12-06 21:32:29.532780] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:09.457 [2024-12-06 21:32:29.724381] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:11.355 21:32:31 -- accel/accel.sh@18 -- # out=' 00:10:11.355 SPDK Configuration: 00:10:11.355 Core mask: 0x1 00:10:11.355 00:10:11.355 Accel Perf Configuration: 00:10:11.355 Workload Type: copy_crc32c 00:10:11.355 CRC-32C seed: 0 00:10:11.355 Vector size: 4096 bytes 00:10:11.355 Transfer size: 8192 bytes 00:10:11.355 Vector count 2 00:10:11.355 Module: software 00:10:11.355 Queue depth: 32 00:10:11.355 Allocate depth: 32 00:10:11.355 # threads/core: 1 00:10:11.355 Run time: 1 seconds 00:10:11.355 Verify: Yes 00:10:11.355 00:10:11.355 Running for 1 seconds... 00:10:11.355 00:10:11.355 Core,Thread Transfers Bandwidth Failed Miscompares 00:10:11.355 ------------------------------------------------------------------------------------ 00:10:11.355 0,0 136768/s 1068 MiB/s 0 0 00:10:11.355 ==================================================================================== 00:10:11.355 Total 136768/s 534 MiB/s 0 0' 00:10:11.355 21:32:31 -- accel/accel.sh@20 -- # IFS=: 00:10:11.355 21:32:31 -- accel/accel.sh@20 -- # read -r var val 00:10:11.356 21:32:31 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:10:11.356 21:32:31 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:10:11.356 21:32:31 -- accel/accel.sh@12 -- # build_accel_config 00:10:11.356 21:32:31 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:11.356 21:32:31 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:11.356 21:32:31 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:11.356 21:32:31 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:11.356 21:32:31 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:11.356 21:32:31 -- accel/accel.sh@41 -- # local IFS=, 00:10:11.356 21:32:31 -- accel/accel.sh@42 -- # jq -r . 00:10:11.356 [2024-12-06 21:32:31.833099] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:10:11.356 [2024-12-06 21:32:31.833241] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63969 ] 00:10:11.614 [2024-12-06 21:32:31.993712] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:11.873 [2024-12-06 21:32:32.184245] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:12.131 21:32:32 -- accel/accel.sh@21 -- # val= 00:10:12.132 21:32:32 -- accel/accel.sh@22 -- # case "$var" in 00:10:12.132 21:32:32 -- accel/accel.sh@20 -- # IFS=: 00:10:12.132 21:32:32 -- accel/accel.sh@20 -- # read -r var val 00:10:12.132 21:32:32 -- accel/accel.sh@21 -- # val= 00:10:12.132 21:32:32 -- accel/accel.sh@22 -- # case "$var" in 00:10:12.132 21:32:32 -- accel/accel.sh@20 -- # IFS=: 00:10:12.132 21:32:32 -- accel/accel.sh@20 -- # read -r var val 00:10:12.132 21:32:32 -- accel/accel.sh@21 -- # val=0x1 00:10:12.132 21:32:32 -- accel/accel.sh@22 -- # case "$var" in 00:10:12.132 21:32:32 -- accel/accel.sh@20 -- # IFS=: 00:10:12.132 21:32:32 -- accel/accel.sh@20 -- # read -r var val 00:10:12.132 21:32:32 -- accel/accel.sh@21 -- # val= 00:10:12.132 21:32:32 -- accel/accel.sh@22 -- # case "$var" in 00:10:12.132 21:32:32 -- accel/accel.sh@20 -- # IFS=: 00:10:12.132 21:32:32 -- accel/accel.sh@20 -- # read -r var val 00:10:12.132 21:32:32 -- accel/accel.sh@21 -- # val= 00:10:12.132 21:32:32 -- accel/accel.sh@22 -- # case "$var" in 00:10:12.132 21:32:32 -- accel/accel.sh@20 -- # IFS=: 00:10:12.132 21:32:32 -- accel/accel.sh@20 -- # read -r var val 00:10:12.132 21:32:32 -- accel/accel.sh@21 -- # val=copy_crc32c 00:10:12.132 21:32:32 -- accel/accel.sh@22 -- # case "$var" in 00:10:12.132 21:32:32 -- accel/accel.sh@24 -- # accel_opc=copy_crc32c 00:10:12.132 21:32:32 -- accel/accel.sh@20 -- # IFS=: 00:10:12.132 21:32:32 -- accel/accel.sh@20 -- # read -r var val 00:10:12.132 21:32:32 -- accel/accel.sh@21 -- # val=0 00:10:12.132 21:32:32 -- accel/accel.sh@22 -- # case "$var" in 00:10:12.132 21:32:32 -- accel/accel.sh@20 -- # IFS=: 00:10:12.132 21:32:32 -- accel/accel.sh@20 -- # read -r var val 00:10:12.132 21:32:32 -- accel/accel.sh@21 -- # val='4096 bytes' 00:10:12.132 21:32:32 -- accel/accel.sh@22 -- # case "$var" in 00:10:12.132 21:32:32 -- accel/accel.sh@20 -- # IFS=: 00:10:12.132 21:32:32 -- accel/accel.sh@20 -- # read -r var val 00:10:12.132 21:32:32 -- accel/accel.sh@21 -- # val='8192 bytes' 00:10:12.132 21:32:32 -- accel/accel.sh@22 -- # case "$var" in 00:10:12.132 21:32:32 -- accel/accel.sh@20 -- # IFS=: 00:10:12.132 21:32:32 -- accel/accel.sh@20 -- # read -r var val 00:10:12.132 21:32:32 -- accel/accel.sh@21 -- # val= 00:10:12.132 21:32:32 -- accel/accel.sh@22 -- # case "$var" in 00:10:12.132 21:32:32 -- accel/accel.sh@20 -- # IFS=: 00:10:12.132 21:32:32 -- accel/accel.sh@20 -- # read -r var val 00:10:12.132 21:32:32 -- accel/accel.sh@21 -- # val=software 00:10:12.132 21:32:32 -- accel/accel.sh@22 -- # case "$var" in 00:10:12.132 21:32:32 -- accel/accel.sh@23 -- # accel_module=software 00:10:12.132 21:32:32 -- accel/accel.sh@20 -- # IFS=: 00:10:12.132 21:32:32 -- accel/accel.sh@20 -- # read -r var val 00:10:12.132 21:32:32 -- accel/accel.sh@21 -- # val=32 00:10:12.132 21:32:32 -- accel/accel.sh@22 -- # case "$var" in 00:10:12.132 21:32:32 -- accel/accel.sh@20 -- # IFS=: 00:10:12.132 21:32:32 -- accel/accel.sh@20 -- # read -r var val 00:10:12.132 21:32:32 -- accel/accel.sh@21 -- # val=32 
00:10:12.132 21:32:32 -- accel/accel.sh@22 -- # case "$var" in 00:10:12.132 21:32:32 -- accel/accel.sh@20 -- # IFS=: 00:10:12.132 21:32:32 -- accel/accel.sh@20 -- # read -r var val 00:10:12.132 21:32:32 -- accel/accel.sh@21 -- # val=1 00:10:12.132 21:32:32 -- accel/accel.sh@22 -- # case "$var" in 00:10:12.132 21:32:32 -- accel/accel.sh@20 -- # IFS=: 00:10:12.132 21:32:32 -- accel/accel.sh@20 -- # read -r var val 00:10:12.132 21:32:32 -- accel/accel.sh@21 -- # val='1 seconds' 00:10:12.132 21:32:32 -- accel/accel.sh@22 -- # case "$var" in 00:10:12.132 21:32:32 -- accel/accel.sh@20 -- # IFS=: 00:10:12.132 21:32:32 -- accel/accel.sh@20 -- # read -r var val 00:10:12.132 21:32:32 -- accel/accel.sh@21 -- # val=Yes 00:10:12.132 21:32:32 -- accel/accel.sh@22 -- # case "$var" in 00:10:12.132 21:32:32 -- accel/accel.sh@20 -- # IFS=: 00:10:12.132 21:32:32 -- accel/accel.sh@20 -- # read -r var val 00:10:12.132 21:32:32 -- accel/accel.sh@21 -- # val= 00:10:12.132 21:32:32 -- accel/accel.sh@22 -- # case "$var" in 00:10:12.132 21:32:32 -- accel/accel.sh@20 -- # IFS=: 00:10:12.132 21:32:32 -- accel/accel.sh@20 -- # read -r var val 00:10:12.132 21:32:32 -- accel/accel.sh@21 -- # val= 00:10:12.132 21:32:32 -- accel/accel.sh@22 -- # case "$var" in 00:10:12.132 21:32:32 -- accel/accel.sh@20 -- # IFS=: 00:10:12.132 21:32:32 -- accel/accel.sh@20 -- # read -r var val 00:10:14.032 21:32:34 -- accel/accel.sh@21 -- # val= 00:10:14.032 21:32:34 -- accel/accel.sh@22 -- # case "$var" in 00:10:14.032 21:32:34 -- accel/accel.sh@20 -- # IFS=: 00:10:14.032 21:32:34 -- accel/accel.sh@20 -- # read -r var val 00:10:14.032 21:32:34 -- accel/accel.sh@21 -- # val= 00:10:14.032 21:32:34 -- accel/accel.sh@22 -- # case "$var" in 00:10:14.032 21:32:34 -- accel/accel.sh@20 -- # IFS=: 00:10:14.032 21:32:34 -- accel/accel.sh@20 -- # read -r var val 00:10:14.032 21:32:34 -- accel/accel.sh@21 -- # val= 00:10:14.032 21:32:34 -- accel/accel.sh@22 -- # case "$var" in 00:10:14.032 21:32:34 -- accel/accel.sh@20 -- # IFS=: 00:10:14.032 21:32:34 -- accel/accel.sh@20 -- # read -r var val 00:10:14.032 21:32:34 -- accel/accel.sh@21 -- # val= 00:10:14.032 21:32:34 -- accel/accel.sh@22 -- # case "$var" in 00:10:14.032 21:32:34 -- accel/accel.sh@20 -- # IFS=: 00:10:14.032 21:32:34 -- accel/accel.sh@20 -- # read -r var val 00:10:14.032 21:32:34 -- accel/accel.sh@21 -- # val= 00:10:14.032 21:32:34 -- accel/accel.sh@22 -- # case "$var" in 00:10:14.032 21:32:34 -- accel/accel.sh@20 -- # IFS=: 00:10:14.032 21:32:34 -- accel/accel.sh@20 -- # read -r var val 00:10:14.032 21:32:34 -- accel/accel.sh@21 -- # val= 00:10:14.032 21:32:34 -- accel/accel.sh@22 -- # case "$var" in 00:10:14.032 21:32:34 -- accel/accel.sh@20 -- # IFS=: 00:10:14.032 21:32:34 -- accel/accel.sh@20 -- # read -r var val 00:10:14.032 21:32:34 -- accel/accel.sh@28 -- # [[ -n software ]] 00:10:14.032 21:32:34 -- accel/accel.sh@28 -- # [[ -n copy_crc32c ]] 00:10:14.032 21:32:34 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:10:14.032 00:10:14.032 real 0m4.888s 00:10:14.032 user 0m4.384s 00:10:14.032 sys 0m0.319s 00:10:14.032 ************************************ 00:10:14.032 END TEST accel_copy_crc32c_C2 00:10:14.032 ************************************ 00:10:14.032 21:32:34 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:10:14.032 21:32:34 -- common/autotest_common.sh@10 -- # set +x 00:10:14.032 21:32:34 -- accel/accel.sh@99 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y 00:10:14.032 21:32:34 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 
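One quirk worth flagging in the copy_crc32c -C 2 output above: at the 8192-byte transfer size, 136768 transfers/s is 1068.5 MiB/s, which matches the per-core row, yet the Total row prints 534 MiB/s, exactly what the 4096-byte vector size would give. Checking both:

  awk 'BEGIN { printf "%.1f vs %.1f MiB/s\n", 136768 * 8192 / 1048576, 136768 * 4096 / 1048576 }'
  # -> 1068.5 vs 534.2 MiB/s: the Total row appears to be computed from the
  #    vector size rather than the transfer size when -C 2 splits the buffer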
00:10:14.032 21:32:34 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:14.032 21:32:34 -- common/autotest_common.sh@10 -- # set +x 00:10:14.032 ************************************ 00:10:14.033 START TEST accel_dualcast 00:10:14.033 ************************************ 00:10:14.033 21:32:34 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w dualcast -y 00:10:14.033 21:32:34 -- accel/accel.sh@16 -- # local accel_opc 00:10:14.033 21:32:34 -- accel/accel.sh@17 -- # local accel_module 00:10:14.033 21:32:34 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dualcast -y 00:10:14.033 21:32:34 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:10:14.033 21:32:34 -- accel/accel.sh@12 -- # build_accel_config 00:10:14.033 21:32:34 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:14.033 21:32:34 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:14.033 21:32:34 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:14.033 21:32:34 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:14.033 21:32:34 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:14.033 21:32:34 -- accel/accel.sh@41 -- # local IFS=, 00:10:14.033 21:32:34 -- accel/accel.sh@42 -- # jq -r . 00:10:14.033 [2024-12-06 21:32:34.300349] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:10:14.033 [2024-12-06 21:32:34.300720] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64010 ] 00:10:14.033 [2024-12-06 21:32:34.471497] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:14.291 [2024-12-06 21:32:34.649628] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:16.191 21:32:36 -- accel/accel.sh@18 -- # out=' 00:10:16.191 SPDK Configuration: 00:10:16.191 Core mask: 0x1 00:10:16.191 00:10:16.191 Accel Perf Configuration: 00:10:16.191 Workload Type: dualcast 00:10:16.191 Transfer size: 4096 bytes 00:10:16.191 Vector count 1 00:10:16.191 Module: software 00:10:16.191 Queue depth: 32 00:10:16.191 Allocate depth: 32 00:10:16.191 # threads/core: 1 00:10:16.191 Run time: 1 seconds 00:10:16.191 Verify: Yes 00:10:16.191 00:10:16.191 Running for 1 seconds... 00:10:16.191 00:10:16.191 Core,Thread Transfers Bandwidth Failed Miscompares 00:10:16.191 ------------------------------------------------------------------------------------ 00:10:16.191 0,0 276576/s 1080 MiB/s 0 0 00:10:16.191 ==================================================================================== 00:10:16.191 Total 276576/s 1080 MiB/s 0 0' 00:10:16.191 21:32:36 -- accel/accel.sh@20 -- # IFS=: 00:10:16.191 21:32:36 -- accel/accel.sh@20 -- # read -r var val 00:10:16.191 21:32:36 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y 00:10:16.191 21:32:36 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:10:16.191 21:32:36 -- accel/accel.sh@12 -- # build_accel_config 00:10:16.191 21:32:36 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:16.191 21:32:36 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:16.191 21:32:36 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:16.191 21:32:36 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:16.191 21:32:36 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:16.191 21:32:36 -- accel/accel.sh@41 -- # local IFS=, 00:10:16.191 21:32:36 -- accel/accel.sh@42 -- # jq -r . 
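Dualcast above writes one 4096-byte source to two destinations, and the numbers suggest the bandwidth column counts each transfer once rather than twice: 276576 transfers/s at 4096 bytes each is about 1080.4 MiB/s, matching the printed 1080.

  awk 'BEGIN { printf "%.1f MiB/s\n", 276576 * 4096 / 1048576 }'   # -> 1080.4 MiB/s (table: 1080)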
00:10:16.448 [2024-12-06 21:32:36.708578] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:10:16.448 [2024-12-06 21:32:36.708742] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64042 ] 00:10:16.448 [2024-12-06 21:32:36.879610] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:16.705 [2024-12-06 21:32:37.063711] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:16.963 21:32:37 -- accel/accel.sh@21 -- # val= 00:10:16.963 21:32:37 -- accel/accel.sh@22 -- # case "$var" in 00:10:16.963 21:32:37 -- accel/accel.sh@20 -- # IFS=: 00:10:16.963 21:32:37 -- accel/accel.sh@20 -- # read -r var val 00:10:16.963 21:32:37 -- accel/accel.sh@21 -- # val= 00:10:16.963 21:32:37 -- accel/accel.sh@22 -- # case "$var" in 00:10:16.963 21:32:37 -- accel/accel.sh@20 -- # IFS=: 00:10:16.963 21:32:37 -- accel/accel.sh@20 -- # read -r var val 00:10:16.963 21:32:37 -- accel/accel.sh@21 -- # val=0x1 00:10:16.963 21:32:37 -- accel/accel.sh@22 -- # case "$var" in 00:10:16.963 21:32:37 -- accel/accel.sh@20 -- # IFS=: 00:10:16.963 21:32:37 -- accel/accel.sh@20 -- # read -r var val 00:10:16.963 21:32:37 -- accel/accel.sh@21 -- # val= 00:10:16.963 21:32:37 -- accel/accel.sh@22 -- # case "$var" in 00:10:16.963 21:32:37 -- accel/accel.sh@20 -- # IFS=: 00:10:16.963 21:32:37 -- accel/accel.sh@20 -- # read -r var val 00:10:16.963 21:32:37 -- accel/accel.sh@21 -- # val= 00:10:16.963 21:32:37 -- accel/accel.sh@22 -- # case "$var" in 00:10:16.963 21:32:37 -- accel/accel.sh@20 -- # IFS=: 00:10:16.963 21:32:37 -- accel/accel.sh@20 -- # read -r var val 00:10:16.963 21:32:37 -- accel/accel.sh@21 -- # val=dualcast 00:10:16.963 21:32:37 -- accel/accel.sh@22 -- # case "$var" in 00:10:16.963 21:32:37 -- accel/accel.sh@24 -- # accel_opc=dualcast 00:10:16.963 21:32:37 -- accel/accel.sh@20 -- # IFS=: 00:10:16.963 21:32:37 -- accel/accel.sh@20 -- # read -r var val 00:10:16.963 21:32:37 -- accel/accel.sh@21 -- # val='4096 bytes' 00:10:16.963 21:32:37 -- accel/accel.sh@22 -- # case "$var" in 00:10:16.963 21:32:37 -- accel/accel.sh@20 -- # IFS=: 00:10:16.963 21:32:37 -- accel/accel.sh@20 -- # read -r var val 00:10:16.963 21:32:37 -- accel/accel.sh@21 -- # val= 00:10:16.963 21:32:37 -- accel/accel.sh@22 -- # case "$var" in 00:10:16.963 21:32:37 -- accel/accel.sh@20 -- # IFS=: 00:10:16.963 21:32:37 -- accel/accel.sh@20 -- # read -r var val 00:10:16.963 21:32:37 -- accel/accel.sh@21 -- # val=software 00:10:16.963 21:32:37 -- accel/accel.sh@22 -- # case "$var" in 00:10:16.963 21:32:37 -- accel/accel.sh@23 -- # accel_module=software 00:10:16.963 21:32:37 -- accel/accel.sh@20 -- # IFS=: 00:10:16.963 21:32:37 -- accel/accel.sh@20 -- # read -r var val 00:10:16.963 21:32:37 -- accel/accel.sh@21 -- # val=32 00:10:16.963 21:32:37 -- accel/accel.sh@22 -- # case "$var" in 00:10:16.963 21:32:37 -- accel/accel.sh@20 -- # IFS=: 00:10:16.963 21:32:37 -- accel/accel.sh@20 -- # read -r var val 00:10:16.963 21:32:37 -- accel/accel.sh@21 -- # val=32 00:10:16.963 21:32:37 -- accel/accel.sh@22 -- # case "$var" in 00:10:16.963 21:32:37 -- accel/accel.sh@20 -- # IFS=: 00:10:16.963 21:32:37 -- accel/accel.sh@20 -- # read -r var val 00:10:16.963 21:32:37 -- accel/accel.sh@21 -- # val=1 00:10:16.963 21:32:37 -- accel/accel.sh@22 -- # case "$var" in 00:10:16.963 21:32:37 -- accel/accel.sh@20 -- # IFS=: 00:10:16.963 
21:32:37 -- accel/accel.sh@20 -- # read -r var val 00:10:16.963 21:32:37 -- accel/accel.sh@21 -- # val='1 seconds' 00:10:16.963 21:32:37 -- accel/accel.sh@22 -- # case "$var" in 00:10:16.963 21:32:37 -- accel/accel.sh@20 -- # IFS=: 00:10:16.963 21:32:37 -- accel/accel.sh@20 -- # read -r var val 00:10:16.963 21:32:37 -- accel/accel.sh@21 -- # val=Yes 00:10:16.963 21:32:37 -- accel/accel.sh@22 -- # case "$var" in 00:10:16.963 21:32:37 -- accel/accel.sh@20 -- # IFS=: 00:10:16.963 21:32:37 -- accel/accel.sh@20 -- # read -r var val 00:10:16.963 21:32:37 -- accel/accel.sh@21 -- # val= 00:10:16.963 21:32:37 -- accel/accel.sh@22 -- # case "$var" in 00:10:16.963 21:32:37 -- accel/accel.sh@20 -- # IFS=: 00:10:16.963 21:32:37 -- accel/accel.sh@20 -- # read -r var val 00:10:16.963 21:32:37 -- accel/accel.sh@21 -- # val= 00:10:16.963 21:32:37 -- accel/accel.sh@22 -- # case "$var" in 00:10:16.963 21:32:37 -- accel/accel.sh@20 -- # IFS=: 00:10:16.963 21:32:37 -- accel/accel.sh@20 -- # read -r var val 00:10:18.880 21:32:39 -- accel/accel.sh@21 -- # val= 00:10:18.880 21:32:39 -- accel/accel.sh@22 -- # case "$var" in 00:10:18.880 21:32:39 -- accel/accel.sh@20 -- # IFS=: 00:10:18.880 21:32:39 -- accel/accel.sh@20 -- # read -r var val 00:10:18.880 21:32:39 -- accel/accel.sh@21 -- # val= 00:10:18.880 21:32:39 -- accel/accel.sh@22 -- # case "$var" in 00:10:18.880 21:32:39 -- accel/accel.sh@20 -- # IFS=: 00:10:18.880 21:32:39 -- accel/accel.sh@20 -- # read -r var val 00:10:18.880 21:32:39 -- accel/accel.sh@21 -- # val= 00:10:18.880 21:32:39 -- accel/accel.sh@22 -- # case "$var" in 00:10:18.880 21:32:39 -- accel/accel.sh@20 -- # IFS=: 00:10:18.880 21:32:39 -- accel/accel.sh@20 -- # read -r var val 00:10:18.880 21:32:39 -- accel/accel.sh@21 -- # val= 00:10:18.880 21:32:39 -- accel/accel.sh@22 -- # case "$var" in 00:10:18.880 21:32:39 -- accel/accel.sh@20 -- # IFS=: 00:10:18.880 21:32:39 -- accel/accel.sh@20 -- # read -r var val 00:10:18.880 21:32:39 -- accel/accel.sh@21 -- # val= 00:10:18.880 21:32:39 -- accel/accel.sh@22 -- # case "$var" in 00:10:18.880 21:32:39 -- accel/accel.sh@20 -- # IFS=: 00:10:18.880 21:32:39 -- accel/accel.sh@20 -- # read -r var val 00:10:18.880 21:32:39 -- accel/accel.sh@21 -- # val= 00:10:18.880 21:32:39 -- accel/accel.sh@22 -- # case "$var" in 00:10:18.880 21:32:39 -- accel/accel.sh@20 -- # IFS=: 00:10:18.880 21:32:39 -- accel/accel.sh@20 -- # read -r var val 00:10:18.880 21:32:39 -- accel/accel.sh@28 -- # [[ -n software ]] 00:10:18.880 21:32:39 -- accel/accel.sh@28 -- # [[ -n dualcast ]] 00:10:18.880 21:32:39 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:10:18.880 00:10:18.880 real 0m4.789s 00:10:18.880 user 0m4.254s 00:10:18.880 sys 0m0.347s 00:10:18.880 21:32:39 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:10:18.880 21:32:39 -- common/autotest_common.sh@10 -- # set +x 00:10:18.880 ************************************ 00:10:18.880 END TEST accel_dualcast 00:10:18.880 ************************************ 00:10:18.880 21:32:39 -- accel/accel.sh@100 -- # run_test accel_compare accel_test -t 1 -w compare -y 00:10:18.880 21:32:39 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:10:18.880 21:32:39 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:18.880 21:32:39 -- common/autotest_common.sh@10 -- # set +x 00:10:18.880 ************************************ 00:10:18.880 START TEST accel_compare 00:10:18.880 ************************************ 00:10:18.880 21:32:39 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w compare -y 00:10:18.880 
21:32:39 -- accel/accel.sh@16 -- # local accel_opc 00:10:18.880 21:32:39 -- accel/accel.sh@17 -- # local accel_module 00:10:18.880 21:32:39 -- accel/accel.sh@18 -- # accel_perf -t 1 -w compare -y 00:10:18.880 21:32:39 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:10:18.880 21:32:39 -- accel/accel.sh@12 -- # build_accel_config 00:10:18.880 21:32:39 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:18.880 21:32:39 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:18.880 21:32:39 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:18.880 21:32:39 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:18.880 21:32:39 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:18.880 21:32:39 -- accel/accel.sh@41 -- # local IFS=, 00:10:18.880 21:32:39 -- accel/accel.sh@42 -- # jq -r . 00:10:18.880 [2024-12-06 21:32:39.151627] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:10:18.880 [2024-12-06 21:32:39.151855] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64083 ] 00:10:18.880 [2024-12-06 21:32:39.324165] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:19.157 [2024-12-06 21:32:39.511648] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:21.680 21:32:41 -- accel/accel.sh@18 -- # out=' 00:10:21.680 SPDK Configuration: 00:10:21.680 Core mask: 0x1 00:10:21.680 00:10:21.680 Accel Perf Configuration: 00:10:21.680 Workload Type: compare 00:10:21.680 Transfer size: 4096 bytes 00:10:21.680 Vector count 1 00:10:21.680 Module: software 00:10:21.680 Queue depth: 32 00:10:21.680 Allocate depth: 32 00:10:21.680 # threads/core: 1 00:10:21.680 Run time: 1 seconds 00:10:21.680 Verify: Yes 00:10:21.680 00:10:21.680 Running for 1 seconds... 00:10:21.680 00:10:21.680 Core,Thread Transfers Bandwidth Failed Miscompares 00:10:21.680 ------------------------------------------------------------------------------------ 00:10:21.680 0,0 366368/s 1431 MiB/s 0 0 00:10:21.680 ==================================================================================== 00:10:21.680 Total 366368/s 1431 MiB/s 0 0' 00:10:21.680 21:32:41 -- accel/accel.sh@20 -- # IFS=: 00:10:21.680 21:32:41 -- accel/accel.sh@20 -- # read -r var val 00:10:21.680 21:32:41 -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y 00:10:21.680 21:32:41 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:10:21.680 21:32:41 -- accel/accel.sh@12 -- # build_accel_config 00:10:21.680 21:32:41 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:21.680 21:32:41 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:21.680 21:32:41 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:21.680 21:32:41 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:21.680 21:32:41 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:21.680 21:32:41 -- accel/accel.sh@41 -- # local IFS=, 00:10:21.680 21:32:41 -- accel/accel.sh@42 -- # jq -r . 00:10:21.680 [2024-12-06 21:32:41.589760] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:10:21.680 [2024-12-06 21:32:41.589910] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64113 ] 00:10:21.680 [2024-12-06 21:32:41.757593] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:21.680 [2024-12-06 21:32:41.927946] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:21.680 21:32:42 -- accel/accel.sh@21 -- # val= 00:10:21.680 21:32:42 -- accel/accel.sh@22 -- # case "$var" in 00:10:21.680 21:32:42 -- accel/accel.sh@20 -- # IFS=: 00:10:21.680 21:32:42 -- accel/accel.sh@20 -- # read -r var val 00:10:21.680 21:32:42 -- accel/accel.sh@21 -- # val= 00:10:21.680 21:32:42 -- accel/accel.sh@22 -- # case "$var" in 00:10:21.680 21:32:42 -- accel/accel.sh@20 -- # IFS=: 00:10:21.680 21:32:42 -- accel/accel.sh@20 -- # read -r var val 00:10:21.680 21:32:42 -- accel/accel.sh@21 -- # val=0x1 00:10:21.680 21:32:42 -- accel/accel.sh@22 -- # case "$var" in 00:10:21.680 21:32:42 -- accel/accel.sh@20 -- # IFS=: 00:10:21.680 21:32:42 -- accel/accel.sh@20 -- # read -r var val 00:10:21.680 21:32:42 -- accel/accel.sh@21 -- # val= 00:10:21.680 21:32:42 -- accel/accel.sh@22 -- # case "$var" in 00:10:21.680 21:32:42 -- accel/accel.sh@20 -- # IFS=: 00:10:21.680 21:32:42 -- accel/accel.sh@20 -- # read -r var val 00:10:21.680 21:32:42 -- accel/accel.sh@21 -- # val= 00:10:21.680 21:32:42 -- accel/accel.sh@22 -- # case "$var" in 00:10:21.680 21:32:42 -- accel/accel.sh@20 -- # IFS=: 00:10:21.680 21:32:42 -- accel/accel.sh@20 -- # read -r var val 00:10:21.680 21:32:42 -- accel/accel.sh@21 -- # val=compare 00:10:21.680 21:32:42 -- accel/accel.sh@22 -- # case "$var" in 00:10:21.680 21:32:42 -- accel/accel.sh@24 -- # accel_opc=compare 00:10:21.680 21:32:42 -- accel/accel.sh@20 -- # IFS=: 00:10:21.680 21:32:42 -- accel/accel.sh@20 -- # read -r var val 00:10:21.680 21:32:42 -- accel/accel.sh@21 -- # val='4096 bytes' 00:10:21.680 21:32:42 -- accel/accel.sh@22 -- # case "$var" in 00:10:21.680 21:32:42 -- accel/accel.sh@20 -- # IFS=: 00:10:21.680 21:32:42 -- accel/accel.sh@20 -- # read -r var val 00:10:21.680 21:32:42 -- accel/accel.sh@21 -- # val= 00:10:21.680 21:32:42 -- accel/accel.sh@22 -- # case "$var" in 00:10:21.680 21:32:42 -- accel/accel.sh@20 -- # IFS=: 00:10:21.680 21:32:42 -- accel/accel.sh@20 -- # read -r var val 00:10:21.680 21:32:42 -- accel/accel.sh@21 -- # val=software 00:10:21.680 21:32:42 -- accel/accel.sh@22 -- # case "$var" in 00:10:21.680 21:32:42 -- accel/accel.sh@23 -- # accel_module=software 00:10:21.680 21:32:42 -- accel/accel.sh@20 -- # IFS=: 00:10:21.680 21:32:42 -- accel/accel.sh@20 -- # read -r var val 00:10:21.680 21:32:42 -- accel/accel.sh@21 -- # val=32 00:10:21.680 21:32:42 -- accel/accel.sh@22 -- # case "$var" in 00:10:21.680 21:32:42 -- accel/accel.sh@20 -- # IFS=: 00:10:21.680 21:32:42 -- accel/accel.sh@20 -- # read -r var val 00:10:21.680 21:32:42 -- accel/accel.sh@21 -- # val=32 00:10:21.680 21:32:42 -- accel/accel.sh@22 -- # case "$var" in 00:10:21.680 21:32:42 -- accel/accel.sh@20 -- # IFS=: 00:10:21.680 21:32:42 -- accel/accel.sh@20 -- # read -r var val 00:10:21.680 21:32:42 -- accel/accel.sh@21 -- # val=1 00:10:21.680 21:32:42 -- accel/accel.sh@22 -- # case "$var" in 00:10:21.680 21:32:42 -- accel/accel.sh@20 -- # IFS=: 00:10:21.680 21:32:42 -- accel/accel.sh@20 -- # read -r var val 00:10:21.680 21:32:42 -- accel/accel.sh@21 -- # val='1 seconds' 
00:10:21.680 21:32:42 -- accel/accel.sh@22 -- # case "$var" in 00:10:21.680 21:32:42 -- accel/accel.sh@20 -- # IFS=: 00:10:21.680 21:32:42 -- accel/accel.sh@20 -- # read -r var val 00:10:21.680 21:32:42 -- accel/accel.sh@21 -- # val=Yes 00:10:21.680 21:32:42 -- accel/accel.sh@22 -- # case "$var" in 00:10:21.680 21:32:42 -- accel/accel.sh@20 -- # IFS=: 00:10:21.680 21:32:42 -- accel/accel.sh@20 -- # read -r var val 00:10:21.680 21:32:42 -- accel/accel.sh@21 -- # val= 00:10:21.680 21:32:42 -- accel/accel.sh@22 -- # case "$var" in 00:10:21.680 21:32:42 -- accel/accel.sh@20 -- # IFS=: 00:10:21.680 21:32:42 -- accel/accel.sh@20 -- # read -r var val 00:10:21.680 21:32:42 -- accel/accel.sh@21 -- # val= 00:10:21.680 21:32:42 -- accel/accel.sh@22 -- # case "$var" in 00:10:21.680 21:32:42 -- accel/accel.sh@20 -- # IFS=: 00:10:21.680 21:32:42 -- accel/accel.sh@20 -- # read -r var val 00:10:23.582 21:32:43 -- accel/accel.sh@21 -- # val= 00:10:23.582 21:32:43 -- accel/accel.sh@22 -- # case "$var" in 00:10:23.582 21:32:43 -- accel/accel.sh@20 -- # IFS=: 00:10:23.582 21:32:43 -- accel/accel.sh@20 -- # read -r var val 00:10:23.582 21:32:43 -- accel/accel.sh@21 -- # val= 00:10:23.582 21:32:43 -- accel/accel.sh@22 -- # case "$var" in 00:10:23.582 21:32:43 -- accel/accel.sh@20 -- # IFS=: 00:10:23.582 21:32:43 -- accel/accel.sh@20 -- # read -r var val 00:10:23.582 21:32:43 -- accel/accel.sh@21 -- # val= 00:10:23.582 21:32:43 -- accel/accel.sh@22 -- # case "$var" in 00:10:23.582 21:32:43 -- accel/accel.sh@20 -- # IFS=: 00:10:23.582 21:32:43 -- accel/accel.sh@20 -- # read -r var val 00:10:23.582 21:32:43 -- accel/accel.sh@21 -- # val= 00:10:23.582 21:32:43 -- accel/accel.sh@22 -- # case "$var" in 00:10:23.582 21:32:43 -- accel/accel.sh@20 -- # IFS=: 00:10:23.582 21:32:43 -- accel/accel.sh@20 -- # read -r var val 00:10:23.582 21:32:43 -- accel/accel.sh@21 -- # val= 00:10:23.582 21:32:43 -- accel/accel.sh@22 -- # case "$var" in 00:10:23.582 21:32:43 -- accel/accel.sh@20 -- # IFS=: 00:10:23.582 21:32:43 -- accel/accel.sh@20 -- # read -r var val 00:10:23.582 21:32:43 -- accel/accel.sh@21 -- # val= 00:10:23.582 21:32:43 -- accel/accel.sh@22 -- # case "$var" in 00:10:23.582 21:32:43 -- accel/accel.sh@20 -- # IFS=: 00:10:23.582 21:32:43 -- accel/accel.sh@20 -- # read -r var val 00:10:23.582 21:32:43 -- accel/accel.sh@28 -- # [[ -n software ]] 00:10:23.582 ************************************ 00:10:23.582 END TEST accel_compare 00:10:23.582 ************************************ 00:10:23.583 21:32:43 -- accel/accel.sh@28 -- # [[ -n compare ]] 00:10:23.583 21:32:43 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:10:23.583 00:10:23.583 real 0m4.779s 00:10:23.583 user 0m4.249s 00:10:23.583 sys 0m0.341s 00:10:23.583 21:32:43 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:10:23.583 21:32:43 -- common/autotest_common.sh@10 -- # set +x 00:10:23.583 21:32:43 -- accel/accel.sh@101 -- # run_test accel_xor accel_test -t 1 -w xor -y 00:10:23.583 21:32:43 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:10:23.583 21:32:43 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:23.583 21:32:43 -- common/autotest_common.sh@10 -- # set +x 00:10:23.583 ************************************ 00:10:23.583 START TEST accel_xor 00:10:23.583 ************************************ 00:10:23.583 21:32:43 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w xor -y 00:10:23.583 21:32:43 -- accel/accel.sh@16 -- # local accel_opc 00:10:23.583 21:32:43 -- accel/accel.sh@17 -- # local accel_module 00:10:23.583 
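The compare test that just finished above (real 0m4.779s for both runs plus setup) sustained 366368 transfers/s, the second-highest rate of the software ops in this stretch after fill; at 4096 bytes per transfer:

  awk 'BEGIN { printf "%.1f MiB/s\n", 366368 * 4096 / 1048576 }'   # -> 1431.1 MiB/s (table: 1431)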
21:32:43 -- accel/accel.sh@18 -- # accel_perf -t 1 -w xor -y 00:10:23.583 21:32:43 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:10:23.583 21:32:43 -- accel/accel.sh@12 -- # build_accel_config 00:10:23.583 21:32:43 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:23.583 21:32:43 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:23.583 21:32:43 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:23.583 21:32:43 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:23.583 21:32:43 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:23.583 21:32:43 -- accel/accel.sh@41 -- # local IFS=, 00:10:23.583 21:32:43 -- accel/accel.sh@42 -- # jq -r . 00:10:23.583 [2024-12-06 21:32:43.960239] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:10:23.583 [2024-12-06 21:32:43.960362] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64161 ] 00:10:23.840 [2024-12-06 21:32:44.116387] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:23.840 [2024-12-06 21:32:44.282889] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:25.737 21:32:46 -- accel/accel.sh@18 -- # out=' 00:10:25.737 SPDK Configuration: 00:10:25.737 Core mask: 0x1 00:10:25.737 00:10:25.737 Accel Perf Configuration: 00:10:25.737 Workload Type: xor 00:10:25.737 Source buffers: 2 00:10:25.737 Transfer size: 4096 bytes 00:10:25.737 Vector count 1 00:10:25.737 Module: software 00:10:25.737 Queue depth: 32 00:10:25.737 Allocate depth: 32 00:10:25.737 # threads/core: 1 00:10:25.737 Run time: 1 seconds 00:10:25.737 Verify: Yes 00:10:25.737 00:10:25.737 Running for 1 seconds... 00:10:25.737 00:10:25.737 Core,Thread Transfers Bandwidth Failed Miscompares 00:10:25.737 ------------------------------------------------------------------------------------ 00:10:25.737 0,0 213184/s 832 MiB/s 0 0 00:10:25.737 ==================================================================================== 00:10:25.737 Total 213184/s 832 MiB/s 0 0' 00:10:25.737 21:32:46 -- accel/accel.sh@20 -- # IFS=: 00:10:25.738 21:32:46 -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y 00:10:25.738 21:32:46 -- accel/accel.sh@20 -- # read -r var val 00:10:25.738 21:32:46 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:10:25.738 21:32:46 -- accel/accel.sh@12 -- # build_accel_config 00:10:25.738 21:32:46 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:25.738 21:32:46 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:25.738 21:32:46 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:25.738 21:32:46 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:25.738 21:32:46 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:25.738 21:32:46 -- accel/accel.sh@41 -- # local IFS=, 00:10:25.738 21:32:46 -- accel/accel.sh@42 -- # jq -r . 00:10:25.995 [2024-12-06 21:32:46.264589] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:10:25.995 [2024-12-06 21:32:46.264756] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64187 ] 00:10:25.995 [2024-12-06 21:32:46.434289] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:26.252 [2024-12-06 21:32:46.593995] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:26.252 21:32:46 -- accel/accel.sh@21 -- # val= 00:10:26.508 21:32:46 -- accel/accel.sh@22 -- # case "$var" in 00:10:26.508 21:32:46 -- accel/accel.sh@20 -- # IFS=: 00:10:26.508 21:32:46 -- accel/accel.sh@20 -- # read -r var val 00:10:26.508 21:32:46 -- accel/accel.sh@21 -- # val= 00:10:26.508 21:32:46 -- accel/accel.sh@22 -- # case "$var" in 00:10:26.508 21:32:46 -- accel/accel.sh@20 -- # IFS=: 00:10:26.508 21:32:46 -- accel/accel.sh@20 -- # read -r var val 00:10:26.508 21:32:46 -- accel/accel.sh@21 -- # val=0x1 00:10:26.508 21:32:46 -- accel/accel.sh@22 -- # case "$var" in 00:10:26.508 21:32:46 -- accel/accel.sh@20 -- # IFS=: 00:10:26.508 21:32:46 -- accel/accel.sh@20 -- # read -r var val 00:10:26.508 21:32:46 -- accel/accel.sh@21 -- # val= 00:10:26.508 21:32:46 -- accel/accel.sh@22 -- # case "$var" in 00:10:26.508 21:32:46 -- accel/accel.sh@20 -- # IFS=: 00:10:26.508 21:32:46 -- accel/accel.sh@20 -- # read -r var val 00:10:26.508 21:32:46 -- accel/accel.sh@21 -- # val= 00:10:26.508 21:32:46 -- accel/accel.sh@22 -- # case "$var" in 00:10:26.508 21:32:46 -- accel/accel.sh@20 -- # IFS=: 00:10:26.508 21:32:46 -- accel/accel.sh@20 -- # read -r var val 00:10:26.508 21:32:46 -- accel/accel.sh@21 -- # val=xor 00:10:26.508 21:32:46 -- accel/accel.sh@22 -- # case "$var" in 00:10:26.508 21:32:46 -- accel/accel.sh@24 -- # accel_opc=xor 00:10:26.508 21:32:46 -- accel/accel.sh@20 -- # IFS=: 00:10:26.508 21:32:46 -- accel/accel.sh@20 -- # read -r var val 00:10:26.508 21:32:46 -- accel/accel.sh@21 -- # val=2 00:10:26.508 21:32:46 -- accel/accel.sh@22 -- # case "$var" in 00:10:26.508 21:32:46 -- accel/accel.sh@20 -- # IFS=: 00:10:26.508 21:32:46 -- accel/accel.sh@20 -- # read -r var val 00:10:26.508 21:32:46 -- accel/accel.sh@21 -- # val='4096 bytes' 00:10:26.508 21:32:46 -- accel/accel.sh@22 -- # case "$var" in 00:10:26.508 21:32:46 -- accel/accel.sh@20 -- # IFS=: 00:10:26.508 21:32:46 -- accel/accel.sh@20 -- # read -r var val 00:10:26.508 21:32:46 -- accel/accel.sh@21 -- # val= 00:10:26.508 21:32:46 -- accel/accel.sh@22 -- # case "$var" in 00:10:26.508 21:32:46 -- accel/accel.sh@20 -- # IFS=: 00:10:26.508 21:32:46 -- accel/accel.sh@20 -- # read -r var val 00:10:26.508 21:32:46 -- accel/accel.sh@21 -- # val=software 00:10:26.508 21:32:46 -- accel/accel.sh@22 -- # case "$var" in 00:10:26.508 21:32:46 -- accel/accel.sh@23 -- # accel_module=software 00:10:26.508 21:32:46 -- accel/accel.sh@20 -- # IFS=: 00:10:26.508 21:32:46 -- accel/accel.sh@20 -- # read -r var val 00:10:26.508 21:32:46 -- accel/accel.sh@21 -- # val=32 00:10:26.508 21:32:46 -- accel/accel.sh@22 -- # case "$var" in 00:10:26.508 21:32:46 -- accel/accel.sh@20 -- # IFS=: 00:10:26.508 21:32:46 -- accel/accel.sh@20 -- # read -r var val 00:10:26.508 21:32:46 -- accel/accel.sh@21 -- # val=32 00:10:26.508 21:32:46 -- accel/accel.sh@22 -- # case "$var" in 00:10:26.508 21:32:46 -- accel/accel.sh@20 -- # IFS=: 00:10:26.508 21:32:46 -- accel/accel.sh@20 -- # read -r var val 00:10:26.508 21:32:46 -- accel/accel.sh@21 -- # val=1 00:10:26.508 21:32:46 -- 
accel/accel.sh@22 -- # case "$var" in 00:10:26.508 21:32:46 -- accel/accel.sh@20 -- # IFS=: 00:10:26.508 21:32:46 -- accel/accel.sh@20 -- # read -r var val 00:10:26.508 21:32:46 -- accel/accel.sh@21 -- # val='1 seconds' 00:10:26.508 21:32:46 -- accel/accel.sh@22 -- # case "$var" in 00:10:26.508 21:32:46 -- accel/accel.sh@20 -- # IFS=: 00:10:26.508 21:32:46 -- accel/accel.sh@20 -- # read -r var val 00:10:26.508 21:32:46 -- accel/accel.sh@21 -- # val=Yes 00:10:26.508 21:32:46 -- accel/accel.sh@22 -- # case "$var" in 00:10:26.508 21:32:46 -- accel/accel.sh@20 -- # IFS=: 00:10:26.508 21:32:46 -- accel/accel.sh@20 -- # read -r var val 00:10:26.508 21:32:46 -- accel/accel.sh@21 -- # val= 00:10:26.508 21:32:46 -- accel/accel.sh@22 -- # case "$var" in 00:10:26.508 21:32:46 -- accel/accel.sh@20 -- # IFS=: 00:10:26.508 21:32:46 -- accel/accel.sh@20 -- # read -r var val 00:10:26.508 21:32:46 -- accel/accel.sh@21 -- # val= 00:10:26.508 21:32:46 -- accel/accel.sh@22 -- # case "$var" in 00:10:26.508 21:32:46 -- accel/accel.sh@20 -- # IFS=: 00:10:26.508 21:32:46 -- accel/accel.sh@20 -- # read -r var val 00:10:28.404 21:32:48 -- accel/accel.sh@21 -- # val= 00:10:28.404 21:32:48 -- accel/accel.sh@22 -- # case "$var" in 00:10:28.404 21:32:48 -- accel/accel.sh@20 -- # IFS=: 00:10:28.404 21:32:48 -- accel/accel.sh@20 -- # read -r var val 00:10:28.404 21:32:48 -- accel/accel.sh@21 -- # val= 00:10:28.404 21:32:48 -- accel/accel.sh@22 -- # case "$var" in 00:10:28.404 21:32:48 -- accel/accel.sh@20 -- # IFS=: 00:10:28.404 21:32:48 -- accel/accel.sh@20 -- # read -r var val 00:10:28.404 21:32:48 -- accel/accel.sh@21 -- # val= 00:10:28.404 21:32:48 -- accel/accel.sh@22 -- # case "$var" in 00:10:28.404 21:32:48 -- accel/accel.sh@20 -- # IFS=: 00:10:28.404 21:32:48 -- accel/accel.sh@20 -- # read -r var val 00:10:28.404 21:32:48 -- accel/accel.sh@21 -- # val= 00:10:28.404 21:32:48 -- accel/accel.sh@22 -- # case "$var" in 00:10:28.404 21:32:48 -- accel/accel.sh@20 -- # IFS=: 00:10:28.404 21:32:48 -- accel/accel.sh@20 -- # read -r var val 00:10:28.404 21:32:48 -- accel/accel.sh@21 -- # val= 00:10:28.404 21:32:48 -- accel/accel.sh@22 -- # case "$var" in 00:10:28.404 21:32:48 -- accel/accel.sh@20 -- # IFS=: 00:10:28.404 21:32:48 -- accel/accel.sh@20 -- # read -r var val 00:10:28.404 21:32:48 -- accel/accel.sh@21 -- # val= 00:10:28.404 21:32:48 -- accel/accel.sh@22 -- # case "$var" in 00:10:28.404 21:32:48 -- accel/accel.sh@20 -- # IFS=: 00:10:28.404 21:32:48 -- accel/accel.sh@20 -- # read -r var val 00:10:28.404 21:32:48 -- accel/accel.sh@28 -- # [[ -n software ]] 00:10:28.404 21:32:48 -- accel/accel.sh@28 -- # [[ -n xor ]] 00:10:28.404 21:32:48 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:10:28.404 00:10:28.404 real 0m4.597s 00:10:28.404 user 0m4.091s 00:10:28.404 sys 0m0.320s 00:10:28.404 21:32:48 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:10:28.404 21:32:48 -- common/autotest_common.sh@10 -- # set +x 00:10:28.404 ************************************ 00:10:28.404 END TEST accel_xor 00:10:28.404 ************************************ 00:10:28.404 21:32:48 -- accel/accel.sh@102 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3 00:10:28.404 21:32:48 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:10:28.404 21:32:48 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:28.404 21:32:48 -- common/autotest_common.sh@10 -- # set +x 00:10:28.404 ************************************ 00:10:28.404 START TEST accel_xor 00:10:28.404 ************************************ 00:10:28.404 
21:32:48 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w xor -y -x 3 00:10:28.404 21:32:48 -- accel/accel.sh@16 -- # local accel_opc 00:10:28.404 21:32:48 -- accel/accel.sh@17 -- # local accel_module 00:10:28.404 21:32:48 -- accel/accel.sh@18 -- # accel_perf -t 1 -w xor -y -x 3 00:10:28.404 21:32:48 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:10:28.404 21:32:48 -- accel/accel.sh@12 -- # build_accel_config 00:10:28.404 21:32:48 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:28.404 21:32:48 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:28.404 21:32:48 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:28.404 21:32:48 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:28.404 21:32:48 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:28.404 21:32:48 -- accel/accel.sh@41 -- # local IFS=, 00:10:28.404 21:32:48 -- accel/accel.sh@42 -- # jq -r . 00:10:28.404 [2024-12-06 21:32:48.608956] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:10:28.405 [2024-12-06 21:32:48.609103] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64228 ] 00:10:28.405 [2024-12-06 21:32:48.779267] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:28.662 [2024-12-06 21:32:48.945257] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:30.561 21:32:50 -- accel/accel.sh@18 -- # out=' 00:10:30.561 SPDK Configuration: 00:10:30.561 Core mask: 0x1 00:10:30.561 00:10:30.561 Accel Perf Configuration: 00:10:30.561 Workload Type: xor 00:10:30.561 Source buffers: 3 00:10:30.561 Transfer size: 4096 bytes 00:10:30.561 Vector count 1 00:10:30.561 Module: software 00:10:30.561 Queue depth: 32 00:10:30.561 Allocate depth: 32 00:10:30.561 # threads/core: 1 00:10:30.561 Run time: 1 seconds 00:10:30.561 Verify: Yes 00:10:30.561 00:10:30.561 Running for 1 seconds... 00:10:30.561 00:10:30.561 Core,Thread Transfers Bandwidth Failed Miscompares 00:10:30.561 ------------------------------------------------------------------------------------ 00:10:30.561 0,0 206496/s 806 MiB/s 0 0 00:10:30.561 ==================================================================================== 00:10:30.561 Total 206496/s 806 MiB/s 0 0' 00:10:30.561 21:32:50 -- accel/accel.sh@20 -- # IFS=: 00:10:30.561 21:32:50 -- accel/accel.sh@20 -- # read -r var val 00:10:30.561 21:32:50 -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3 00:10:30.561 21:32:50 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:10:30.561 21:32:50 -- accel/accel.sh@12 -- # build_accel_config 00:10:30.561 21:32:50 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:30.561 21:32:50 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:30.561 21:32:50 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:30.561 21:32:50 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:30.561 21:32:50 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:30.561 21:32:50 -- accel/accel.sh@41 -- # local IFS=, 00:10:30.561 21:32:50 -- accel/accel.sh@42 -- # jq -r . 00:10:30.561 [2024-12-06 21:32:50.914188] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:10:30.561 [2024-12-06 21:32:50.914344] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64260 ] 00:10:30.819 [2024-12-06 21:32:51.082523] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:30.819 [2024-12-06 21:32:51.242355] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:31.077 21:32:51 -- accel/accel.sh@21 -- # val= 00:10:31.077 21:32:51 -- accel/accel.sh@22 -- # case "$var" in 00:10:31.077 21:32:51 -- accel/accel.sh@20 -- # IFS=: 00:10:31.077 21:32:51 -- accel/accel.sh@20 -- # read -r var val 00:10:31.077 21:32:51 -- accel/accel.sh@21 -- # val= 00:10:31.077 21:32:51 -- accel/accel.sh@22 -- # case "$var" in 00:10:31.077 21:32:51 -- accel/accel.sh@20 -- # IFS=: 00:10:31.077 21:32:51 -- accel/accel.sh@20 -- # read -r var val 00:10:31.077 21:32:51 -- accel/accel.sh@21 -- # val=0x1 00:10:31.077 21:32:51 -- accel/accel.sh@22 -- # case "$var" in 00:10:31.077 21:32:51 -- accel/accel.sh@20 -- # IFS=: 00:10:31.077 21:32:51 -- accel/accel.sh@20 -- # read -r var val 00:10:31.077 21:32:51 -- accel/accel.sh@21 -- # val= 00:10:31.077 21:32:51 -- accel/accel.sh@22 -- # case "$var" in 00:10:31.077 21:32:51 -- accel/accel.sh@20 -- # IFS=: 00:10:31.077 21:32:51 -- accel/accel.sh@20 -- # read -r var val 00:10:31.077 21:32:51 -- accel/accel.sh@21 -- # val= 00:10:31.077 21:32:51 -- accel/accel.sh@22 -- # case "$var" in 00:10:31.077 21:32:51 -- accel/accel.sh@20 -- # IFS=: 00:10:31.077 21:32:51 -- accel/accel.sh@20 -- # read -r var val 00:10:31.077 21:32:51 -- accel/accel.sh@21 -- # val=xor 00:10:31.077 21:32:51 -- accel/accel.sh@22 -- # case "$var" in 00:10:31.077 21:32:51 -- accel/accel.sh@24 -- # accel_opc=xor 00:10:31.077 21:32:51 -- accel/accel.sh@20 -- # IFS=: 00:10:31.077 21:32:51 -- accel/accel.sh@20 -- # read -r var val 00:10:31.077 21:32:51 -- accel/accel.sh@21 -- # val=3 00:10:31.077 21:32:51 -- accel/accel.sh@22 -- # case "$var" in 00:10:31.077 21:32:51 -- accel/accel.sh@20 -- # IFS=: 00:10:31.077 21:32:51 -- accel/accel.sh@20 -- # read -r var val 00:10:31.077 21:32:51 -- accel/accel.sh@21 -- # val='4096 bytes' 00:10:31.077 21:32:51 -- accel/accel.sh@22 -- # case "$var" in 00:10:31.077 21:32:51 -- accel/accel.sh@20 -- # IFS=: 00:10:31.077 21:32:51 -- accel/accel.sh@20 -- # read -r var val 00:10:31.077 21:32:51 -- accel/accel.sh@21 -- # val= 00:10:31.077 21:32:51 -- accel/accel.sh@22 -- # case "$var" in 00:10:31.077 21:32:51 -- accel/accel.sh@20 -- # IFS=: 00:10:31.077 21:32:51 -- accel/accel.sh@20 -- # read -r var val 00:10:31.077 21:32:51 -- accel/accel.sh@21 -- # val=software 00:10:31.077 21:32:51 -- accel/accel.sh@22 -- # case "$var" in 00:10:31.077 21:32:51 -- accel/accel.sh@23 -- # accel_module=software 00:10:31.077 21:32:51 -- accel/accel.sh@20 -- # IFS=: 00:10:31.077 21:32:51 -- accel/accel.sh@20 -- # read -r var val 00:10:31.077 21:32:51 -- accel/accel.sh@21 -- # val=32 00:10:31.077 21:32:51 -- accel/accel.sh@22 -- # case "$var" in 00:10:31.077 21:32:51 -- accel/accel.sh@20 -- # IFS=: 00:10:31.077 21:32:51 -- accel/accel.sh@20 -- # read -r var val 00:10:31.077 21:32:51 -- accel/accel.sh@21 -- # val=32 00:10:31.077 21:32:51 -- accel/accel.sh@22 -- # case "$var" in 00:10:31.077 21:32:51 -- accel/accel.sh@20 -- # IFS=: 00:10:31.077 21:32:51 -- accel/accel.sh@20 -- # read -r var val 00:10:31.077 21:32:51 -- accel/accel.sh@21 -- # val=1 00:10:31.077 21:32:51 -- 
accel/accel.sh@22 -- # case "$var" in 00:10:31.077 21:32:51 -- accel/accel.sh@20 -- # IFS=: 00:10:31.077 21:32:51 -- accel/accel.sh@20 -- # read -r var val 00:10:31.077 21:32:51 -- accel/accel.sh@21 -- # val='1 seconds' 00:10:31.077 21:32:51 -- accel/accel.sh@22 -- # case "$var" in 00:10:31.077 21:32:51 -- accel/accel.sh@20 -- # IFS=: 00:10:31.077 21:32:51 -- accel/accel.sh@20 -- # read -r var val 00:10:31.077 21:32:51 -- accel/accel.sh@21 -- # val=Yes 00:10:31.077 21:32:51 -- accel/accel.sh@22 -- # case "$var" in 00:10:31.077 21:32:51 -- accel/accel.sh@20 -- # IFS=: 00:10:31.077 21:32:51 -- accel/accel.sh@20 -- # read -r var val 00:10:31.077 21:32:51 -- accel/accel.sh@21 -- # val= 00:10:31.077 21:32:51 -- accel/accel.sh@22 -- # case "$var" in 00:10:31.077 21:32:51 -- accel/accel.sh@20 -- # IFS=: 00:10:31.077 21:32:51 -- accel/accel.sh@20 -- # read -r var val 00:10:31.077 21:32:51 -- accel/accel.sh@21 -- # val= 00:10:31.077 21:32:51 -- accel/accel.sh@22 -- # case "$var" in 00:10:31.077 21:32:51 -- accel/accel.sh@20 -- # IFS=: 00:10:31.077 21:32:51 -- accel/accel.sh@20 -- # read -r var val 00:10:32.979 21:32:53 -- accel/accel.sh@21 -- # val= 00:10:32.979 21:32:53 -- accel/accel.sh@22 -- # case "$var" in 00:10:32.979 21:32:53 -- accel/accel.sh@20 -- # IFS=: 00:10:32.979 21:32:53 -- accel/accel.sh@20 -- # read -r var val 00:10:32.979 21:32:53 -- accel/accel.sh@21 -- # val= 00:10:32.979 21:32:53 -- accel/accel.sh@22 -- # case "$var" in 00:10:32.979 21:32:53 -- accel/accel.sh@20 -- # IFS=: 00:10:32.979 21:32:53 -- accel/accel.sh@20 -- # read -r var val 00:10:32.979 21:32:53 -- accel/accel.sh@21 -- # val= 00:10:32.979 21:32:53 -- accel/accel.sh@22 -- # case "$var" in 00:10:32.979 21:32:53 -- accel/accel.sh@20 -- # IFS=: 00:10:32.979 21:32:53 -- accel/accel.sh@20 -- # read -r var val 00:10:32.979 21:32:53 -- accel/accel.sh@21 -- # val= 00:10:32.979 21:32:53 -- accel/accel.sh@22 -- # case "$var" in 00:10:32.979 21:32:53 -- accel/accel.sh@20 -- # IFS=: 00:10:32.979 21:32:53 -- accel/accel.sh@20 -- # read -r var val 00:10:32.979 21:32:53 -- accel/accel.sh@21 -- # val= 00:10:32.979 21:32:53 -- accel/accel.sh@22 -- # case "$var" in 00:10:32.979 21:32:53 -- accel/accel.sh@20 -- # IFS=: 00:10:32.979 21:32:53 -- accel/accel.sh@20 -- # read -r var val 00:10:32.979 21:32:53 -- accel/accel.sh@21 -- # val= 00:10:32.979 21:32:53 -- accel/accel.sh@22 -- # case "$var" in 00:10:32.979 21:32:53 -- accel/accel.sh@20 -- # IFS=: 00:10:32.979 21:32:53 -- accel/accel.sh@20 -- # read -r var val 00:10:32.979 21:32:53 -- accel/accel.sh@28 -- # [[ -n software ]] 00:10:32.979 21:32:53 -- accel/accel.sh@28 -- # [[ -n xor ]] 00:10:32.979 21:32:53 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:10:32.979 00:10:32.979 real 0m4.615s 00:10:32.979 user 0m4.085s 00:10:32.979 sys 0m0.345s 00:10:32.979 21:32:53 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:10:32.979 21:32:53 -- common/autotest_common.sh@10 -- # set +x 00:10:32.979 ************************************ 00:10:32.979 END TEST accel_xor 00:10:32.979 ************************************ 00:10:32.979 21:32:53 -- accel/accel.sh@103 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 00:10:32.979 21:32:53 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:10:32.979 21:32:53 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:32.979 21:32:53 -- common/autotest_common.sh@10 -- # set +x 00:10:32.979 ************************************ 00:10:32.979 START TEST accel_dif_verify 00:10:32.979 ************************************ 
00:10:32.979 21:32:53 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w dif_verify 00:10:32.979 21:32:53 -- accel/accel.sh@16 -- # local accel_opc 00:10:32.979 21:32:53 -- accel/accel.sh@17 -- # local accel_module 00:10:32.979 21:32:53 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dif_verify 00:10:32.979 21:32:53 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:10:32.979 21:32:53 -- accel/accel.sh@12 -- # build_accel_config 00:10:32.979 21:32:53 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:32.979 21:32:53 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:32.979 21:32:53 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:32.979 21:32:53 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:32.979 21:32:53 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:32.979 21:32:53 -- accel/accel.sh@41 -- # local IFS=, 00:10:32.979 21:32:53 -- accel/accel.sh@42 -- # jq -r . 00:10:32.979 [2024-12-06 21:32:53.273934] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:10:32.979 [2024-12-06 21:32:53.274094] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64301 ] 00:10:32.979 [2024-12-06 21:32:53.445310] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:33.237 [2024-12-06 21:32:53.610277] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:35.131 21:32:55 -- accel/accel.sh@18 -- # out=' 00:10:35.131 SPDK Configuration: 00:10:35.131 Core mask: 0x1 00:10:35.131 00:10:35.131 Accel Perf Configuration: 00:10:35.131 Workload Type: dif_verify 00:10:35.131 Vector size: 4096 bytes 00:10:35.131 Transfer size: 4096 bytes 00:10:35.131 Block size: 512 bytes 00:10:35.131 Metadata size: 8 bytes 00:10:35.131 Vector count 1 00:10:35.131 Module: software 00:10:35.131 Queue depth: 32 00:10:35.131 Allocate depth: 32 00:10:35.131 # threads/core: 1 00:10:35.131 Run time: 1 seconds 00:10:35.131 Verify: No 00:10:35.131 00:10:35.131 Running for 1 seconds... 00:10:35.131 00:10:35.131 Core,Thread Transfers Bandwidth Failed Miscompares 00:10:35.131 ------------------------------------------------------------------------------------ 00:10:35.131 0,0 102080/s 404 MiB/s 0 0 00:10:35.131 ==================================================================================== 00:10:35.131 Total 102080/s 398 MiB/s 0 0' 00:10:35.131 21:32:55 -- accel/accel.sh@20 -- # IFS=: 00:10:35.131 21:32:55 -- accel/accel.sh@20 -- # read -r var val 00:10:35.131 21:32:55 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify 00:10:35.131 21:32:55 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:10:35.131 21:32:55 -- accel/accel.sh@12 -- # build_accel_config 00:10:35.131 21:32:55 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:35.131 21:32:55 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:35.131 21:32:55 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:35.131 21:32:55 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:35.131 21:32:55 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:35.131 21:32:55 -- accel/accel.sh@41 -- # local IFS=, 00:10:35.131 21:32:55 -- accel/accel.sh@42 -- # jq -r . 00:10:35.131 [2024-12-06 21:32:55.578316] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:10:35.131 [2024-12-06 21:32:55.578495] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64327 ] 00:10:35.388 [2024-12-06 21:32:55.746155] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:35.644 [2024-12-06 21:32:55.925948] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:35.644 21:32:56 -- accel/accel.sh@21 -- # val= 00:10:35.644 21:32:56 -- accel/accel.sh@22 -- # case "$var" in 00:10:35.644 21:32:56 -- accel/accel.sh@20 -- # IFS=: 00:10:35.644 21:32:56 -- accel/accel.sh@20 -- # read -r var val 00:10:35.644 21:32:56 -- accel/accel.sh@21 -- # val= 00:10:35.644 21:32:56 -- accel/accel.sh@22 -- # case "$var" in 00:10:35.645 21:32:56 -- accel/accel.sh@20 -- # IFS=: 00:10:35.645 21:32:56 -- accel/accel.sh@20 -- # read -r var val 00:10:35.645 21:32:56 -- accel/accel.sh@21 -- # val=0x1 00:10:35.645 21:32:56 -- accel/accel.sh@22 -- # case "$var" in 00:10:35.645 21:32:56 -- accel/accel.sh@20 -- # IFS=: 00:10:35.645 21:32:56 -- accel/accel.sh@20 -- # read -r var val 00:10:35.645 21:32:56 -- accel/accel.sh@21 -- # val= 00:10:35.645 21:32:56 -- accel/accel.sh@22 -- # case "$var" in 00:10:35.645 21:32:56 -- accel/accel.sh@20 -- # IFS=: 00:10:35.645 21:32:56 -- accel/accel.sh@20 -- # read -r var val 00:10:35.645 21:32:56 -- accel/accel.sh@21 -- # val= 00:10:35.645 21:32:56 -- accel/accel.sh@22 -- # case "$var" in 00:10:35.645 21:32:56 -- accel/accel.sh@20 -- # IFS=: 00:10:35.645 21:32:56 -- accel/accel.sh@20 -- # read -r var val 00:10:35.645 21:32:56 -- accel/accel.sh@21 -- # val=dif_verify 00:10:35.645 21:32:56 -- accel/accel.sh@22 -- # case "$var" in 00:10:35.645 21:32:56 -- accel/accel.sh@24 -- # accel_opc=dif_verify 00:10:35.645 21:32:56 -- accel/accel.sh@20 -- # IFS=: 00:10:35.645 21:32:56 -- accel/accel.sh@20 -- # read -r var val 00:10:35.645 21:32:56 -- accel/accel.sh@21 -- # val='4096 bytes' 00:10:35.645 21:32:56 -- accel/accel.sh@22 -- # case "$var" in 00:10:35.645 21:32:56 -- accel/accel.sh@20 -- # IFS=: 00:10:35.645 21:32:56 -- accel/accel.sh@20 -- # read -r var val 00:10:35.645 21:32:56 -- accel/accel.sh@21 -- # val='4096 bytes' 00:10:35.645 21:32:56 -- accel/accel.sh@22 -- # case "$var" in 00:10:35.645 21:32:56 -- accel/accel.sh@20 -- # IFS=: 00:10:35.645 21:32:56 -- accel/accel.sh@20 -- # read -r var val 00:10:35.645 21:32:56 -- accel/accel.sh@21 -- # val='512 bytes' 00:10:35.645 21:32:56 -- accel/accel.sh@22 -- # case "$var" in 00:10:35.645 21:32:56 -- accel/accel.sh@20 -- # IFS=: 00:10:35.645 21:32:56 -- accel/accel.sh@20 -- # read -r var val 00:10:35.645 21:32:56 -- accel/accel.sh@21 -- # val='8 bytes' 00:10:35.645 21:32:56 -- accel/accel.sh@22 -- # case "$var" in 00:10:35.645 21:32:56 -- accel/accel.sh@20 -- # IFS=: 00:10:35.645 21:32:56 -- accel/accel.sh@20 -- # read -r var val 00:10:35.645 21:32:56 -- accel/accel.sh@21 -- # val= 00:10:35.645 21:32:56 -- accel/accel.sh@22 -- # case "$var" in 00:10:35.645 21:32:56 -- accel/accel.sh@20 -- # IFS=: 00:10:35.645 21:32:56 -- accel/accel.sh@20 -- # read -r var val 00:10:35.645 21:32:56 -- accel/accel.sh@21 -- # val=software 00:10:35.645 21:32:56 -- accel/accel.sh@22 -- # case "$var" in 00:10:35.645 21:32:56 -- accel/accel.sh@23 -- # accel_module=software 00:10:35.645 21:32:56 -- accel/accel.sh@20 -- # IFS=: 00:10:35.645 21:32:56 -- accel/accel.sh@20 -- # read -r var val 00:10:35.645 21:32:56 -- accel/accel.sh@21 
-- # val=32 00:10:35.645 21:32:56 -- accel/accel.sh@22 -- # case "$var" in 00:10:35.645 21:32:56 -- accel/accel.sh@20 -- # IFS=: 00:10:35.645 21:32:56 -- accel/accel.sh@20 -- # read -r var val 00:10:35.645 21:32:56 -- accel/accel.sh@21 -- # val=32 00:10:35.645 21:32:56 -- accel/accel.sh@22 -- # case "$var" in 00:10:35.645 21:32:56 -- accel/accel.sh@20 -- # IFS=: 00:10:35.645 21:32:56 -- accel/accel.sh@20 -- # read -r var val 00:10:35.645 21:32:56 -- accel/accel.sh@21 -- # val=1 00:10:35.645 21:32:56 -- accel/accel.sh@22 -- # case "$var" in 00:10:35.645 21:32:56 -- accel/accel.sh@20 -- # IFS=: 00:10:35.645 21:32:56 -- accel/accel.sh@20 -- # read -r var val 00:10:35.645 21:32:56 -- accel/accel.sh@21 -- # val='1 seconds' 00:10:35.645 21:32:56 -- accel/accel.sh@22 -- # case "$var" in 00:10:35.645 21:32:56 -- accel/accel.sh@20 -- # IFS=: 00:10:35.645 21:32:56 -- accel/accel.sh@20 -- # read -r var val 00:10:35.645 21:32:56 -- accel/accel.sh@21 -- # val=No 00:10:35.645 21:32:56 -- accel/accel.sh@22 -- # case "$var" in 00:10:35.645 21:32:56 -- accel/accel.sh@20 -- # IFS=: 00:10:35.645 21:32:56 -- accel/accel.sh@20 -- # read -r var val 00:10:35.645 21:32:56 -- accel/accel.sh@21 -- # val= 00:10:35.645 21:32:56 -- accel/accel.sh@22 -- # case "$var" in 00:10:35.645 21:32:56 -- accel/accel.sh@20 -- # IFS=: 00:10:35.645 21:32:56 -- accel/accel.sh@20 -- # read -r var val 00:10:35.645 21:32:56 -- accel/accel.sh@21 -- # val= 00:10:35.645 21:32:56 -- accel/accel.sh@22 -- # case "$var" in 00:10:35.645 21:32:56 -- accel/accel.sh@20 -- # IFS=: 00:10:35.645 21:32:56 -- accel/accel.sh@20 -- # read -r var val 00:10:37.540 21:32:57 -- accel/accel.sh@21 -- # val= 00:10:37.540 21:32:57 -- accel/accel.sh@22 -- # case "$var" in 00:10:37.540 21:32:57 -- accel/accel.sh@20 -- # IFS=: 00:10:37.540 21:32:57 -- accel/accel.sh@20 -- # read -r var val 00:10:37.540 21:32:57 -- accel/accel.sh@21 -- # val= 00:10:37.540 21:32:57 -- accel/accel.sh@22 -- # case "$var" in 00:10:37.540 21:32:57 -- accel/accel.sh@20 -- # IFS=: 00:10:37.540 21:32:57 -- accel/accel.sh@20 -- # read -r var val 00:10:37.540 21:32:57 -- accel/accel.sh@21 -- # val= 00:10:37.540 21:32:57 -- accel/accel.sh@22 -- # case "$var" in 00:10:37.540 21:32:57 -- accel/accel.sh@20 -- # IFS=: 00:10:37.540 21:32:57 -- accel/accel.sh@20 -- # read -r var val 00:10:37.540 21:32:57 -- accel/accel.sh@21 -- # val= 00:10:37.540 21:32:57 -- accel/accel.sh@22 -- # case "$var" in 00:10:37.540 21:32:57 -- accel/accel.sh@20 -- # IFS=: 00:10:37.540 21:32:57 -- accel/accel.sh@20 -- # read -r var val 00:10:37.540 21:32:57 -- accel/accel.sh@21 -- # val= 00:10:37.540 21:32:57 -- accel/accel.sh@22 -- # case "$var" in 00:10:37.540 21:32:57 -- accel/accel.sh@20 -- # IFS=: 00:10:37.540 21:32:57 -- accel/accel.sh@20 -- # read -r var val 00:10:37.540 21:32:57 -- accel/accel.sh@21 -- # val= 00:10:37.540 21:32:57 -- accel/accel.sh@22 -- # case "$var" in 00:10:37.540 21:32:57 -- accel/accel.sh@20 -- # IFS=: 00:10:37.540 21:32:57 -- accel/accel.sh@20 -- # read -r var val 00:10:37.540 21:32:57 -- accel/accel.sh@28 -- # [[ -n software ]] 00:10:37.541 21:32:57 -- accel/accel.sh@28 -- # [[ -n dif_verify ]] 00:10:37.541 21:32:57 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:10:37.541 00:10:37.541 real 0m4.619s 00:10:37.541 user 0m4.093s 00:10:37.541 sys 0m0.342s 00:10:37.541 21:32:57 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:10:37.541 ************************************ 00:10:37.541 END TEST accel_dif_verify 00:10:37.541 ************************************ 00:10:37.541 
21:32:57 -- common/autotest_common.sh@10 -- # set +x 00:10:37.541 21:32:57 -- accel/accel.sh@104 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate 00:10:37.541 21:32:57 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:10:37.541 21:32:57 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:37.541 21:32:57 -- common/autotest_common.sh@10 -- # set +x 00:10:37.541 ************************************ 00:10:37.541 START TEST accel_dif_generate 00:10:37.541 ************************************ 00:10:37.541 21:32:57 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w dif_generate 00:10:37.541 21:32:57 -- accel/accel.sh@16 -- # local accel_opc 00:10:37.541 21:32:57 -- accel/accel.sh@17 -- # local accel_module 00:10:37.541 21:32:57 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dif_generate 00:10:37.541 21:32:57 -- accel/accel.sh@12 -- # build_accel_config 00:10:37.541 21:32:57 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:10:37.541 21:32:57 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:37.541 21:32:57 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:37.541 21:32:57 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:37.541 21:32:57 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:37.541 21:32:57 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:37.541 21:32:57 -- accel/accel.sh@41 -- # local IFS=, 00:10:37.541 21:32:57 -- accel/accel.sh@42 -- # jq -r . 00:10:37.541 [2024-12-06 21:32:57.941999] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:10:37.541 [2024-12-06 21:32:57.942164] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64369 ] 00:10:37.799 [2024-12-06 21:32:58.114637] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:37.799 [2024-12-06 21:32:58.279103] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:40.333 21:33:00 -- accel/accel.sh@18 -- # out=' 00:10:40.333 SPDK Configuration: 00:10:40.333 Core mask: 0x1 00:10:40.333 00:10:40.333 Accel Perf Configuration: 00:10:40.333 Workload Type: dif_generate 00:10:40.333 Vector size: 4096 bytes 00:10:40.333 Transfer size: 4096 bytes 00:10:40.333 Block size: 512 bytes 00:10:40.333 Metadata size: 8 bytes 00:10:40.333 Vector count 1 00:10:40.333 Module: software 00:10:40.333 Queue depth: 32 00:10:40.333 Allocate depth: 32 00:10:40.333 # threads/core: 1 00:10:40.333 Run time: 1 seconds 00:10:40.333 Verify: No 00:10:40.333 00:10:40.333 Running for 1 seconds... 
00:10:40.333 00:10:40.333 Core,Thread Transfers Bandwidth Failed Miscompares 00:10:40.333 ------------------------------------------------------------------------------------ 00:10:40.333 0,0 123680/s 490 MiB/s 0 0 00:10:40.333 ==================================================================================== 00:10:40.333 Total 123680/s 483 MiB/s 0 0' 00:10:40.333 21:33:00 -- accel/accel.sh@20 -- # IFS=: 00:10:40.333 21:33:00 -- accel/accel.sh@20 -- # read -r var val 00:10:40.333 21:33:00 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate 00:10:40.333 21:33:00 -- accel/accel.sh@12 -- # build_accel_config 00:10:40.333 21:33:00 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:10:40.333 21:33:00 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:40.333 21:33:00 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:40.333 21:33:00 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:40.333 21:33:00 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:40.333 21:33:00 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:40.333 21:33:00 -- accel/accel.sh@41 -- # local IFS=, 00:10:40.333 21:33:00 -- accel/accel.sh@42 -- # jq -r . 00:10:40.333 [2024-12-06 21:33:00.263413] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:10:40.333 [2024-12-06 21:33:00.263614] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64405 ] 00:10:40.333 [2024-12-06 21:33:00.438147] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:40.333 [2024-12-06 21:33:00.650742] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:40.333 21:33:00 -- accel/accel.sh@21 -- # val= 00:10:40.333 21:33:00 -- accel/accel.sh@22 -- # case "$var" in 00:10:40.333 21:33:00 -- accel/accel.sh@20 -- # IFS=: 00:10:40.333 21:33:00 -- accel/accel.sh@20 -- # read -r var val 00:10:40.333 21:33:00 -- accel/accel.sh@21 -- # val= 00:10:40.333 21:33:00 -- accel/accel.sh@22 -- # case "$var" in 00:10:40.333 21:33:00 -- accel/accel.sh@20 -- # IFS=: 00:10:40.333 21:33:00 -- accel/accel.sh@20 -- # read -r var val 00:10:40.333 21:33:00 -- accel/accel.sh@21 -- # val=0x1 00:10:40.333 21:33:00 -- accel/accel.sh@22 -- # case "$var" in 00:10:40.333 21:33:00 -- accel/accel.sh@20 -- # IFS=: 00:10:40.333 21:33:00 -- accel/accel.sh@20 -- # read -r var val 00:10:40.333 21:33:00 -- accel/accel.sh@21 -- # val= 00:10:40.333 21:33:00 -- accel/accel.sh@22 -- # case "$var" in 00:10:40.333 21:33:00 -- accel/accel.sh@20 -- # IFS=: 00:10:40.333 21:33:00 -- accel/accel.sh@20 -- # read -r var val 00:10:40.333 21:33:00 -- accel/accel.sh@21 -- # val= 00:10:40.333 21:33:00 -- accel/accel.sh@22 -- # case "$var" in 00:10:40.333 21:33:00 -- accel/accel.sh@20 -- # IFS=: 00:10:40.333 21:33:00 -- accel/accel.sh@20 -- # read -r var val 00:10:40.333 21:33:00 -- accel/accel.sh@21 -- # val=dif_generate 00:10:40.333 21:33:00 -- accel/accel.sh@22 -- # case "$var" in 00:10:40.333 21:33:00 -- accel/accel.sh@24 -- # accel_opc=dif_generate 00:10:40.333 21:33:00 -- accel/accel.sh@20 -- # IFS=: 00:10:40.333 21:33:00 -- accel/accel.sh@20 -- # read -r var val 00:10:40.333 21:33:00 -- accel/accel.sh@21 -- # val='4096 bytes' 00:10:40.333 21:33:00 -- accel/accel.sh@22 -- # case "$var" in 00:10:40.333 21:33:00 -- accel/accel.sh@20 -- # IFS=: 00:10:40.333 21:33:00 -- accel/accel.sh@20 -- # read -r var val 
00:10:40.333 21:33:00 -- accel/accel.sh@21 -- # val='4096 bytes' 00:10:40.333 21:33:00 -- accel/accel.sh@22 -- # case "$var" in 00:10:40.333 21:33:00 -- accel/accel.sh@20 -- # IFS=: 00:10:40.333 21:33:00 -- accel/accel.sh@20 -- # read -r var val 00:10:40.333 21:33:00 -- accel/accel.sh@21 -- # val='512 bytes' 00:10:40.333 21:33:00 -- accel/accel.sh@22 -- # case "$var" in 00:10:40.333 21:33:00 -- accel/accel.sh@20 -- # IFS=: 00:10:40.334 21:33:00 -- accel/accel.sh@20 -- # read -r var val 00:10:40.334 21:33:00 -- accel/accel.sh@21 -- # val='8 bytes' 00:10:40.334 21:33:00 -- accel/accel.sh@22 -- # case "$var" in 00:10:40.334 21:33:00 -- accel/accel.sh@20 -- # IFS=: 00:10:40.334 21:33:00 -- accel/accel.sh@20 -- # read -r var val 00:10:40.334 21:33:00 -- accel/accel.sh@21 -- # val= 00:10:40.334 21:33:00 -- accel/accel.sh@22 -- # case "$var" in 00:10:40.334 21:33:00 -- accel/accel.sh@20 -- # IFS=: 00:10:40.334 21:33:00 -- accel/accel.sh@20 -- # read -r var val 00:10:40.334 21:33:00 -- accel/accel.sh@21 -- # val=software 00:10:40.334 21:33:00 -- accel/accel.sh@22 -- # case "$var" in 00:10:40.334 21:33:00 -- accel/accel.sh@23 -- # accel_module=software 00:10:40.334 21:33:00 -- accel/accel.sh@20 -- # IFS=: 00:10:40.334 21:33:00 -- accel/accel.sh@20 -- # read -r var val 00:10:40.334 21:33:00 -- accel/accel.sh@21 -- # val=32 00:10:40.334 21:33:00 -- accel/accel.sh@22 -- # case "$var" in 00:10:40.334 21:33:00 -- accel/accel.sh@20 -- # IFS=: 00:10:40.334 21:33:00 -- accel/accel.sh@20 -- # read -r var val 00:10:40.334 21:33:00 -- accel/accel.sh@21 -- # val=32 00:10:40.334 21:33:00 -- accel/accel.sh@22 -- # case "$var" in 00:10:40.334 21:33:00 -- accel/accel.sh@20 -- # IFS=: 00:10:40.334 21:33:00 -- accel/accel.sh@20 -- # read -r var val 00:10:40.334 21:33:00 -- accel/accel.sh@21 -- # val=1 00:10:40.334 21:33:00 -- accel/accel.sh@22 -- # case "$var" in 00:10:40.334 21:33:00 -- accel/accel.sh@20 -- # IFS=: 00:10:40.334 21:33:00 -- accel/accel.sh@20 -- # read -r var val 00:10:40.334 21:33:00 -- accel/accel.sh@21 -- # val='1 seconds' 00:10:40.334 21:33:00 -- accel/accel.sh@22 -- # case "$var" in 00:10:40.334 21:33:00 -- accel/accel.sh@20 -- # IFS=: 00:10:40.334 21:33:00 -- accel/accel.sh@20 -- # read -r var val 00:10:40.334 21:33:00 -- accel/accel.sh@21 -- # val=No 00:10:40.334 21:33:00 -- accel/accel.sh@22 -- # case "$var" in 00:10:40.334 21:33:00 -- accel/accel.sh@20 -- # IFS=: 00:10:40.334 21:33:00 -- accel/accel.sh@20 -- # read -r var val 00:10:40.334 21:33:00 -- accel/accel.sh@21 -- # val= 00:10:40.334 21:33:00 -- accel/accel.sh@22 -- # case "$var" in 00:10:40.334 21:33:00 -- accel/accel.sh@20 -- # IFS=: 00:10:40.334 21:33:00 -- accel/accel.sh@20 -- # read -r var val 00:10:40.334 21:33:00 -- accel/accel.sh@21 -- # val= 00:10:40.334 21:33:00 -- accel/accel.sh@22 -- # case "$var" in 00:10:40.334 21:33:00 -- accel/accel.sh@20 -- # IFS=: 00:10:40.334 21:33:00 -- accel/accel.sh@20 -- # read -r var val 00:10:42.278 21:33:02 -- accel/accel.sh@21 -- # val= 00:10:42.278 21:33:02 -- accel/accel.sh@22 -- # case "$var" in 00:10:42.278 21:33:02 -- accel/accel.sh@20 -- # IFS=: 00:10:42.278 21:33:02 -- accel/accel.sh@20 -- # read -r var val 00:10:42.278 21:33:02 -- accel/accel.sh@21 -- # val= 00:10:42.278 21:33:02 -- accel/accel.sh@22 -- # case "$var" in 00:10:42.278 21:33:02 -- accel/accel.sh@20 -- # IFS=: 00:10:42.278 21:33:02 -- accel/accel.sh@20 -- # read -r var val 00:10:42.278 21:33:02 -- accel/accel.sh@21 -- # val= 00:10:42.278 21:33:02 -- accel/accel.sh@22 -- # case "$var" in 00:10:42.278 21:33:02 -- 
accel/accel.sh@20 -- # IFS=: 00:10:42.278 21:33:02 -- accel/accel.sh@20 -- # read -r var val 00:10:42.278 21:33:02 -- accel/accel.sh@21 -- # val= 00:10:42.278 21:33:02 -- accel/accel.sh@22 -- # case "$var" in 00:10:42.278 21:33:02 -- accel/accel.sh@20 -- # IFS=: 00:10:42.278 21:33:02 -- accel/accel.sh@20 -- # read -r var val 00:10:42.278 21:33:02 -- accel/accel.sh@21 -- # val= 00:10:42.278 21:33:02 -- accel/accel.sh@22 -- # case "$var" in 00:10:42.278 21:33:02 -- accel/accel.sh@20 -- # IFS=: 00:10:42.278 21:33:02 -- accel/accel.sh@20 -- # read -r var val 00:10:42.278 21:33:02 -- accel/accel.sh@21 -- # val= 00:10:42.278 21:33:02 -- accel/accel.sh@22 -- # case "$var" in 00:10:42.278 21:33:02 -- accel/accel.sh@20 -- # IFS=: 00:10:42.278 21:33:02 -- accel/accel.sh@20 -- # read -r var val 00:10:42.278 21:33:02 -- accel/accel.sh@28 -- # [[ -n software ]] 00:10:42.278 21:33:02 -- accel/accel.sh@28 -- # [[ -n dif_generate ]] 00:10:42.278 21:33:02 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:10:42.278 00:10:42.278 real 0m4.719s 00:10:42.278 user 0m4.200s 00:10:42.278 sys 0m0.337s 00:10:42.278 21:33:02 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:10:42.278 ************************************ 00:10:42.278 END TEST accel_dif_generate 00:10:42.278 ************************************ 00:10:42.278 21:33:02 -- common/autotest_common.sh@10 -- # set +x 00:10:42.278 21:33:02 -- accel/accel.sh@105 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:10:42.278 21:33:02 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:10:42.278 21:33:02 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:42.278 21:33:02 -- common/autotest_common.sh@10 -- # set +x 00:10:42.278 ************************************ 00:10:42.278 START TEST accel_dif_generate_copy 00:10:42.278 ************************************ 00:10:42.278 21:33:02 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w dif_generate_copy 00:10:42.278 21:33:02 -- accel/accel.sh@16 -- # local accel_opc 00:10:42.278 21:33:02 -- accel/accel.sh@17 -- # local accel_module 00:10:42.278 21:33:02 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dif_generate_copy 00:10:42.278 21:33:02 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:10:42.278 21:33:02 -- accel/accel.sh@12 -- # build_accel_config 00:10:42.278 21:33:02 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:42.278 21:33:02 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:42.278 21:33:02 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:42.278 21:33:02 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:42.278 21:33:02 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:42.278 21:33:02 -- accel/accel.sh@41 -- # local IFS=, 00:10:42.278 21:33:02 -- accel/accel.sh@42 -- # jq -r . 00:10:42.278 [2024-12-06 21:33:02.709352] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:10:42.278 [2024-12-06 21:33:02.709731] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64449 ] 00:10:42.537 [2024-12-06 21:33:02.879844] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:42.795 [2024-12-06 21:33:03.113527] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:44.692 21:33:05 -- accel/accel.sh@18 -- # out=' 00:10:44.692 SPDK Configuration: 00:10:44.692 Core mask: 0x1 00:10:44.692 00:10:44.692 Accel Perf Configuration: 00:10:44.692 Workload Type: dif_generate_copy 00:10:44.692 Vector size: 4096 bytes 00:10:44.692 Transfer size: 4096 bytes 00:10:44.692 Vector count 1 00:10:44.692 Module: software 00:10:44.692 Queue depth: 32 00:10:44.692 Allocate depth: 32 00:10:44.692 # threads/core: 1 00:10:44.692 Run time: 1 seconds 00:10:44.692 Verify: No 00:10:44.692 00:10:44.692 Running for 1 seconds... 00:10:44.692 00:10:44.692 Core,Thread Transfers Bandwidth Failed Miscompares 00:10:44.692 ------------------------------------------------------------------------------------ 00:10:44.692 0,0 89312/s 354 MiB/s 0 0 00:10:44.692 ==================================================================================== 00:10:44.692 Total 89312/s 348 MiB/s 0 0' 00:10:44.692 21:33:05 -- accel/accel.sh@20 -- # IFS=: 00:10:44.692 21:33:05 -- accel/accel.sh@20 -- # read -r var val 00:10:44.692 21:33:05 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy 00:10:44.692 21:33:05 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:10:44.692 21:33:05 -- accel/accel.sh@12 -- # build_accel_config 00:10:44.692 21:33:05 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:44.692 21:33:05 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:44.692 21:33:05 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:44.692 21:33:05 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:44.692 21:33:05 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:44.692 21:33:05 -- accel/accel.sh@41 -- # local IFS=, 00:10:44.692 21:33:05 -- accel/accel.sh@42 -- # jq -r . 00:10:44.692 [2024-12-06 21:33:05.089831] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:10:44.692 [2024-12-06 21:33:05.090008] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64475 ] 00:10:44.950 [2024-12-06 21:33:05.257932] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:44.950 [2024-12-06 21:33:05.417376] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:45.208 21:33:05 -- accel/accel.sh@21 -- # val= 00:10:45.208 21:33:05 -- accel/accel.sh@22 -- # case "$var" in 00:10:45.208 21:33:05 -- accel/accel.sh@20 -- # IFS=: 00:10:45.208 21:33:05 -- accel/accel.sh@20 -- # read -r var val 00:10:45.208 21:33:05 -- accel/accel.sh@21 -- # val= 00:10:45.208 21:33:05 -- accel/accel.sh@22 -- # case "$var" in 00:10:45.208 21:33:05 -- accel/accel.sh@20 -- # IFS=: 00:10:45.208 21:33:05 -- accel/accel.sh@20 -- # read -r var val 00:10:45.208 21:33:05 -- accel/accel.sh@21 -- # val=0x1 00:10:45.208 21:33:05 -- accel/accel.sh@22 -- # case "$var" in 00:10:45.208 21:33:05 -- accel/accel.sh@20 -- # IFS=: 00:10:45.208 21:33:05 -- accel/accel.sh@20 -- # read -r var val 00:10:45.208 21:33:05 -- accel/accel.sh@21 -- # val= 00:10:45.208 21:33:05 -- accel/accel.sh@22 -- # case "$var" in 00:10:45.208 21:33:05 -- accel/accel.sh@20 -- # IFS=: 00:10:45.208 21:33:05 -- accel/accel.sh@20 -- # read -r var val 00:10:45.208 21:33:05 -- accel/accel.sh@21 -- # val= 00:10:45.208 21:33:05 -- accel/accel.sh@22 -- # case "$var" in 00:10:45.208 21:33:05 -- accel/accel.sh@20 -- # IFS=: 00:10:45.208 21:33:05 -- accel/accel.sh@20 -- # read -r var val 00:10:45.208 21:33:05 -- accel/accel.sh@21 -- # val=dif_generate_copy 00:10:45.208 21:33:05 -- accel/accel.sh@22 -- # case "$var" in 00:10:45.208 21:33:05 -- accel/accel.sh@24 -- # accel_opc=dif_generate_copy 00:10:45.208 21:33:05 -- accel/accel.sh@20 -- # IFS=: 00:10:45.208 21:33:05 -- accel/accel.sh@20 -- # read -r var val 00:10:45.208 21:33:05 -- accel/accel.sh@21 -- # val='4096 bytes' 00:10:45.208 21:33:05 -- accel/accel.sh@22 -- # case "$var" in 00:10:45.208 21:33:05 -- accel/accel.sh@20 -- # IFS=: 00:10:45.208 21:33:05 -- accel/accel.sh@20 -- # read -r var val 00:10:45.208 21:33:05 -- accel/accel.sh@21 -- # val='4096 bytes' 00:10:45.208 21:33:05 -- accel/accel.sh@22 -- # case "$var" in 00:10:45.208 21:33:05 -- accel/accel.sh@20 -- # IFS=: 00:10:45.208 21:33:05 -- accel/accel.sh@20 -- # read -r var val 00:10:45.208 21:33:05 -- accel/accel.sh@21 -- # val= 00:10:45.208 21:33:05 -- accel/accel.sh@22 -- # case "$var" in 00:10:45.208 21:33:05 -- accel/accel.sh@20 -- # IFS=: 00:10:45.208 21:33:05 -- accel/accel.sh@20 -- # read -r var val 00:10:45.208 21:33:05 -- accel/accel.sh@21 -- # val=software 00:10:45.208 21:33:05 -- accel/accel.sh@22 -- # case "$var" in 00:10:45.208 21:33:05 -- accel/accel.sh@23 -- # accel_module=software 00:10:45.208 21:33:05 -- accel/accel.sh@20 -- # IFS=: 00:10:45.208 21:33:05 -- accel/accel.sh@20 -- # read -r var val 00:10:45.208 21:33:05 -- accel/accel.sh@21 -- # val=32 00:10:45.208 21:33:05 -- accel/accel.sh@22 -- # case "$var" in 00:10:45.208 21:33:05 -- accel/accel.sh@20 -- # IFS=: 00:10:45.208 21:33:05 -- accel/accel.sh@20 -- # read -r var val 00:10:45.208 21:33:05 -- accel/accel.sh@21 -- # val=32 00:10:45.208 21:33:05 -- accel/accel.sh@22 -- # case "$var" in 00:10:45.208 21:33:05 -- accel/accel.sh@20 -- # IFS=: 00:10:45.208 21:33:05 -- accel/accel.sh@20 -- # read -r var val 00:10:45.209 21:33:05 -- accel/accel.sh@21 
-- # val=1 00:10:45.209 21:33:05 -- accel/accel.sh@22 -- # case "$var" in 00:10:45.209 21:33:05 -- accel/accel.sh@20 -- # IFS=: 00:10:45.209 21:33:05 -- accel/accel.sh@20 -- # read -r var val 00:10:45.209 21:33:05 -- accel/accel.sh@21 -- # val='1 seconds' 00:10:45.209 21:33:05 -- accel/accel.sh@22 -- # case "$var" in 00:10:45.209 21:33:05 -- accel/accel.sh@20 -- # IFS=: 00:10:45.209 21:33:05 -- accel/accel.sh@20 -- # read -r var val 00:10:45.209 21:33:05 -- accel/accel.sh@21 -- # val=No 00:10:45.209 21:33:05 -- accel/accel.sh@22 -- # case "$var" in 00:10:45.209 21:33:05 -- accel/accel.sh@20 -- # IFS=: 00:10:45.209 21:33:05 -- accel/accel.sh@20 -- # read -r var val 00:10:45.209 21:33:05 -- accel/accel.sh@21 -- # val= 00:10:45.209 21:33:05 -- accel/accel.sh@22 -- # case "$var" in 00:10:45.209 21:33:05 -- accel/accel.sh@20 -- # IFS=: 00:10:45.209 21:33:05 -- accel/accel.sh@20 -- # read -r var val 00:10:45.209 21:33:05 -- accel/accel.sh@21 -- # val= 00:10:45.209 21:33:05 -- accel/accel.sh@22 -- # case "$var" in 00:10:45.209 21:33:05 -- accel/accel.sh@20 -- # IFS=: 00:10:45.209 21:33:05 -- accel/accel.sh@20 -- # read -r var val 00:10:47.108 21:33:07 -- accel/accel.sh@21 -- # val= 00:10:47.108 21:33:07 -- accel/accel.sh@22 -- # case "$var" in 00:10:47.108 21:33:07 -- accel/accel.sh@20 -- # IFS=: 00:10:47.108 21:33:07 -- accel/accel.sh@20 -- # read -r var val 00:10:47.108 21:33:07 -- accel/accel.sh@21 -- # val= 00:10:47.108 21:33:07 -- accel/accel.sh@22 -- # case "$var" in 00:10:47.108 21:33:07 -- accel/accel.sh@20 -- # IFS=: 00:10:47.108 21:33:07 -- accel/accel.sh@20 -- # read -r var val 00:10:47.108 21:33:07 -- accel/accel.sh@21 -- # val= 00:10:47.108 21:33:07 -- accel/accel.sh@22 -- # case "$var" in 00:10:47.108 21:33:07 -- accel/accel.sh@20 -- # IFS=: 00:10:47.108 21:33:07 -- accel/accel.sh@20 -- # read -r var val 00:10:47.108 21:33:07 -- accel/accel.sh@21 -- # val= 00:10:47.108 21:33:07 -- accel/accel.sh@22 -- # case "$var" in 00:10:47.108 21:33:07 -- accel/accel.sh@20 -- # IFS=: 00:10:47.108 21:33:07 -- accel/accel.sh@20 -- # read -r var val 00:10:47.108 21:33:07 -- accel/accel.sh@21 -- # val= 00:10:47.108 21:33:07 -- accel/accel.sh@22 -- # case "$var" in 00:10:47.108 21:33:07 -- accel/accel.sh@20 -- # IFS=: 00:10:47.108 21:33:07 -- accel/accel.sh@20 -- # read -r var val 00:10:47.108 21:33:07 -- accel/accel.sh@21 -- # val= 00:10:47.108 21:33:07 -- accel/accel.sh@22 -- # case "$var" in 00:10:47.108 21:33:07 -- accel/accel.sh@20 -- # IFS=: 00:10:47.108 21:33:07 -- accel/accel.sh@20 -- # read -r var val 00:10:47.108 21:33:07 -- accel/accel.sh@28 -- # [[ -n software ]] 00:10:47.108 21:33:07 -- accel/accel.sh@28 -- # [[ -n dif_generate_copy ]] 00:10:47.108 21:33:07 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:10:47.108 00:10:47.108 real 0m4.688s 00:10:47.108 user 0m4.189s 00:10:47.108 sys 0m0.314s 00:10:47.108 21:33:07 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:10:47.108 ************************************ 00:10:47.108 END TEST accel_dif_generate_copy 00:10:47.108 21:33:07 -- common/autotest_common.sh@10 -- # set +x 00:10:47.108 ************************************ 00:10:47.108 21:33:07 -- accel/accel.sh@107 -- # [[ y == y ]] 00:10:47.108 21:33:07 -- accel/accel.sh@108 -- # run_test accel_comp accel_test -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:10:47.108 21:33:07 -- common/autotest_common.sh@1087 -- # '[' 8 -le 1 ']' 00:10:47.108 21:33:07 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:47.108 21:33:07 -- 
common/autotest_common.sh@10 -- # set +x 00:10:47.108 ************************************ 00:10:47.108 START TEST accel_comp 00:10:47.108 ************************************ 00:10:47.108 21:33:07 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:10:47.108 21:33:07 -- accel/accel.sh@16 -- # local accel_opc 00:10:47.108 21:33:07 -- accel/accel.sh@17 -- # local accel_module 00:10:47.108 21:33:07 -- accel/accel.sh@18 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:10:47.108 21:33:07 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:10:47.108 21:33:07 -- accel/accel.sh@12 -- # build_accel_config 00:10:47.108 21:33:07 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:47.108 21:33:07 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:47.108 21:33:07 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:47.108 21:33:07 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:47.108 21:33:07 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:47.108 21:33:07 -- accel/accel.sh@41 -- # local IFS=, 00:10:47.108 21:33:07 -- accel/accel.sh@42 -- # jq -r . 00:10:47.108 [2024-12-06 21:33:07.450435] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:10:47.108 [2024-12-06 21:33:07.450641] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64522 ] 00:10:47.367 [2024-12-06 21:33:07.623517] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:47.367 [2024-12-06 21:33:07.784489] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:49.269 21:33:09 -- accel/accel.sh@18 -- # out='Preparing input file... 00:10:49.269 00:10:49.269 SPDK Configuration: 00:10:49.269 Core mask: 0x1 00:10:49.269 00:10:49.269 Accel Perf Configuration: 00:10:49.269 Workload Type: compress 00:10:49.269 Transfer size: 4096 bytes 00:10:49.269 Vector count 1 00:10:49.269 Module: software 00:10:49.269 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:10:49.269 Queue depth: 32 00:10:49.269 Allocate depth: 32 00:10:49.269 # threads/core: 1 00:10:49.269 Run time: 1 seconds 00:10:49.269 Verify: No 00:10:49.269 00:10:49.269 Running for 1 seconds... 
00:10:49.269 00:10:49.269 Core,Thread Transfers Bandwidth Failed Miscompares 00:10:49.269 ------------------------------------------------------------------------------------ 00:10:49.269 0,0 51552/s 214 MiB/s 0 0 00:10:49.269 ==================================================================================== 00:10:49.269 Total 51552/s 201 MiB/s 0 0' 00:10:49.269 21:33:09 -- accel/accel.sh@20 -- # IFS=: 00:10:49.269 21:33:09 -- accel/accel.sh@20 -- # read -r var val 00:10:49.270 21:33:09 -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:10:49.270 21:33:09 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:10:49.270 21:33:09 -- accel/accel.sh@12 -- # build_accel_config 00:10:49.270 21:33:09 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:49.270 21:33:09 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:49.270 21:33:09 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:49.270 21:33:09 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:49.270 21:33:09 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:49.270 21:33:09 -- accel/accel.sh@41 -- # local IFS=, 00:10:49.270 21:33:09 -- accel/accel.sh@42 -- # jq -r . 00:10:49.528 [2024-12-06 21:33:09.791250] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:10:49.528 [2024-12-06 21:33:09.791457] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64548 ] 00:10:49.528 [2024-12-06 21:33:09.963045] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:49.786 [2024-12-06 21:33:10.139840] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:50.045 21:33:10 -- accel/accel.sh@21 -- # val= 00:10:50.045 21:33:10 -- accel/accel.sh@22 -- # case "$var" in 00:10:50.045 21:33:10 -- accel/accel.sh@20 -- # IFS=: 00:10:50.045 21:33:10 -- accel/accel.sh@20 -- # read -r var val 00:10:50.045 21:33:10 -- accel/accel.sh@21 -- # val= 00:10:50.045 21:33:10 -- accel/accel.sh@22 -- # case "$var" in 00:10:50.045 21:33:10 -- accel/accel.sh@20 -- # IFS=: 00:10:50.045 21:33:10 -- accel/accel.sh@20 -- # read -r var val 00:10:50.045 21:33:10 -- accel/accel.sh@21 -- # val= 00:10:50.045 21:33:10 -- accel/accel.sh@22 -- # case "$var" in 00:10:50.045 21:33:10 -- accel/accel.sh@20 -- # IFS=: 00:10:50.045 21:33:10 -- accel/accel.sh@20 -- # read -r var val 00:10:50.045 21:33:10 -- accel/accel.sh@21 -- # val=0x1 00:10:50.045 21:33:10 -- accel/accel.sh@22 -- # case "$var" in 00:10:50.045 21:33:10 -- accel/accel.sh@20 -- # IFS=: 00:10:50.045 21:33:10 -- accel/accel.sh@20 -- # read -r var val 00:10:50.045 21:33:10 -- accel/accel.sh@21 -- # val= 00:10:50.045 21:33:10 -- accel/accel.sh@22 -- # case "$var" in 00:10:50.045 21:33:10 -- accel/accel.sh@20 -- # IFS=: 00:10:50.045 21:33:10 -- accel/accel.sh@20 -- # read -r var val 00:10:50.045 21:33:10 -- accel/accel.sh@21 -- # val= 00:10:50.045 21:33:10 -- accel/accel.sh@22 -- # case "$var" in 00:10:50.045 21:33:10 -- accel/accel.sh@20 -- # IFS=: 00:10:50.045 21:33:10 -- accel/accel.sh@20 -- # read -r var val 00:10:50.045 21:33:10 -- accel/accel.sh@21 -- # val=compress 00:10:50.045 21:33:10 -- accel/accel.sh@22 -- # case "$var" in 00:10:50.045 21:33:10 -- accel/accel.sh@24 -- # accel_opc=compress 00:10:50.045 21:33:10 -- accel/accel.sh@20 -- # IFS=: 
00:10:50.045 21:33:10 -- accel/accel.sh@20 -- # read -r var val 00:10:50.045 21:33:10 -- accel/accel.sh@21 -- # val='4096 bytes' 00:10:50.045 21:33:10 -- accel/accel.sh@22 -- # case "$var" in 00:10:50.045 21:33:10 -- accel/accel.sh@20 -- # IFS=: 00:10:50.045 21:33:10 -- accel/accel.sh@20 -- # read -r var val 00:10:50.045 21:33:10 -- accel/accel.sh@21 -- # val= 00:10:50.045 21:33:10 -- accel/accel.sh@22 -- # case "$var" in 00:10:50.045 21:33:10 -- accel/accel.sh@20 -- # IFS=: 00:10:50.045 21:33:10 -- accel/accel.sh@20 -- # read -r var val 00:10:50.045 21:33:10 -- accel/accel.sh@21 -- # val=software 00:10:50.045 21:33:10 -- accel/accel.sh@22 -- # case "$var" in 00:10:50.045 21:33:10 -- accel/accel.sh@23 -- # accel_module=software 00:10:50.045 21:33:10 -- accel/accel.sh@20 -- # IFS=: 00:10:50.045 21:33:10 -- accel/accel.sh@20 -- # read -r var val 00:10:50.045 21:33:10 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:10:50.045 21:33:10 -- accel/accel.sh@22 -- # case "$var" in 00:10:50.045 21:33:10 -- accel/accel.sh@20 -- # IFS=: 00:10:50.045 21:33:10 -- accel/accel.sh@20 -- # read -r var val 00:10:50.045 21:33:10 -- accel/accel.sh@21 -- # val=32 00:10:50.045 21:33:10 -- accel/accel.sh@22 -- # case "$var" in 00:10:50.045 21:33:10 -- accel/accel.sh@20 -- # IFS=: 00:10:50.045 21:33:10 -- accel/accel.sh@20 -- # read -r var val 00:10:50.045 21:33:10 -- accel/accel.sh@21 -- # val=32 00:10:50.045 21:33:10 -- accel/accel.sh@22 -- # case "$var" in 00:10:50.045 21:33:10 -- accel/accel.sh@20 -- # IFS=: 00:10:50.045 21:33:10 -- accel/accel.sh@20 -- # read -r var val 00:10:50.045 21:33:10 -- accel/accel.sh@21 -- # val=1 00:10:50.045 21:33:10 -- accel/accel.sh@22 -- # case "$var" in 00:10:50.045 21:33:10 -- accel/accel.sh@20 -- # IFS=: 00:10:50.045 21:33:10 -- accel/accel.sh@20 -- # read -r var val 00:10:50.045 21:33:10 -- accel/accel.sh@21 -- # val='1 seconds' 00:10:50.045 21:33:10 -- accel/accel.sh@22 -- # case "$var" in 00:10:50.045 21:33:10 -- accel/accel.sh@20 -- # IFS=: 00:10:50.045 21:33:10 -- accel/accel.sh@20 -- # read -r var val 00:10:50.045 21:33:10 -- accel/accel.sh@21 -- # val=No 00:10:50.045 21:33:10 -- accel/accel.sh@22 -- # case "$var" in 00:10:50.045 21:33:10 -- accel/accel.sh@20 -- # IFS=: 00:10:50.045 21:33:10 -- accel/accel.sh@20 -- # read -r var val 00:10:50.045 21:33:10 -- accel/accel.sh@21 -- # val= 00:10:50.045 21:33:10 -- accel/accel.sh@22 -- # case "$var" in 00:10:50.045 21:33:10 -- accel/accel.sh@20 -- # IFS=: 00:10:50.045 21:33:10 -- accel/accel.sh@20 -- # read -r var val 00:10:50.045 21:33:10 -- accel/accel.sh@21 -- # val= 00:10:50.045 21:33:10 -- accel/accel.sh@22 -- # case "$var" in 00:10:50.045 21:33:10 -- accel/accel.sh@20 -- # IFS=: 00:10:50.045 21:33:10 -- accel/accel.sh@20 -- # read -r var val 00:10:51.966 21:33:12 -- accel/accel.sh@21 -- # val= 00:10:51.966 21:33:12 -- accel/accel.sh@22 -- # case "$var" in 00:10:51.966 21:33:12 -- accel/accel.sh@20 -- # IFS=: 00:10:51.966 21:33:12 -- accel/accel.sh@20 -- # read -r var val 00:10:51.966 21:33:12 -- accel/accel.sh@21 -- # val= 00:10:51.966 21:33:12 -- accel/accel.sh@22 -- # case "$var" in 00:10:51.966 21:33:12 -- accel/accel.sh@20 -- # IFS=: 00:10:51.966 21:33:12 -- accel/accel.sh@20 -- # read -r var val 00:10:51.966 21:33:12 -- accel/accel.sh@21 -- # val= 00:10:51.966 21:33:12 -- accel/accel.sh@22 -- # case "$var" in 00:10:51.966 21:33:12 -- accel/accel.sh@20 -- # IFS=: 00:10:51.966 21:33:12 -- accel/accel.sh@20 -- # read -r var val 00:10:51.966 21:33:12 -- accel/accel.sh@21 -- # val= 
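Note: the -c /dev/fd/62 argument seen in both accel_perf invocations is bash process substitution — build_accel_config assembles an accel JSON config in memory (apparently empty here, since none of the module-option branches fire) and hands it to accel_perf as a pseudo-file, with no temp file on disk. A sketch of the same pattern; the exact JSON build_accel_config emits is not shown in this log, so the empty subsystems list below is a stand-in:
  # feed an inline JSON config through process substitution, as accel.sh does
  ./build/examples/accel_perf -c <(echo '{"subsystems": []}') -t 1 -w compress -l test/accel/bib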
00:10:51.966 21:33:12 -- accel/accel.sh@22 -- # case "$var" in 00:10:51.966 21:33:12 -- accel/accel.sh@20 -- # IFS=: 00:10:51.966 21:33:12 -- accel/accel.sh@20 -- # read -r var val 00:10:51.966 21:33:12 -- accel/accel.sh@21 -- # val= 00:10:51.966 21:33:12 -- accel/accel.sh@22 -- # case "$var" in 00:10:51.966 21:33:12 -- accel/accel.sh@20 -- # IFS=: 00:10:51.966 21:33:12 -- accel/accel.sh@20 -- # read -r var val 00:10:51.966 21:33:12 -- accel/accel.sh@21 -- # val= 00:10:51.966 21:33:12 -- accel/accel.sh@22 -- # case "$var" in 00:10:51.966 21:33:12 -- accel/accel.sh@20 -- # IFS=: 00:10:51.966 21:33:12 -- accel/accel.sh@20 -- # read -r var val 00:10:51.966 21:33:12 -- accel/accel.sh@28 -- # [[ -n software ]] 00:10:51.966 21:33:12 -- accel/accel.sh@28 -- # [[ -n compress ]] 00:10:51.966 21:33:12 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:10:51.966 00:10:51.966 real 0m4.674s 00:10:51.966 user 0m4.155s 00:10:51.966 sys 0m0.335s 00:10:51.966 21:33:12 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:10:51.966 21:33:12 -- common/autotest_common.sh@10 -- # set +x 00:10:51.966 ************************************ 00:10:51.966 END TEST accel_comp 00:10:51.966 ************************************ 00:10:51.966 21:33:12 -- accel/accel.sh@109 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:10:51.966 21:33:12 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:10:51.966 21:33:12 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:51.966 21:33:12 -- common/autotest_common.sh@10 -- # set +x 00:10:51.966 ************************************ 00:10:51.966 START TEST accel_decomp 00:10:51.966 ************************************ 00:10:51.966 21:33:12 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:10:51.966 21:33:12 -- accel/accel.sh@16 -- # local accel_opc 00:10:51.966 21:33:12 -- accel/accel.sh@17 -- # local accel_module 00:10:51.966 21:33:12 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:10:51.966 21:33:12 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:10:51.966 21:33:12 -- accel/accel.sh@12 -- # build_accel_config 00:10:51.966 21:33:12 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:51.966 21:33:12 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:51.966 21:33:12 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:51.966 21:33:12 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:51.966 21:33:12 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:51.966 21:33:12 -- accel/accel.sh@41 -- # local IFS=, 00:10:51.966 21:33:12 -- accel/accel.sh@42 -- # jq -r . 00:10:51.966 [2024-12-06 21:33:12.165926] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:10:51.966 [2024-12-06 21:33:12.166072] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64589 ] 00:10:51.966 [2024-12-06 21:33:12.322834] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:52.225 [2024-12-06 21:33:12.500786] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:54.126 21:33:14 -- accel/accel.sh@18 -- # out='Preparing input file... 
00:10:54.126 00:10:54.126 SPDK Configuration: 00:10:54.126 Core mask: 0x1 00:10:54.126 00:10:54.126 Accel Perf Configuration: 00:10:54.126 Workload Type: decompress 00:10:54.126 Transfer size: 4096 bytes 00:10:54.126 Vector count 1 00:10:54.126 Module: software 00:10:54.126 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:10:54.126 Queue depth: 32 00:10:54.126 Allocate depth: 32 00:10:54.126 # threads/core: 1 00:10:54.126 Run time: 1 seconds 00:10:54.126 Verify: Yes 00:10:54.126 00:10:54.126 Running for 1 seconds... 00:10:54.126 00:10:54.126 Core,Thread Transfers Bandwidth Failed Miscompares 00:10:54.126 ------------------------------------------------------------------------------------ 00:10:54.126 0,0 64832/s 253 MiB/s 0 0 00:10:54.126 ==================================================================================== 00:10:54.126 Total 64832/s 253 MiB/s 0 0' 00:10:54.126 21:33:14 -- accel/accel.sh@20 -- # IFS=: 00:10:54.126 21:33:14 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:10:54.126 21:33:14 -- accel/accel.sh@20 -- # read -r var val 00:10:54.126 21:33:14 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:10:54.126 21:33:14 -- accel/accel.sh@12 -- # build_accel_config 00:10:54.126 21:33:14 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:54.126 21:33:14 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:54.126 21:33:14 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:54.126 21:33:14 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:54.126 21:33:14 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:54.126 21:33:14 -- accel/accel.sh@41 -- # local IFS=, 00:10:54.126 21:33:14 -- accel/accel.sh@42 -- # jq -r . 00:10:54.126 [2024-12-06 21:33:14.508046] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
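Note: the Bandwidth column in these tables is fully determined by the other two numbers — transfers per second times the transfer size. For the Total row just above: 64832/s x 4096 bytes = 265,551,872 B/s, and 265,551,872 / 1,048,576 ≈ 253 MiB/s, exactly as printed; the same arithmetic applies to every per-core row.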
00:10:54.126 [2024-12-06 21:33:14.508258] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64621 ] 00:10:54.385 [2024-12-06 21:33:14.678111] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:54.385 [2024-12-06 21:33:14.843684] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:54.643 21:33:14 -- accel/accel.sh@21 -- # val= 00:10:54.643 21:33:14 -- accel/accel.sh@22 -- # case "$var" in 00:10:54.643 21:33:14 -- accel/accel.sh@20 -- # IFS=: 00:10:54.643 21:33:14 -- accel/accel.sh@20 -- # read -r var val 00:10:54.643 21:33:14 -- accel/accel.sh@21 -- # val= 00:10:54.643 21:33:14 -- accel/accel.sh@22 -- # case "$var" in 00:10:54.643 21:33:14 -- accel/accel.sh@20 -- # IFS=: 00:10:54.643 21:33:14 -- accel/accel.sh@20 -- # read -r var val 00:10:54.643 21:33:14 -- accel/accel.sh@21 -- # val= 00:10:54.643 21:33:14 -- accel/accel.sh@22 -- # case "$var" in 00:10:54.643 21:33:14 -- accel/accel.sh@20 -- # IFS=: 00:10:54.643 21:33:15 -- accel/accel.sh@20 -- # read -r var val 00:10:54.643 21:33:15 -- accel/accel.sh@21 -- # val=0x1 00:10:54.643 21:33:15 -- accel/accel.sh@22 -- # case "$var" in 00:10:54.643 21:33:15 -- accel/accel.sh@20 -- # IFS=: 00:10:54.643 21:33:15 -- accel/accel.sh@20 -- # read -r var val 00:10:54.644 21:33:15 -- accel/accel.sh@21 -- # val= 00:10:54.644 21:33:15 -- accel/accel.sh@22 -- # case "$var" in 00:10:54.644 21:33:15 -- accel/accel.sh@20 -- # IFS=: 00:10:54.644 21:33:15 -- accel/accel.sh@20 -- # read -r var val 00:10:54.644 21:33:15 -- accel/accel.sh@21 -- # val= 00:10:54.644 21:33:15 -- accel/accel.sh@22 -- # case "$var" in 00:10:54.644 21:33:15 -- accel/accel.sh@20 -- # IFS=: 00:10:54.644 21:33:15 -- accel/accel.sh@20 -- # read -r var val 00:10:54.644 21:33:15 -- accel/accel.sh@21 -- # val=decompress 00:10:54.644 21:33:15 -- accel/accel.sh@22 -- # case "$var" in 00:10:54.644 21:33:15 -- accel/accel.sh@24 -- # accel_opc=decompress 00:10:54.644 21:33:15 -- accel/accel.sh@20 -- # IFS=: 00:10:54.644 21:33:15 -- accel/accel.sh@20 -- # read -r var val 00:10:54.644 21:33:15 -- accel/accel.sh@21 -- # val='4096 bytes' 00:10:54.644 21:33:15 -- accel/accel.sh@22 -- # case "$var" in 00:10:54.644 21:33:15 -- accel/accel.sh@20 -- # IFS=: 00:10:54.644 21:33:15 -- accel/accel.sh@20 -- # read -r var val 00:10:54.644 21:33:15 -- accel/accel.sh@21 -- # val= 00:10:54.644 21:33:15 -- accel/accel.sh@22 -- # case "$var" in 00:10:54.644 21:33:15 -- accel/accel.sh@20 -- # IFS=: 00:10:54.644 21:33:15 -- accel/accel.sh@20 -- # read -r var val 00:10:54.644 21:33:15 -- accel/accel.sh@21 -- # val=software 00:10:54.644 21:33:15 -- accel/accel.sh@22 -- # case "$var" in 00:10:54.644 21:33:15 -- accel/accel.sh@23 -- # accel_module=software 00:10:54.644 21:33:15 -- accel/accel.sh@20 -- # IFS=: 00:10:54.644 21:33:15 -- accel/accel.sh@20 -- # read -r var val 00:10:54.644 21:33:15 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:10:54.644 21:33:15 -- accel/accel.sh@22 -- # case "$var" in 00:10:54.644 21:33:15 -- accel/accel.sh@20 -- # IFS=: 00:10:54.644 21:33:15 -- accel/accel.sh@20 -- # read -r var val 00:10:54.644 21:33:15 -- accel/accel.sh@21 -- # val=32 00:10:54.644 21:33:15 -- accel/accel.sh@22 -- # case "$var" in 00:10:54.644 21:33:15 -- accel/accel.sh@20 -- # IFS=: 00:10:54.644 21:33:15 -- accel/accel.sh@20 -- # read -r var val 00:10:54.644 21:33:15 -- 
accel/accel.sh@21 -- # val=32 00:10:54.644 21:33:15 -- accel/accel.sh@22 -- # case "$var" in 00:10:54.644 21:33:15 -- accel/accel.sh@20 -- # IFS=: 00:10:54.644 21:33:15 -- accel/accel.sh@20 -- # read -r var val 00:10:54.644 21:33:15 -- accel/accel.sh@21 -- # val=1 00:10:54.644 21:33:15 -- accel/accel.sh@22 -- # case "$var" in 00:10:54.644 21:33:15 -- accel/accel.sh@20 -- # IFS=: 00:10:54.644 21:33:15 -- accel/accel.sh@20 -- # read -r var val 00:10:54.644 21:33:15 -- accel/accel.sh@21 -- # val='1 seconds' 00:10:54.644 21:33:15 -- accel/accel.sh@22 -- # case "$var" in 00:10:54.644 21:33:15 -- accel/accel.sh@20 -- # IFS=: 00:10:54.644 21:33:15 -- accel/accel.sh@20 -- # read -r var val 00:10:54.644 21:33:15 -- accel/accel.sh@21 -- # val=Yes 00:10:54.644 21:33:15 -- accel/accel.sh@22 -- # case "$var" in 00:10:54.644 21:33:15 -- accel/accel.sh@20 -- # IFS=: 00:10:54.644 21:33:15 -- accel/accel.sh@20 -- # read -r var val 00:10:54.644 21:33:15 -- accel/accel.sh@21 -- # val= 00:10:54.644 21:33:15 -- accel/accel.sh@22 -- # case "$var" in 00:10:54.644 21:33:15 -- accel/accel.sh@20 -- # IFS=: 00:10:54.644 21:33:15 -- accel/accel.sh@20 -- # read -r var val 00:10:54.644 21:33:15 -- accel/accel.sh@21 -- # val= 00:10:54.644 21:33:15 -- accel/accel.sh@22 -- # case "$var" in 00:10:54.644 21:33:15 -- accel/accel.sh@20 -- # IFS=: 00:10:54.644 21:33:15 -- accel/accel.sh@20 -- # read -r var val 00:10:56.547 21:33:16 -- accel/accel.sh@21 -- # val= 00:10:56.547 21:33:16 -- accel/accel.sh@22 -- # case "$var" in 00:10:56.547 21:33:16 -- accel/accel.sh@20 -- # IFS=: 00:10:56.547 21:33:16 -- accel/accel.sh@20 -- # read -r var val 00:10:56.547 21:33:16 -- accel/accel.sh@21 -- # val= 00:10:56.547 21:33:16 -- accel/accel.sh@22 -- # case "$var" in 00:10:56.547 21:33:16 -- accel/accel.sh@20 -- # IFS=: 00:10:56.547 21:33:16 -- accel/accel.sh@20 -- # read -r var val 00:10:56.547 21:33:16 -- accel/accel.sh@21 -- # val= 00:10:56.547 21:33:16 -- accel/accel.sh@22 -- # case "$var" in 00:10:56.547 21:33:16 -- accel/accel.sh@20 -- # IFS=: 00:10:56.547 21:33:16 -- accel/accel.sh@20 -- # read -r var val 00:10:56.547 21:33:16 -- accel/accel.sh@21 -- # val= 00:10:56.547 21:33:16 -- accel/accel.sh@22 -- # case "$var" in 00:10:56.547 21:33:16 -- accel/accel.sh@20 -- # IFS=: 00:10:56.547 21:33:16 -- accel/accel.sh@20 -- # read -r var val 00:10:56.547 21:33:16 -- accel/accel.sh@21 -- # val= 00:10:56.547 21:33:16 -- accel/accel.sh@22 -- # case "$var" in 00:10:56.547 21:33:16 -- accel/accel.sh@20 -- # IFS=: 00:10:56.547 21:33:16 -- accel/accel.sh@20 -- # read -r var val 00:10:56.547 21:33:16 -- accel/accel.sh@21 -- # val= 00:10:56.547 21:33:16 -- accel/accel.sh@22 -- # case "$var" in 00:10:56.547 21:33:16 -- accel/accel.sh@20 -- # IFS=: 00:10:56.547 21:33:16 -- accel/accel.sh@20 -- # read -r var val 00:10:56.547 21:33:16 -- accel/accel.sh@28 -- # [[ -n software ]] 00:10:56.547 21:33:16 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:10:56.547 21:33:16 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:10:56.547 00:10:56.547 real 0m4.645s 00:10:56.547 user 0m4.147s 00:10:56.547 sys 0m0.314s 00:10:56.547 21:33:16 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:10:56.547 ************************************ 00:10:56.547 END TEST accel_decomp 00:10:56.547 ************************************ 00:10:56.547 21:33:16 -- common/autotest_common.sh@10 -- # set +x 00:10:56.547 21:33:16 -- accel/accel.sh@110 -- # run_test accel_decmop_full accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 
00:10:56.547 21:33:16 -- common/autotest_common.sh@1087 -- # '[' 11 -le 1 ']' 00:10:56.547 21:33:16 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:56.547 21:33:16 -- common/autotest_common.sh@10 -- # set +x 00:10:56.547 ************************************ 00:10:56.547 START TEST accel_decmop_full 00:10:56.547 ************************************ 00:10:56.547 21:33:16 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:10:56.547 21:33:16 -- accel/accel.sh@16 -- # local accel_opc 00:10:56.547 21:33:16 -- accel/accel.sh@17 -- # local accel_module 00:10:56.547 21:33:16 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:10:56.547 21:33:16 -- accel/accel.sh@12 -- # build_accel_config 00:10:56.547 21:33:16 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:10:56.547 21:33:16 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:56.547 21:33:16 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:56.547 21:33:16 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:56.547 21:33:16 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:56.547 21:33:16 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:56.547 21:33:16 -- accel/accel.sh@41 -- # local IFS=, 00:10:56.547 21:33:16 -- accel/accel.sh@42 -- # jq -r . 00:10:56.547 [2024-12-06 21:33:16.867541] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:10:56.547 [2024-12-06 21:33:16.867701] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64666 ] 00:10:56.547 [2024-12-06 21:33:17.038654] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:56.807 [2024-12-06 21:33:17.205830] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:58.709 21:33:19 -- accel/accel.sh@18 -- # out='Preparing input file... 00:10:58.709 00:10:58.709 SPDK Configuration: 00:10:58.709 Core mask: 0x1 00:10:58.709 00:10:58.709 Accel Perf Configuration: 00:10:58.709 Workload Type: decompress 00:10:58.709 Transfer size: 111250 bytes 00:10:58.709 Vector count 1 00:10:58.709 Module: software 00:10:58.709 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:10:58.709 Queue depth: 32 00:10:58.709 Allocate depth: 32 00:10:58.709 # threads/core: 1 00:10:58.709 Run time: 1 seconds 00:10:58.709 Verify: Yes 00:10:58.709 00:10:58.709 Running for 1 seconds... 
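Note: the only flag separating this case from plain accel_decomp is -o 0 (the -y verify flag is common to both). Judging from the resulting 'Transfer size: 111250 bytes' in the config dump, -o 0 makes accel_perf size each transfer from the decompressed input rather than using the 4096-byte default — an inference from this log, not a documented guarantee. Reproduction sketch under that reading:
  ./build/examples/accel_perf -t 1 -w decompress -l test/accel/bib -y -o 0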
00:10:58.709 00:10:58.709 Core,Thread Transfers Bandwidth Failed Miscompares 00:10:58.709 ------------------------------------------------------------------------------------ 00:10:58.709 0,0 4768/s 505 MiB/s 0 0 00:10:58.709 ==================================================================================== 00:10:58.709 Total 4768/s 505 MiB/s 0 0' 00:10:58.709 21:33:19 -- accel/accel.sh@20 -- # IFS=: 00:10:58.709 21:33:19 -- accel/accel.sh@20 -- # read -r var val 00:10:58.709 21:33:19 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:10:58.709 21:33:19 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:10:58.709 21:33:19 -- accel/accel.sh@12 -- # build_accel_config 00:10:58.709 21:33:19 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:58.709 21:33:19 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:58.709 21:33:19 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:58.709 21:33:19 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:58.709 21:33:19 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:58.709 21:33:19 -- accel/accel.sh@41 -- # local IFS=, 00:10:58.709 21:33:19 -- accel/accel.sh@42 -- # jq -r . 00:10:58.967 [2024-12-06 21:33:19.213933] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:10:58.968 [2024-12-06 21:33:19.214096] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64693 ] 00:10:58.968 [2024-12-06 21:33:19.382570] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:59.226 [2024-12-06 21:33:19.547635] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:59.226 21:33:19 -- accel/accel.sh@21 -- # val= 00:10:59.226 21:33:19 -- accel/accel.sh@22 -- # case "$var" in 00:10:59.226 21:33:19 -- accel/accel.sh@20 -- # IFS=: 00:10:59.226 21:33:19 -- accel/accel.sh@20 -- # read -r var val 00:10:59.226 21:33:19 -- accel/accel.sh@21 -- # val= 00:10:59.226 21:33:19 -- accel/accel.sh@22 -- # case "$var" in 00:10:59.226 21:33:19 -- accel/accel.sh@20 -- # IFS=: 00:10:59.226 21:33:19 -- accel/accel.sh@20 -- # read -r var val 00:10:59.226 21:33:19 -- accel/accel.sh@21 -- # val= 00:10:59.226 21:33:19 -- accel/accel.sh@22 -- # case "$var" in 00:10:59.226 21:33:19 -- accel/accel.sh@20 -- # IFS=: 00:10:59.226 21:33:19 -- accel/accel.sh@20 -- # read -r var val 00:10:59.226 21:33:19 -- accel/accel.sh@21 -- # val=0x1 00:10:59.226 21:33:19 -- accel/accel.sh@22 -- # case "$var" in 00:10:59.226 21:33:19 -- accel/accel.sh@20 -- # IFS=: 00:10:59.226 21:33:19 -- accel/accel.sh@20 -- # read -r var val 00:10:59.226 21:33:19 -- accel/accel.sh@21 -- # val= 00:10:59.226 21:33:19 -- accel/accel.sh@22 -- # case "$var" in 00:10:59.226 21:33:19 -- accel/accel.sh@20 -- # IFS=: 00:10:59.226 21:33:19 -- accel/accel.sh@20 -- # read -r var val 00:10:59.226 21:33:19 -- accel/accel.sh@21 -- # val= 00:10:59.226 21:33:19 -- accel/accel.sh@22 -- # case "$var" in 00:10:59.226 21:33:19 -- accel/accel.sh@20 -- # IFS=: 00:10:59.226 21:33:19 -- accel/accel.sh@20 -- # read -r var val 00:10:59.226 21:33:19 -- accel/accel.sh@21 -- # val=decompress 00:10:59.226 21:33:19 -- accel/accel.sh@22 -- # case "$var" in 00:10:59.226 21:33:19 -- accel/accel.sh@24 -- # accel_opc=decompress 00:10:59.226 21:33:19 -- accel/accel.sh@20
-- # IFS=: 00:10:59.226 21:33:19 -- accel/accel.sh@20 -- # read -r var val 00:10:59.226 21:33:19 -- accel/accel.sh@21 -- # val='111250 bytes' 00:10:59.226 21:33:19 -- accel/accel.sh@22 -- # case "$var" in 00:10:59.226 21:33:19 -- accel/accel.sh@20 -- # IFS=: 00:10:59.226 21:33:19 -- accel/accel.sh@20 -- # read -r var val 00:10:59.226 21:33:19 -- accel/accel.sh@21 -- # val= 00:10:59.226 21:33:19 -- accel/accel.sh@22 -- # case "$var" in 00:10:59.226 21:33:19 -- accel/accel.sh@20 -- # IFS=: 00:10:59.226 21:33:19 -- accel/accel.sh@20 -- # read -r var val 00:10:59.226 21:33:19 -- accel/accel.sh@21 -- # val=software 00:10:59.226 21:33:19 -- accel/accel.sh@22 -- # case "$var" in 00:10:59.226 21:33:19 -- accel/accel.sh@23 -- # accel_module=software 00:10:59.226 21:33:19 -- accel/accel.sh@20 -- # IFS=: 00:10:59.226 21:33:19 -- accel/accel.sh@20 -- # read -r var val 00:10:59.226 21:33:19 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:10:59.226 21:33:19 -- accel/accel.sh@22 -- # case "$var" in 00:10:59.226 21:33:19 -- accel/accel.sh@20 -- # IFS=: 00:10:59.226 21:33:19 -- accel/accel.sh@20 -- # read -r var val 00:10:59.226 21:33:19 -- accel/accel.sh@21 -- # val=32 00:10:59.226 21:33:19 -- accel/accel.sh@22 -- # case "$var" in 00:10:59.226 21:33:19 -- accel/accel.sh@20 -- # IFS=: 00:10:59.226 21:33:19 -- accel/accel.sh@20 -- # read -r var val 00:10:59.226 21:33:19 -- accel/accel.sh@21 -- # val=32 00:10:59.226 21:33:19 -- accel/accel.sh@22 -- # case "$var" in 00:10:59.226 21:33:19 -- accel/accel.sh@20 -- # IFS=: 00:10:59.226 21:33:19 -- accel/accel.sh@20 -- # read -r var val 00:10:59.226 21:33:19 -- accel/accel.sh@21 -- # val=1 00:10:59.226 21:33:19 -- accel/accel.sh@22 -- # case "$var" in 00:10:59.227 21:33:19 -- accel/accel.sh@20 -- # IFS=: 00:10:59.227 21:33:19 -- accel/accel.sh@20 -- # read -r var val 00:10:59.227 21:33:19 -- accel/accel.sh@21 -- # val='1 seconds' 00:10:59.227 21:33:19 -- accel/accel.sh@22 -- # case "$var" in 00:10:59.227 21:33:19 -- accel/accel.sh@20 -- # IFS=: 00:10:59.227 21:33:19 -- accel/accel.sh@20 -- # read -r var val 00:10:59.227 21:33:19 -- accel/accel.sh@21 -- # val=Yes 00:10:59.227 21:33:19 -- accel/accel.sh@22 -- # case "$var" in 00:10:59.227 21:33:19 -- accel/accel.sh@20 -- # IFS=: 00:10:59.227 21:33:19 -- accel/accel.sh@20 -- # read -r var val 00:10:59.227 21:33:19 -- accel/accel.sh@21 -- # val= 00:10:59.227 21:33:19 -- accel/accel.sh@22 -- # case "$var" in 00:10:59.227 21:33:19 -- accel/accel.sh@20 -- # IFS=: 00:10:59.227 21:33:19 -- accel/accel.sh@20 -- # read -r var val 00:10:59.227 21:33:19 -- accel/accel.sh@21 -- # val= 00:10:59.227 21:33:19 -- accel/accel.sh@22 -- # case "$var" in 00:10:59.227 21:33:19 -- accel/accel.sh@20 -- # IFS=: 00:10:59.227 21:33:19 -- accel/accel.sh@20 -- # read -r var val 00:11:01.126 21:33:21 -- accel/accel.sh@21 -- # val= 00:11:01.126 21:33:21 -- accel/accel.sh@22 -- # case "$var" in 00:11:01.126 21:33:21 -- accel/accel.sh@20 -- # IFS=: 00:11:01.126 21:33:21 -- accel/accel.sh@20 -- # read -r var val 00:11:01.126 21:33:21 -- accel/accel.sh@21 -- # val= 00:11:01.126 21:33:21 -- accel/accel.sh@22 -- # case "$var" in 00:11:01.126 21:33:21 -- accel/accel.sh@20 -- # IFS=: 00:11:01.126 21:33:21 -- accel/accel.sh@20 -- # read -r var val 00:11:01.126 21:33:21 -- accel/accel.sh@21 -- # val= 00:11:01.126 21:33:21 -- accel/accel.sh@22 -- # case "$var" in 00:11:01.126 21:33:21 -- accel/accel.sh@20 -- # IFS=: 00:11:01.126 21:33:21 -- accel/accel.sh@20 -- # read -r var val 00:11:01.126 21:33:21 -- accel/accel.sh@21 -- # 
val= 00:11:01.126 21:33:21 -- accel/accel.sh@22 -- # case "$var" in 00:11:01.126 21:33:21 -- accel/accel.sh@20 -- # IFS=: 00:11:01.126 21:33:21 -- accel/accel.sh@20 -- # read -r var val 00:11:01.126 21:33:21 -- accel/accel.sh@21 -- # val= 00:11:01.126 21:33:21 -- accel/accel.sh@22 -- # case "$var" in 00:11:01.126 21:33:21 -- accel/accel.sh@20 -- # IFS=: 00:11:01.126 21:33:21 -- accel/accel.sh@20 -- # read -r var val 00:11:01.126 21:33:21 -- accel/accel.sh@21 -- # val= 00:11:01.126 21:33:21 -- accel/accel.sh@22 -- # case "$var" in 00:11:01.126 21:33:21 -- accel/accel.sh@20 -- # IFS=: 00:11:01.126 21:33:21 -- accel/accel.sh@20 -- # read -r var val 00:11:01.126 21:33:21 -- accel/accel.sh@28 -- # [[ -n software ]] 00:11:01.126 21:33:21 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:11:01.126 21:33:21 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:11:01.126 00:11:01.126 real 0m4.679s 00:11:01.126 user 0m4.154s 00:11:01.126 sys 0m0.342s 00:11:01.126 21:33:21 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:11:01.126 21:33:21 -- common/autotest_common.sh@10 -- # set +x 00:11:01.126 ************************************ 00:11:01.126 END TEST accel_decmop_full 00:11:01.127 ************************************ 00:11:01.127 21:33:21 -- accel/accel.sh@111 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:11:01.127 21:33:21 -- common/autotest_common.sh@1087 -- # '[' 11 -le 1 ']' 00:11:01.127 21:33:21 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:01.127 21:33:21 -- common/autotest_common.sh@10 -- # set +x 00:11:01.127 ************************************ 00:11:01.127 START TEST accel_decomp_mcore 00:11:01.127 ************************************ 00:11:01.127 21:33:21 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:11:01.127 21:33:21 -- accel/accel.sh@16 -- # local accel_opc 00:11:01.127 21:33:21 -- accel/accel.sh@17 -- # local accel_module 00:11:01.127 21:33:21 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:11:01.127 21:33:21 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:11:01.127 21:33:21 -- accel/accel.sh@12 -- # build_accel_config 00:11:01.127 21:33:21 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:11:01.127 21:33:21 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:11:01.127 21:33:21 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:11:01.127 21:33:21 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:11:01.127 21:33:21 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:11:01.127 21:33:21 -- accel/accel.sh@41 -- # local IFS=, 00:11:01.127 21:33:21 -- accel/accel.sh@42 -- # jq -r . 00:11:01.127 [2024-12-06 21:33:21.596468] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:11:01.127 [2024-12-06 21:33:21.596635] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64740 ] 00:11:01.423 [2024-12-06 21:33:21.764219] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:01.684 [2024-12-06 21:33:21.935536] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:01.684 [2024-12-06 21:33:21.935751] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:11:01.684 [2024-12-06 21:33:21.936101] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:01.684 [2024-12-06 21:33:21.936109] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:11:03.613 21:33:23 -- accel/accel.sh@18 -- # out='Preparing input file... 00:11:03.613 00:11:03.613 SPDK Configuration: 00:11:03.613 Core mask: 0xf 00:11:03.613 00:11:03.613 Accel Perf Configuration: 00:11:03.613 Workload Type: decompress 00:11:03.613 Transfer size: 4096 bytes 00:11:03.613 Vector count 1 00:11:03.613 Module: software 00:11:03.613 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:11:03.613 Queue depth: 32 00:11:03.613 Allocate depth: 32 00:11:03.613 # threads/core: 1 00:11:03.613 Run time: 1 seconds 00:11:03.613 Verify: Yes 00:11:03.613 00:11:03.613 Running for 1 seconds... 00:11:03.613 00:11:03.613 Core,Thread Transfers Bandwidth Failed Miscompares 00:11:03.613 ------------------------------------------------------------------------------------ 00:11:03.613 0,0 55168/s 215 MiB/s 0 0 00:11:03.613 3,0 53376/s 208 MiB/s 0 0 00:11:03.613 2,0 54592/s 213 MiB/s 0 0 00:11:03.613 1,0 54912/s 214 MiB/s 0 0 00:11:03.613 ==================================================================================== 00:11:03.613 Total 218048/s 851 MiB/s 0 0' 00:11:03.613 21:33:23 -- accel/accel.sh@20 -- # IFS=: 00:11:03.613 21:33:23 -- accel/accel.sh@20 -- # read -r var val 00:11:03.613 21:33:23 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:11:03.613 21:33:23 -- accel/accel.sh@12 -- # build_accel_config 00:11:03.613 21:33:23 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:11:03.613 21:33:23 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:11:03.613 21:33:23 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:11:03.613 21:33:23 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:11:03.613 21:33:23 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:11:03.613 21:33:23 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:11:03.613 21:33:23 -- accel/accel.sh@41 -- # local IFS=, 00:11:03.613 21:33:23 -- accel/accel.sh@42 -- # jq -r . 00:11:03.613 [2024-12-06 21:33:23.980653] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
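Note: -m 0xf is the standard SPDK core-mask option. Four bits set means EAL reports 'Total cores available: 4', one reactor starts on each of cores 0-3, and the results table gains one Core,Thread row per reactor, summed in the Total row. Sketch of the multi-core variant of the same run:
  # same decompress workload spread across four cores
  ./build/examples/accel_perf -t 1 -w decompress -l test/accel/bib -y -m 0xf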
00:11:03.613 [2024-12-06 21:33:23.980831] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64769 ] 00:11:03.871 [2024-12-06 21:33:24.150877] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:03.871 [2024-12-06 21:33:24.314656] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:03.871 [2024-12-06 21:33:24.314797] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:11:03.871 [2024-12-06 21:33:24.315094] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:11:03.871 [2024-12-06 21:33:24.315095] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:04.129 21:33:24 -- accel/accel.sh@21 -- # val= 00:11:04.129 21:33:24 -- accel/accel.sh@22 -- # case "$var" in 00:11:04.129 21:33:24 -- accel/accel.sh@20 -- # IFS=: 00:11:04.129 21:33:24 -- accel/accel.sh@20 -- # read -r var val 00:11:04.129 21:33:24 -- accel/accel.sh@21 -- # val= 00:11:04.129 21:33:24 -- accel/accel.sh@22 -- # case "$var" in 00:11:04.129 21:33:24 -- accel/accel.sh@20 -- # IFS=: 00:11:04.129 21:33:24 -- accel/accel.sh@20 -- # read -r var val 00:11:04.129 21:33:24 -- accel/accel.sh@21 -- # val= 00:11:04.129 21:33:24 -- accel/accel.sh@22 -- # case "$var" in 00:11:04.129 21:33:24 -- accel/accel.sh@20 -- # IFS=: 00:11:04.129 21:33:24 -- accel/accel.sh@20 -- # read -r var val 00:11:04.129 21:33:24 -- accel/accel.sh@21 -- # val=0xf 00:11:04.129 21:33:24 -- accel/accel.sh@22 -- # case "$var" in 00:11:04.129 21:33:24 -- accel/accel.sh@20 -- # IFS=: 00:11:04.129 21:33:24 -- accel/accel.sh@20 -- # read -r var val 00:11:04.129 21:33:24 -- accel/accel.sh@21 -- # val= 00:11:04.129 21:33:24 -- accel/accel.sh@22 -- # case "$var" in 00:11:04.129 21:33:24 -- accel/accel.sh@20 -- # IFS=: 00:11:04.129 21:33:24 -- accel/accel.sh@20 -- # read -r var val 00:11:04.129 21:33:24 -- accel/accel.sh@21 -- # val= 00:11:04.129 21:33:24 -- accel/accel.sh@22 -- # case "$var" in 00:11:04.129 21:33:24 -- accel/accel.sh@20 -- # IFS=: 00:11:04.129 21:33:24 -- accel/accel.sh@20 -- # read -r var val 00:11:04.129 21:33:24 -- accel/accel.sh@21 -- # val=decompress 00:11:04.129 21:33:24 -- accel/accel.sh@22 -- # case "$var" in 00:11:04.129 21:33:24 -- accel/accel.sh@24 -- # accel_opc=decompress 00:11:04.129 21:33:24 -- accel/accel.sh@20 -- # IFS=: 00:11:04.129 21:33:24 -- accel/accel.sh@20 -- # read -r var val 00:11:04.129 21:33:24 -- accel/accel.sh@21 -- # val='4096 bytes' 00:11:04.129 21:33:24 -- accel/accel.sh@22 -- # case "$var" in 00:11:04.129 21:33:24 -- accel/accel.sh@20 -- # IFS=: 00:11:04.129 21:33:24 -- accel/accel.sh@20 -- # read -r var val 00:11:04.129 21:33:24 -- accel/accel.sh@21 -- # val= 00:11:04.129 21:33:24 -- accel/accel.sh@22 -- # case "$var" in 00:11:04.129 21:33:24 -- accel/accel.sh@20 -- # IFS=: 00:11:04.129 21:33:24 -- accel/accel.sh@20 -- # read -r var val 00:11:04.129 21:33:24 -- accel/accel.sh@21 -- # val=software 00:11:04.129 21:33:24 -- accel/accel.sh@22 -- # case "$var" in 00:11:04.129 21:33:24 -- accel/accel.sh@23 -- # accel_module=software 00:11:04.129 21:33:24 -- accel/accel.sh@20 -- # IFS=: 00:11:04.129 21:33:24 -- accel/accel.sh@20 -- # read -r var val 00:11:04.129 21:33:24 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:11:04.129 21:33:24 -- accel/accel.sh@22 -- # case "$var" in 00:11:04.129 21:33:24 -- accel/accel.sh@20 -- # IFS=: 
00:11:04.129 21:33:24 -- accel/accel.sh@20 -- # read -r var val 00:11:04.129 21:33:24 -- accel/accel.sh@21 -- # val=32 00:11:04.129 21:33:24 -- accel/accel.sh@22 -- # case "$var" in 00:11:04.129 21:33:24 -- accel/accel.sh@20 -- # IFS=: 00:11:04.129 21:33:24 -- accel/accel.sh@20 -- # read -r var val 00:11:04.129 21:33:24 -- accel/accel.sh@21 -- # val=32 00:11:04.129 21:33:24 -- accel/accel.sh@22 -- # case "$var" in 00:11:04.129 21:33:24 -- accel/accel.sh@20 -- # IFS=: 00:11:04.129 21:33:24 -- accel/accel.sh@20 -- # read -r var val 00:11:04.129 21:33:24 -- accel/accel.sh@21 -- # val=1 00:11:04.129 21:33:24 -- accel/accel.sh@22 -- # case "$var" in 00:11:04.129 21:33:24 -- accel/accel.sh@20 -- # IFS=: 00:11:04.129 21:33:24 -- accel/accel.sh@20 -- # read -r var val 00:11:04.129 21:33:24 -- accel/accel.sh@21 -- # val='1 seconds' 00:11:04.129 21:33:24 -- accel/accel.sh@22 -- # case "$var" in 00:11:04.129 21:33:24 -- accel/accel.sh@20 -- # IFS=: 00:11:04.129 21:33:24 -- accel/accel.sh@20 -- # read -r var val 00:11:04.129 21:33:24 -- accel/accel.sh@21 -- # val=Yes 00:11:04.129 21:33:24 -- accel/accel.sh@22 -- # case "$var" in 00:11:04.129 21:33:24 -- accel/accel.sh@20 -- # IFS=: 00:11:04.129 21:33:24 -- accel/accel.sh@20 -- # read -r var val 00:11:04.129 21:33:24 -- accel/accel.sh@21 -- # val= 00:11:04.129 21:33:24 -- accel/accel.sh@22 -- # case "$var" in 00:11:04.129 21:33:24 -- accel/accel.sh@20 -- # IFS=: 00:11:04.129 21:33:24 -- accel/accel.sh@20 -- # read -r var val 00:11:04.129 21:33:24 -- accel/accel.sh@21 -- # val= 00:11:04.129 21:33:24 -- accel/accel.sh@22 -- # case "$var" in 00:11:04.129 21:33:24 -- accel/accel.sh@20 -- # IFS=: 00:11:04.129 21:33:24 -- accel/accel.sh@20 -- # read -r var val 00:11:06.023 21:33:26 -- accel/accel.sh@21 -- # val= 00:11:06.023 21:33:26 -- accel/accel.sh@22 -- # case "$var" in 00:11:06.023 21:33:26 -- accel/accel.sh@20 -- # IFS=: 00:11:06.023 21:33:26 -- accel/accel.sh@20 -- # read -r var val 00:11:06.023 21:33:26 -- accel/accel.sh@21 -- # val= 00:11:06.023 21:33:26 -- accel/accel.sh@22 -- # case "$var" in 00:11:06.023 21:33:26 -- accel/accel.sh@20 -- # IFS=: 00:11:06.023 21:33:26 -- accel/accel.sh@20 -- # read -r var val 00:11:06.023 21:33:26 -- accel/accel.sh@21 -- # val= 00:11:06.023 21:33:26 -- accel/accel.sh@22 -- # case "$var" in 00:11:06.023 21:33:26 -- accel/accel.sh@20 -- # IFS=: 00:11:06.023 21:33:26 -- accel/accel.sh@20 -- # read -r var val 00:11:06.023 21:33:26 -- accel/accel.sh@21 -- # val= 00:11:06.023 21:33:26 -- accel/accel.sh@22 -- # case "$var" in 00:11:06.023 21:33:26 -- accel/accel.sh@20 -- # IFS=: 00:11:06.023 21:33:26 -- accel/accel.sh@20 -- # read -r var val 00:11:06.023 21:33:26 -- accel/accel.sh@21 -- # val= 00:11:06.023 21:33:26 -- accel/accel.sh@22 -- # case "$var" in 00:11:06.023 21:33:26 -- accel/accel.sh@20 -- # IFS=: 00:11:06.023 21:33:26 -- accel/accel.sh@20 -- # read -r var val 00:11:06.023 21:33:26 -- accel/accel.sh@21 -- # val= 00:11:06.023 21:33:26 -- accel/accel.sh@22 -- # case "$var" in 00:11:06.023 21:33:26 -- accel/accel.sh@20 -- # IFS=: 00:11:06.023 21:33:26 -- accel/accel.sh@20 -- # read -r var val 00:11:06.023 21:33:26 -- accel/accel.sh@21 -- # val= 00:11:06.023 21:33:26 -- accel/accel.sh@22 -- # case "$var" in 00:11:06.023 21:33:26 -- accel/accel.sh@20 -- # IFS=: 00:11:06.023 21:33:26 -- accel/accel.sh@20 -- # read -r var val 00:11:06.023 21:33:26 -- accel/accel.sh@21 -- # val= 00:11:06.023 21:33:26 -- accel/accel.sh@22 -- # case "$var" in 00:11:06.023 21:33:26 -- accel/accel.sh@20 -- # IFS=: 00:11:06.023 21:33:26 -- 
accel/accel.sh@20 -- # read -r var val 00:11:06.023 21:33:26 -- accel/accel.sh@21 -- # val= 00:11:06.023 21:33:26 -- accel/accel.sh@22 -- # case "$var" in 00:11:06.023 21:33:26 -- accel/accel.sh@20 -- # IFS=: 00:11:06.023 21:33:26 -- accel/accel.sh@20 -- # read -r var val 00:11:06.023 21:33:26 -- accel/accel.sh@28 -- # [[ -n software ]] 00:11:06.023 21:33:26 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:11:06.023 21:33:26 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:11:06.023 00:11:06.023 real 0m4.765s 00:11:06.023 user 0m14.024s 00:11:06.023 sys 0m0.393s 00:11:06.023 21:33:26 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:11:06.023 ************************************ 00:11:06.023 END TEST accel_decomp_mcore 00:11:06.023 ************************************ 00:11:06.023 21:33:26 -- common/autotest_common.sh@10 -- # set +x 00:11:06.023 21:33:26 -- accel/accel.sh@112 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:11:06.023 21:33:26 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:11:06.023 21:33:26 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:06.023 21:33:26 -- common/autotest_common.sh@10 -- # set +x 00:11:06.023 ************************************ 00:11:06.023 START TEST accel_decomp_full_mcore 00:11:06.023 ************************************ 00:11:06.023 21:33:26 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:11:06.023 21:33:26 -- accel/accel.sh@16 -- # local accel_opc 00:11:06.023 21:33:26 -- accel/accel.sh@17 -- # local accel_module 00:11:06.023 21:33:26 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:11:06.023 21:33:26 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:11:06.023 21:33:26 -- accel/accel.sh@12 -- # build_accel_config 00:11:06.023 21:33:26 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:11:06.023 21:33:26 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:11:06.023 21:33:26 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:11:06.023 21:33:26 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:11:06.023 21:33:26 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:11:06.023 21:33:26 -- accel/accel.sh@41 -- # local IFS=, 00:11:06.023 21:33:26 -- accel/accel.sh@42 -- # jq -r . 00:11:06.023 [2024-12-06 21:33:26.412870] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:11:06.023 [2024-12-06 21:33:26.413036] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64813 ] 00:11:06.281 [2024-12-06 21:33:26.579309] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:06.281 [2024-12-06 21:33:26.758751] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:06.281 [2024-12-06 21:33:26.758914] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:11:06.281 [2024-12-06 21:33:26.758976] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:11:06.281 [2024-12-06 21:33:26.759173] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:08.807 21:33:28 -- accel/accel.sh@18 -- # out='Preparing input file... 
00:11:08.807 00:11:08.807 SPDK Configuration: 00:11:08.807 Core mask: 0xf 00:11:08.807 00:11:08.807 Accel Perf Configuration: 00:11:08.807 Workload Type: decompress 00:11:08.807 Transfer size: 111250 bytes 00:11:08.807 Vector count 1 00:11:08.807 Module: software 00:11:08.807 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:11:08.807 Queue depth: 32 00:11:08.807 Allocate depth: 32 00:11:08.807 # threads/core: 1 00:11:08.807 Run time: 1 seconds 00:11:08.807 Verify: Yes 00:11:08.807 00:11:08.807 Running for 1 seconds... 00:11:08.807 00:11:08.807 Core,Thread Transfers Bandwidth Failed Miscompares 00:11:08.807 ------------------------------------------------------------------------------------ 00:11:08.807 0,0 4480/s 475 MiB/s 0 0 00:11:08.807 3,0 4544/s 482 MiB/s 0 0 00:11:08.807 2,0 4512/s 478 MiB/s 0 0 00:11:08.807 1,0 4576/s 485 MiB/s 0 0 00:11:08.807 ==================================================================================== 00:11:08.807 Total 18112/s 1921 MiB/s 0 0' 00:11:08.807 21:33:28 -- accel/accel.sh@20 -- # IFS=: 00:11:08.807 21:33:28 -- accel/accel.sh@20 -- # read -r var val 00:11:08.807 21:33:28 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:11:08.807 21:33:28 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:11:08.807 21:33:28 -- accel/accel.sh@12 -- # build_accel_config 00:11:08.807 21:33:28 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:11:08.807 21:33:28 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:11:08.807 21:33:28 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:11:08.807 21:33:28 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:11:08.807 21:33:28 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:11:08.807 21:33:28 -- accel/accel.sh@41 -- # local IFS=, 00:11:08.807 21:33:28 -- accel/accel.sh@42 -- # jq -r . 00:11:08.807 [2024-12-06 21:33:28.827638] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
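Note: every case in this section is launched through run_test from test/common/autotest_common.sh, which prints the START/END TEST banners, times the body (the real/user/sys triple after each case), and suppresses xtrace around its own bookkeeping (the xtrace_disable / set +x records). A sketch of wrapping an arbitrary command the same way — my_perf_case is a hypothetical test name, not one from this suite:
  source test/common/autotest_common.sh
  run_test my_perf_case ./build/examples/accel_perf -t 1 -w decompress -l test/accel/bib -y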
00:11:08.807 [2024-12-06 21:33:28.828435] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64848 ] 00:11:08.807 [2024-12-06 21:33:29.000872] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:08.807 [2024-12-06 21:33:29.174867] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:08.807 [2024-12-06 21:33:29.175026] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:11:08.807 [2024-12-06 21:33:29.175133] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:11:08.807 [2024-12-06 21:33:29.175307] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:09.063 21:33:29 -- accel/accel.sh@21 -- # val= 00:11:09.063 21:33:29 -- accel/accel.sh@22 -- # case "$var" in 00:11:09.063 21:33:29 -- accel/accel.sh@20 -- # IFS=: 00:11:09.063 21:33:29 -- accel/accel.sh@20 -- # read -r var val 00:11:09.063 21:33:29 -- accel/accel.sh@21 -- # val= 00:11:09.063 21:33:29 -- accel/accel.sh@22 -- # case "$var" in 00:11:09.063 21:33:29 -- accel/accel.sh@20 -- # IFS=: 00:11:09.063 21:33:29 -- accel/accel.sh@20 -- # read -r var val 00:11:09.063 21:33:29 -- accel/accel.sh@21 -- # val= 00:11:09.063 21:33:29 -- accel/accel.sh@22 -- # case "$var" in 00:11:09.063 21:33:29 -- accel/accel.sh@20 -- # IFS=: 00:11:09.063 21:33:29 -- accel/accel.sh@20 -- # read -r var val 00:11:09.063 21:33:29 -- accel/accel.sh@21 -- # val=0xf 00:11:09.063 21:33:29 -- accel/accel.sh@22 -- # case "$var" in 00:11:09.063 21:33:29 -- accel/accel.sh@20 -- # IFS=: 00:11:09.063 21:33:29 -- accel/accel.sh@20 -- # read -r var val 00:11:09.063 21:33:29 -- accel/accel.sh@21 -- # val= 00:11:09.063 21:33:29 -- accel/accel.sh@22 -- # case "$var" in 00:11:09.063 21:33:29 -- accel/accel.sh@20 -- # IFS=: 00:11:09.063 21:33:29 -- accel/accel.sh@20 -- # read -r var val 00:11:09.063 21:33:29 -- accel/accel.sh@21 -- # val= 00:11:09.063 21:33:29 -- accel/accel.sh@22 -- # case "$var" in 00:11:09.063 21:33:29 -- accel/accel.sh@20 -- # IFS=: 00:11:09.063 21:33:29 -- accel/accel.sh@20 -- # read -r var val 00:11:09.063 21:33:29 -- accel/accel.sh@21 -- # val=decompress 00:11:09.063 21:33:29 -- accel/accel.sh@22 -- # case "$var" in 00:11:09.063 21:33:29 -- accel/accel.sh@24 -- # accel_opc=decompress 00:11:09.063 21:33:29 -- accel/accel.sh@20 -- # IFS=: 00:11:09.063 21:33:29 -- accel/accel.sh@20 -- # read -r var val 00:11:09.064 21:33:29 -- accel/accel.sh@21 -- # val='111250 bytes' 00:11:09.064 21:33:29 -- accel/accel.sh@22 -- # case "$var" in 00:11:09.064 21:33:29 -- accel/accel.sh@20 -- # IFS=: 00:11:09.064 21:33:29 -- accel/accel.sh@20 -- # read -r var val 00:11:09.064 21:33:29 -- accel/accel.sh@21 -- # val= 00:11:09.064 21:33:29 -- accel/accel.sh@22 -- # case "$var" in 00:11:09.064 21:33:29 -- accel/accel.sh@20 -- # IFS=: 00:11:09.064 21:33:29 -- accel/accel.sh@20 -- # read -r var val 00:11:09.064 21:33:29 -- accel/accel.sh@21 -- # val=software 00:11:09.064 21:33:29 -- accel/accel.sh@22 -- # case "$var" in 00:11:09.064 21:33:29 -- accel/accel.sh@23 -- # accel_module=software 00:11:09.064 21:33:29 -- accel/accel.sh@20 -- # IFS=: 00:11:09.064 21:33:29 -- accel/accel.sh@20 -- # read -r var val 00:11:09.064 21:33:29 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:11:09.064 21:33:29 -- accel/accel.sh@22 -- # case "$var" in 00:11:09.064 21:33:29 -- accel/accel.sh@20 -- # IFS=: 
00:11:09.064 21:33:29 -- accel/accel.sh@20 -- # read -r var val 00:11:09.064 21:33:29 -- accel/accel.sh@21 -- # val=32 00:11:09.064 21:33:29 -- accel/accel.sh@22 -- # case "$var" in 00:11:09.064 21:33:29 -- accel/accel.sh@20 -- # IFS=: 00:11:09.064 21:33:29 -- accel/accel.sh@20 -- # read -r var val 00:11:09.064 21:33:29 -- accel/accel.sh@21 -- # val=32 00:11:09.064 21:33:29 -- accel/accel.sh@22 -- # case "$var" in 00:11:09.064 21:33:29 -- accel/accel.sh@20 -- # IFS=: 00:11:09.064 21:33:29 -- accel/accel.sh@20 -- # read -r var val 00:11:09.064 21:33:29 -- accel/accel.sh@21 -- # val=1 00:11:09.064 21:33:29 -- accel/accel.sh@22 -- # case "$var" in 00:11:09.064 21:33:29 -- accel/accel.sh@20 -- # IFS=: 00:11:09.064 21:33:29 -- accel/accel.sh@20 -- # read -r var val 00:11:09.064 21:33:29 -- accel/accel.sh@21 -- # val='1 seconds' 00:11:09.064 21:33:29 -- accel/accel.sh@22 -- # case "$var" in 00:11:09.064 21:33:29 -- accel/accel.sh@20 -- # IFS=: 00:11:09.064 21:33:29 -- accel/accel.sh@20 -- # read -r var val 00:11:09.064 21:33:29 -- accel/accel.sh@21 -- # val=Yes 00:11:09.064 21:33:29 -- accel/accel.sh@22 -- # case "$var" in 00:11:09.064 21:33:29 -- accel/accel.sh@20 -- # IFS=: 00:11:09.064 21:33:29 -- accel/accel.sh@20 -- # read -r var val 00:11:09.064 21:33:29 -- accel/accel.sh@21 -- # val= 00:11:09.064 21:33:29 -- accel/accel.sh@22 -- # case "$var" in 00:11:09.064 21:33:29 -- accel/accel.sh@20 -- # IFS=: 00:11:09.064 21:33:29 -- accel/accel.sh@20 -- # read -r var val 00:11:09.064 21:33:29 -- accel/accel.sh@21 -- # val= 00:11:09.064 21:33:29 -- accel/accel.sh@22 -- # case "$var" in 00:11:09.064 21:33:29 -- accel/accel.sh@20 -- # IFS=: 00:11:09.064 21:33:29 -- accel/accel.sh@20 -- # read -r var val 00:11:10.961 21:33:31 -- accel/accel.sh@21 -- # val= 00:11:10.961 21:33:31 -- accel/accel.sh@22 -- # case "$var" in 00:11:10.961 21:33:31 -- accel/accel.sh@20 -- # IFS=: 00:11:10.961 21:33:31 -- accel/accel.sh@20 -- # read -r var val 00:11:10.961 21:33:31 -- accel/accel.sh@21 -- # val= 00:11:10.961 21:33:31 -- accel/accel.sh@22 -- # case "$var" in 00:11:10.961 21:33:31 -- accel/accel.sh@20 -- # IFS=: 00:11:10.961 21:33:31 -- accel/accel.sh@20 -- # read -r var val 00:11:10.961 21:33:31 -- accel/accel.sh@21 -- # val= 00:11:10.961 21:33:31 -- accel/accel.sh@22 -- # case "$var" in 00:11:10.961 21:33:31 -- accel/accel.sh@20 -- # IFS=: 00:11:10.961 21:33:31 -- accel/accel.sh@20 -- # read -r var val 00:11:10.961 21:33:31 -- accel/accel.sh@21 -- # val= 00:11:10.961 21:33:31 -- accel/accel.sh@22 -- # case "$var" in 00:11:10.961 21:33:31 -- accel/accel.sh@20 -- # IFS=: 00:11:10.961 21:33:31 -- accel/accel.sh@20 -- # read -r var val 00:11:10.961 21:33:31 -- accel/accel.sh@21 -- # val= 00:11:10.961 21:33:31 -- accel/accel.sh@22 -- # case "$var" in 00:11:10.961 21:33:31 -- accel/accel.sh@20 -- # IFS=: 00:11:10.961 21:33:31 -- accel/accel.sh@20 -- # read -r var val 00:11:10.961 21:33:31 -- accel/accel.sh@21 -- # val= 00:11:10.961 21:33:31 -- accel/accel.sh@22 -- # case "$var" in 00:11:10.961 21:33:31 -- accel/accel.sh@20 -- # IFS=: 00:11:10.961 21:33:31 -- accel/accel.sh@20 -- # read -r var val 00:11:10.961 21:33:31 -- accel/accel.sh@21 -- # val= 00:11:10.961 21:33:31 -- accel/accel.sh@22 -- # case "$var" in 00:11:10.961 21:33:31 -- accel/accel.sh@20 -- # IFS=: 00:11:10.961 21:33:31 -- accel/accel.sh@20 -- # read -r var val 00:11:10.961 21:33:31 -- accel/accel.sh@21 -- # val= 00:11:10.961 21:33:31 -- accel/accel.sh@22 -- # case "$var" in 00:11:10.961 21:33:31 -- accel/accel.sh@20 -- # IFS=: 00:11:10.961 21:33:31 -- 
accel/accel.sh@20 -- # read -r var val 00:11:10.961 21:33:31 -- accel/accel.sh@21 -- # val= 00:11:10.961 21:33:31 -- accel/accel.sh@22 -- # case "$var" in 00:11:10.961 21:33:31 -- accel/accel.sh@20 -- # IFS=: 00:11:10.961 21:33:31 -- accel/accel.sh@20 -- # read -r var val 00:11:10.961 21:33:31 -- accel/accel.sh@28 -- # [[ -n software ]] 00:11:10.961 21:33:31 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:11:10.961 21:33:31 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:11:10.961 00:11:10.961 real 0m4.815s 00:11:10.961 user 0m14.159s 00:11:10.961 sys 0m0.395s 00:11:10.961 21:33:31 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:11:10.961 21:33:31 -- common/autotest_common.sh@10 -- # set +x 00:11:10.961 ************************************ 00:11:10.961 END TEST accel_decomp_full_mcore 00:11:10.961 ************************************ 00:11:10.961 21:33:31 -- accel/accel.sh@113 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:11:10.961 21:33:31 -- common/autotest_common.sh@1087 -- # '[' 11 -le 1 ']' 00:11:10.961 21:33:31 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:10.961 21:33:31 -- common/autotest_common.sh@10 -- # set +x 00:11:10.961 ************************************ 00:11:10.961 START TEST accel_decomp_mthread 00:11:10.961 ************************************ 00:11:10.961 21:33:31 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:11:10.961 21:33:31 -- accel/accel.sh@16 -- # local accel_opc 00:11:10.961 21:33:31 -- accel/accel.sh@17 -- # local accel_module 00:11:10.961 21:33:31 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:11:10.961 21:33:31 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:11:10.961 21:33:31 -- accel/accel.sh@12 -- # build_accel_config 00:11:10.961 21:33:31 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:11:10.961 21:33:31 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:11:10.961 21:33:31 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:11:10.961 21:33:31 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:11:10.961 21:33:31 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:11:10.961 21:33:31 -- accel/accel.sh@41 -- # local IFS=, 00:11:10.961 21:33:31 -- accel/accel.sh@42 -- # jq -r . 00:11:10.961 [2024-12-06 21:33:31.269090] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:11:10.961 [2024-12-06 21:33:31.269237] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64897 ] 00:11:10.961 [2024-12-06 21:33:31.426181] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:11.218 [2024-12-06 21:33:31.597823] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:13.117 21:33:33 -- accel/accel.sh@18 -- # out='Preparing input file... 
00:11:13.117 00:11:13.117 SPDK Configuration: 00:11:13.117 Core mask: 0x1 00:11:13.117 00:11:13.117 Accel Perf Configuration: 00:11:13.117 Workload Type: decompress 00:11:13.117 Transfer size: 4096 bytes 00:11:13.117 Vector count 1 00:11:13.117 Module: software 00:11:13.117 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:11:13.117 Queue depth: 32 00:11:13.117 Allocate depth: 32 00:11:13.117 # threads/core: 2 00:11:13.117 Run time: 1 seconds 00:11:13.117 Verify: Yes 00:11:13.117 00:11:13.117 Running for 1 seconds... 00:11:13.117 00:11:13.117 Core,Thread Transfers Bandwidth Failed Miscompares 00:11:13.117 ------------------------------------------------------------------------------------ 00:11:13.117 0,1 33120/s 129 MiB/s 0 0 00:11:13.117 0,0 32992/s 128 MiB/s 0 0 00:11:13.117 ==================================================================================== 00:11:13.117 Total 66112/s 258 MiB/s 0 0' 00:11:13.117 21:33:33 -- accel/accel.sh@20 -- # IFS=: 00:11:13.117 21:33:33 -- accel/accel.sh@20 -- # read -r var val 00:11:13.117 21:33:33 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:11:13.117 21:33:33 -- accel/accel.sh@12 -- # build_accel_config 00:11:13.117 21:33:33 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:11:13.117 21:33:33 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:11:13.117 21:33:33 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:11:13.117 21:33:33 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:11:13.117 21:33:33 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:11:13.117 21:33:33 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:11:13.117 21:33:33 -- accel/accel.sh@41 -- # local IFS=, 00:11:13.117 21:33:33 -- accel/accel.sh@42 -- # jq -r . 00:11:13.117 [2024-12-06 21:33:33.590387] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
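Note: -T 2 asks accel_perf for two worker threads per core ('# threads/core: 2' in the config dump), which is why this single-core run reports two Core,Thread rows, 0,0 and 0,1, whose transfer counts sum to the Total. Sketch:
  # one core, two submission threads on it
  ./build/examples/accel_perf -t 1 -w decompress -l test/accel/bib -y -T 2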
00:11:13.117 [2024-12-06 21:33:33.590557] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64923 ] 00:11:13.375 [2024-12-06 21:33:33.758504] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:13.634 [2024-12-06 21:33:33.913574] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:13.634 21:33:34 -- accel/accel.sh@21 -- # val= 00:11:13.634 21:33:34 -- accel/accel.sh@22 -- # case "$var" in 00:11:13.634 21:33:34 -- accel/accel.sh@20 -- # IFS=: 00:11:13.634 21:33:34 -- accel/accel.sh@20 -- # read -r var val 00:11:13.634 21:33:34 -- accel/accel.sh@21 -- # val= 00:11:13.634 21:33:34 -- accel/accel.sh@22 -- # case "$var" in 00:11:13.634 21:33:34 -- accel/accel.sh@20 -- # IFS=: 00:11:13.634 21:33:34 -- accel/accel.sh@20 -- # read -r var val 00:11:13.634 21:33:34 -- accel/accel.sh@21 -- # val= 00:11:13.634 21:33:34 -- accel/accel.sh@22 -- # case "$var" in 00:11:13.634 21:33:34 -- accel/accel.sh@20 -- # IFS=: 00:11:13.634 21:33:34 -- accel/accel.sh@20 -- # read -r var val 00:11:13.634 21:33:34 -- accel/accel.sh@21 -- # val=0x1 00:11:13.634 21:33:34 -- accel/accel.sh@22 -- # case "$var" in 00:11:13.634 21:33:34 -- accel/accel.sh@20 -- # IFS=: 00:11:13.634 21:33:34 -- accel/accel.sh@20 -- # read -r var val 00:11:13.634 21:33:34 -- accel/accel.sh@21 -- # val= 00:11:13.634 21:33:34 -- accel/accel.sh@22 -- # case "$var" in 00:11:13.634 21:33:34 -- accel/accel.sh@20 -- # IFS=: 00:11:13.634 21:33:34 -- accel/accel.sh@20 -- # read -r var val 00:11:13.634 21:33:34 -- accel/accel.sh@21 -- # val= 00:11:13.634 21:33:34 -- accel/accel.sh@22 -- # case "$var" in 00:11:13.634 21:33:34 -- accel/accel.sh@20 -- # IFS=: 00:11:13.634 21:33:34 -- accel/accel.sh@20 -- # read -r var val 00:11:13.634 21:33:34 -- accel/accel.sh@21 -- # val=decompress 00:11:13.634 21:33:34 -- accel/accel.sh@22 -- # case "$var" in 00:11:13.634 21:33:34 -- accel/accel.sh@24 -- # accel_opc=decompress 00:11:13.634 21:33:34 -- accel/accel.sh@20 -- # IFS=: 00:11:13.634 21:33:34 -- accel/accel.sh@20 -- # read -r var val 00:11:13.634 21:33:34 -- accel/accel.sh@21 -- # val='4096 bytes' 00:11:13.634 21:33:34 -- accel/accel.sh@22 -- # case "$var" in 00:11:13.634 21:33:34 -- accel/accel.sh@20 -- # IFS=: 00:11:13.634 21:33:34 -- accel/accel.sh@20 -- # read -r var val 00:11:13.634 21:33:34 -- accel/accel.sh@21 -- # val= 00:11:13.634 21:33:34 -- accel/accel.sh@22 -- # case "$var" in 00:11:13.634 21:33:34 -- accel/accel.sh@20 -- # IFS=: 00:11:13.634 21:33:34 -- accel/accel.sh@20 -- # read -r var val 00:11:13.634 21:33:34 -- accel/accel.sh@21 -- # val=software 00:11:13.634 21:33:34 -- accel/accel.sh@22 -- # case "$var" in 00:11:13.634 21:33:34 -- accel/accel.sh@23 -- # accel_module=software 00:11:13.634 21:33:34 -- accel/accel.sh@20 -- # IFS=: 00:11:13.634 21:33:34 -- accel/accel.sh@20 -- # read -r var val 00:11:13.634 21:33:34 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:11:13.634 21:33:34 -- accel/accel.sh@22 -- # case "$var" in 00:11:13.634 21:33:34 -- accel/accel.sh@20 -- # IFS=: 00:11:13.634 21:33:34 -- accel/accel.sh@20 -- # read -r var val 00:11:13.634 21:33:34 -- accel/accel.sh@21 -- # val=32 00:11:13.634 21:33:34 -- accel/accel.sh@22 -- # case "$var" in 00:11:13.634 21:33:34 -- accel/accel.sh@20 -- # IFS=: 00:11:13.634 21:33:34 -- accel/accel.sh@20 -- # read -r var val 00:11:13.634 21:33:34 -- 
accel/accel.sh@21 -- # val=32 00:11:13.634 21:33:34 -- accel/accel.sh@22 -- # case "$var" in 00:11:13.634 21:33:34 -- accel/accel.sh@20 -- # IFS=: 00:11:13.634 21:33:34 -- accel/accel.sh@20 -- # read -r var val 00:11:13.634 21:33:34 -- accel/accel.sh@21 -- # val=2 00:11:13.634 21:33:34 -- accel/accel.sh@22 -- # case "$var" in 00:11:13.634 21:33:34 -- accel/accel.sh@20 -- # IFS=: 00:11:13.634 21:33:34 -- accel/accel.sh@20 -- # read -r var val 00:11:13.634 21:33:34 -- accel/accel.sh@21 -- # val='1 seconds' 00:11:13.634 21:33:34 -- accel/accel.sh@22 -- # case "$var" in 00:11:13.634 21:33:34 -- accel/accel.sh@20 -- # IFS=: 00:11:13.634 21:33:34 -- accel/accel.sh@20 -- # read -r var val 00:11:13.634 21:33:34 -- accel/accel.sh@21 -- # val=Yes 00:11:13.634 21:33:34 -- accel/accel.sh@22 -- # case "$var" in 00:11:13.634 21:33:34 -- accel/accel.sh@20 -- # IFS=: 00:11:13.634 21:33:34 -- accel/accel.sh@20 -- # read -r var val 00:11:13.634 21:33:34 -- accel/accel.sh@21 -- # val= 00:11:13.634 21:33:34 -- accel/accel.sh@22 -- # case "$var" in 00:11:13.634 21:33:34 -- accel/accel.sh@20 -- # IFS=: 00:11:13.634 21:33:34 -- accel/accel.sh@20 -- # read -r var val 00:11:13.634 21:33:34 -- accel/accel.sh@21 -- # val= 00:11:13.634 21:33:34 -- accel/accel.sh@22 -- # case "$var" in 00:11:13.634 21:33:34 -- accel/accel.sh@20 -- # IFS=: 00:11:13.634 21:33:34 -- accel/accel.sh@20 -- # read -r var val 00:11:15.538 21:33:35 -- accel/accel.sh@21 -- # val= 00:11:15.538 21:33:35 -- accel/accel.sh@22 -- # case "$var" in 00:11:15.538 21:33:35 -- accel/accel.sh@20 -- # IFS=: 00:11:15.538 21:33:35 -- accel/accel.sh@20 -- # read -r var val 00:11:15.538 21:33:35 -- accel/accel.sh@21 -- # val= 00:11:15.538 21:33:35 -- accel/accel.sh@22 -- # case "$var" in 00:11:15.538 21:33:35 -- accel/accel.sh@20 -- # IFS=: 00:11:15.538 21:33:35 -- accel/accel.sh@20 -- # read -r var val 00:11:15.538 21:33:35 -- accel/accel.sh@21 -- # val= 00:11:15.538 21:33:35 -- accel/accel.sh@22 -- # case "$var" in 00:11:15.538 21:33:35 -- accel/accel.sh@20 -- # IFS=: 00:11:15.538 21:33:35 -- accel/accel.sh@20 -- # read -r var val 00:11:15.538 21:33:35 -- accel/accel.sh@21 -- # val= 00:11:15.538 21:33:35 -- accel/accel.sh@22 -- # case "$var" in 00:11:15.538 21:33:35 -- accel/accel.sh@20 -- # IFS=: 00:11:15.538 21:33:35 -- accel/accel.sh@20 -- # read -r var val 00:11:15.538 21:33:35 -- accel/accel.sh@21 -- # val= 00:11:15.538 21:33:35 -- accel/accel.sh@22 -- # case "$var" in 00:11:15.538 21:33:35 -- accel/accel.sh@20 -- # IFS=: 00:11:15.538 21:33:35 -- accel/accel.sh@20 -- # read -r var val 00:11:15.538 21:33:35 -- accel/accel.sh@21 -- # val= 00:11:15.538 21:33:35 -- accel/accel.sh@22 -- # case "$var" in 00:11:15.538 21:33:35 -- accel/accel.sh@20 -- # IFS=: 00:11:15.538 21:33:35 -- accel/accel.sh@20 -- # read -r var val 00:11:15.538 21:33:35 -- accel/accel.sh@21 -- # val= 00:11:15.538 21:33:35 -- accel/accel.sh@22 -- # case "$var" in 00:11:15.538 21:33:35 -- accel/accel.sh@20 -- # IFS=: 00:11:15.538 21:33:35 -- accel/accel.sh@20 -- # read -r var val 00:11:15.538 ************************************ 00:11:15.538 END TEST accel_decomp_mthread 00:11:15.538 ************************************ 00:11:15.538 21:33:35 -- accel/accel.sh@28 -- # [[ -n software ]] 00:11:15.538 21:33:35 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:11:15.538 21:33:35 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:11:15.538 00:11:15.538 real 0m4.633s 00:11:15.538 user 0m4.105s 00:11:15.538 sys 0m0.344s 00:11:15.538 21:33:35 -- common/autotest_common.sh@1115 -- # 
xtrace_disable 00:11:15.538 21:33:35 -- common/autotest_common.sh@10 -- # set +x 00:11:15.538 21:33:35 -- accel/accel.sh@114 -- # run_test accel_deomp_full_mthread accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:11:15.538 21:33:35 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:11:15.538 21:33:35 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:15.538 21:33:35 -- common/autotest_common.sh@10 -- # set +x 00:11:15.538 ************************************ 00:11:15.538 START TEST accel_deomp_full_mthread 00:11:15.538 ************************************ 00:11:15.538 21:33:35 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:11:15.538 21:33:35 -- accel/accel.sh@16 -- # local accel_opc 00:11:15.538 21:33:35 -- accel/accel.sh@17 -- # local accel_module 00:11:15.538 21:33:35 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:11:15.538 21:33:35 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:11:15.538 21:33:35 -- accel/accel.sh@12 -- # build_accel_config 00:11:15.538 21:33:35 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:11:15.538 21:33:35 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:11:15.538 21:33:35 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:11:15.538 21:33:35 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:11:15.538 21:33:35 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:11:15.538 21:33:35 -- accel/accel.sh@41 -- # local IFS=, 00:11:15.538 21:33:35 -- accel/accel.sh@42 -- # jq -r . 00:11:15.538 [2024-12-06 21:33:35.952364] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:11:15.538 [2024-12-06 21:33:35.952829] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64970 ] 00:11:15.796 [2024-12-06 21:33:36.104021] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:15.796 [2024-12-06 21:33:36.270186] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:18.328 21:33:38 -- accel/accel.sh@18 -- # out='Preparing input file... 00:11:18.328 00:11:18.328 SPDK Configuration: 00:11:18.328 Core mask: 0x1 00:11:18.328 00:11:18.328 Accel Perf Configuration: 00:11:18.328 Workload Type: decompress 00:11:18.328 Transfer size: 111250 bytes 00:11:18.328 Vector count 1 00:11:18.328 Module: software 00:11:18.328 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:11:18.328 Queue depth: 32 00:11:18.328 Allocate depth: 32 00:11:18.328 # threads/core: 2 00:11:18.328 Run time: 1 seconds 00:11:18.328 Verify: Yes 00:11:18.328 00:11:18.328 Running for 1 seconds... 
00:11:18.328 00:11:18.328 Core,Thread Transfers Bandwidth Failed Miscompares 00:11:18.328 ------------------------------------------------------------------------------------ 00:11:18.328 0,1 2432/s 100 MiB/s 0 0 00:11:18.328 0,0 2400/s 99 MiB/s 0 0 00:11:18.328 ==================================================================================== 00:11:18.328 Total 4832/s 512 MiB/s 0 0' 00:11:18.328 21:33:38 -- accel/accel.sh@20 -- # IFS=: 00:11:18.328 21:33:38 -- accel/accel.sh@20 -- # read -r var val 00:11:18.328 21:33:38 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:11:18.328 21:33:38 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:11:18.328 21:33:38 -- accel/accel.sh@12 -- # build_accel_config 00:11:18.328 21:33:38 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:11:18.328 21:33:38 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:11:18.328 21:33:38 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:11:18.328 21:33:38 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:11:18.328 21:33:38 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:11:18.328 21:33:38 -- accel/accel.sh@41 -- # local IFS=, 00:11:18.328 21:33:38 -- accel/accel.sh@42 -- # jq -r . 00:11:18.328 [2024-12-06 21:33:38.298960] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:11:18.328 [2024-12-06 21:33:38.299492] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64996 ] 00:11:18.328 [2024-12-06 21:33:38.469430] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:18.328 [2024-12-06 21:33:38.625747] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:18.328 21:33:38 -- accel/accel.sh@21 -- # val= 00:11:18.328 21:33:38 -- accel/accel.sh@22 -- # case "$var" in 00:11:18.328 21:33:38 -- accel/accel.sh@20 -- # IFS=: 00:11:18.328 21:33:38 -- accel/accel.sh@20 -- # read -r var val 00:11:18.328 21:33:38 -- accel/accel.sh@21 -- # val= 00:11:18.328 21:33:38 -- accel/accel.sh@22 -- # case "$var" in 00:11:18.328 21:33:38 -- accel/accel.sh@20 -- # IFS=: 00:11:18.328 21:33:38 -- accel/accel.sh@20 -- # read -r var val 00:11:18.328 21:33:38 -- accel/accel.sh@21 -- # val= 00:11:18.328 21:33:38 -- accel/accel.sh@22 -- # case "$var" in 00:11:18.328 21:33:38 -- accel/accel.sh@20 -- # IFS=: 00:11:18.328 21:33:38 -- accel/accel.sh@20 -- # read -r var val 00:11:18.328 21:33:38 -- accel/accel.sh@21 -- # val=0x1 00:11:18.328 21:33:38 -- accel/accel.sh@22 -- # case "$var" in 00:11:18.328 21:33:38 -- accel/accel.sh@20 -- # IFS=: 00:11:18.328 21:33:38 -- accel/accel.sh@20 -- # read -r var val 00:11:18.328 21:33:38 -- accel/accel.sh@21 -- # val= 00:11:18.328 21:33:38 -- accel/accel.sh@22 -- # case "$var" in 00:11:18.328 21:33:38 -- accel/accel.sh@20 -- # IFS=: 00:11:18.328 21:33:38 -- accel/accel.sh@20 -- # read -r var val 00:11:18.328 21:33:38 -- accel/accel.sh@21 -- # val= 00:11:18.328 21:33:38 -- accel/accel.sh@22 -- # case "$var" in 00:11:18.328 21:33:38 -- accel/accel.sh@20 -- # IFS=: 00:11:18.328 21:33:38 -- accel/accel.sh@20 -- # read -r var val 00:11:18.328 21:33:38 -- accel/accel.sh@21 -- # val=decompress 00:11:18.328 21:33:38 -- accel/accel.sh@22 -- # case "$var" in 00:11:18.328 21:33:38 -- accel/accel.sh@24 -- # 
accel_opc=decompress 00:11:18.328 21:33:38 -- accel/accel.sh@20 -- # IFS=: 00:11:18.328 21:33:38 -- accel/accel.sh@20 -- # read -r var val 00:11:18.328 21:33:38 -- accel/accel.sh@21 -- # val='111250 bytes' 00:11:18.328 21:33:38 -- accel/accel.sh@22 -- # case "$var" in 00:11:18.328 21:33:38 -- accel/accel.sh@20 -- # IFS=: 00:11:18.328 21:33:38 -- accel/accel.sh@20 -- # read -r var val 00:11:18.328 21:33:38 -- accel/accel.sh@21 -- # val= 00:11:18.328 21:33:38 -- accel/accel.sh@22 -- # case "$var" in 00:11:18.328 21:33:38 -- accel/accel.sh@20 -- # IFS=: 00:11:18.328 21:33:38 -- accel/accel.sh@20 -- # read -r var val 00:11:18.328 21:33:38 -- accel/accel.sh@21 -- # val=software 00:11:18.328 21:33:38 -- accel/accel.sh@22 -- # case "$var" in 00:11:18.328 21:33:38 -- accel/accel.sh@23 -- # accel_module=software 00:11:18.328 21:33:38 -- accel/accel.sh@20 -- # IFS=: 00:11:18.328 21:33:38 -- accel/accel.sh@20 -- # read -r var val 00:11:18.328 21:33:38 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:11:18.328 21:33:38 -- accel/accel.sh@22 -- # case "$var" in 00:11:18.328 21:33:38 -- accel/accel.sh@20 -- # IFS=: 00:11:18.328 21:33:38 -- accel/accel.sh@20 -- # read -r var val 00:11:18.328 21:33:38 -- accel/accel.sh@21 -- # val=32 00:11:18.328 21:33:38 -- accel/accel.sh@22 -- # case "$var" in 00:11:18.328 21:33:38 -- accel/accel.sh@20 -- # IFS=: 00:11:18.328 21:33:38 -- accel/accel.sh@20 -- # read -r var val 00:11:18.328 21:33:38 -- accel/accel.sh@21 -- # val=32 00:11:18.328 21:33:38 -- accel/accel.sh@22 -- # case "$var" in 00:11:18.328 21:33:38 -- accel/accel.sh@20 -- # IFS=: 00:11:18.328 21:33:38 -- accel/accel.sh@20 -- # read -r var val 00:11:18.328 21:33:38 -- accel/accel.sh@21 -- # val=2 00:11:18.328 21:33:38 -- accel/accel.sh@22 -- # case "$var" in 00:11:18.328 21:33:38 -- accel/accel.sh@20 -- # IFS=: 00:11:18.328 21:33:38 -- accel/accel.sh@20 -- # read -r var val 00:11:18.328 21:33:38 -- accel/accel.sh@21 -- # val='1 seconds' 00:11:18.328 21:33:38 -- accel/accel.sh@22 -- # case "$var" in 00:11:18.328 21:33:38 -- accel/accel.sh@20 -- # IFS=: 00:11:18.328 21:33:38 -- accel/accel.sh@20 -- # read -r var val 00:11:18.328 21:33:38 -- accel/accel.sh@21 -- # val=Yes 00:11:18.328 21:33:38 -- accel/accel.sh@22 -- # case "$var" in 00:11:18.328 21:33:38 -- accel/accel.sh@20 -- # IFS=: 00:11:18.328 21:33:38 -- accel/accel.sh@20 -- # read -r var val 00:11:18.328 21:33:38 -- accel/accel.sh@21 -- # val= 00:11:18.328 21:33:38 -- accel/accel.sh@22 -- # case "$var" in 00:11:18.328 21:33:38 -- accel/accel.sh@20 -- # IFS=: 00:11:18.329 21:33:38 -- accel/accel.sh@20 -- # read -r var val 00:11:18.329 21:33:38 -- accel/accel.sh@21 -- # val= 00:11:18.329 21:33:38 -- accel/accel.sh@22 -- # case "$var" in 00:11:18.329 21:33:38 -- accel/accel.sh@20 -- # IFS=: 00:11:18.329 21:33:38 -- accel/accel.sh@20 -- # read -r var val 00:11:20.280 21:33:40 -- accel/accel.sh@21 -- # val= 00:11:20.280 21:33:40 -- accel/accel.sh@22 -- # case "$var" in 00:11:20.280 21:33:40 -- accel/accel.sh@20 -- # IFS=: 00:11:20.280 21:33:40 -- accel/accel.sh@20 -- # read -r var val 00:11:20.280 21:33:40 -- accel/accel.sh@21 -- # val= 00:11:20.280 21:33:40 -- accel/accel.sh@22 -- # case "$var" in 00:11:20.280 21:33:40 -- accel/accel.sh@20 -- # IFS=: 00:11:20.280 21:33:40 -- accel/accel.sh@20 -- # read -r var val 00:11:20.280 21:33:40 -- accel/accel.sh@21 -- # val= 00:11:20.280 21:33:40 -- accel/accel.sh@22 -- # case "$var" in 00:11:20.280 21:33:40 -- accel/accel.sh@20 -- # IFS=: 00:11:20.280 21:33:40 -- accel/accel.sh@20 -- # 
read -r var val 00:11:20.280 21:33:40 -- accel/accel.sh@21 -- # val= 00:11:20.280 21:33:40 -- accel/accel.sh@22 -- # case "$var" in 00:11:20.280 21:33:40 -- accel/accel.sh@20 -- # IFS=: 00:11:20.280 21:33:40 -- accel/accel.sh@20 -- # read -r var val 00:11:20.280 21:33:40 -- accel/accel.sh@21 -- # val= 00:11:20.280 21:33:40 -- accel/accel.sh@22 -- # case "$var" in 00:11:20.280 21:33:40 -- accel/accel.sh@20 -- # IFS=: 00:11:20.280 21:33:40 -- accel/accel.sh@20 -- # read -r var val 00:11:20.280 21:33:40 -- accel/accel.sh@21 -- # val= 00:11:20.280 21:33:40 -- accel/accel.sh@22 -- # case "$var" in 00:11:20.280 21:33:40 -- accel/accel.sh@20 -- # IFS=: 00:11:20.280 21:33:40 -- accel/accel.sh@20 -- # read -r var val 00:11:20.280 21:33:40 -- accel/accel.sh@21 -- # val= 00:11:20.280 21:33:40 -- accel/accel.sh@22 -- # case "$var" in 00:11:20.280 21:33:40 -- accel/accel.sh@20 -- # IFS=: 00:11:20.280 21:33:40 -- accel/accel.sh@20 -- # read -r var val 00:11:20.280 21:33:40 -- accel/accel.sh@28 -- # [[ -n software ]] 00:11:20.280 21:33:40 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:11:20.280 21:33:40 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:11:20.280 00:11:20.280 real 0m4.691s 00:11:20.280 user 0m4.225s 00:11:20.280 sys 0m0.280s 00:11:20.280 21:33:40 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:11:20.280 ************************************ 00:11:20.280 END TEST accel_deomp_full_mthread 00:11:20.280 ************************************ 00:11:20.280 21:33:40 -- common/autotest_common.sh@10 -- # set +x 00:11:20.280 21:33:40 -- accel/accel.sh@116 -- # [[ n == y ]] 00:11:20.280 21:33:40 -- accel/accel.sh@129 -- # run_test accel_dif_functional_tests /home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /dev/fd/62 00:11:20.280 21:33:40 -- accel/accel.sh@129 -- # build_accel_config 00:11:20.280 21:33:40 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:11:20.280 21:33:40 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:11:20.280 21:33:40 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:20.280 21:33:40 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:11:20.280 21:33:40 -- common/autotest_common.sh@10 -- # set +x 00:11:20.280 21:33:40 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:11:20.280 21:33:40 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:11:20.280 21:33:40 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:11:20.280 21:33:40 -- accel/accel.sh@41 -- # local IFS=, 00:11:20.280 21:33:40 -- accel/accel.sh@42 -- # jq -r . 00:11:20.280 ************************************ 00:11:20.280 START TEST accel_dif_functional_tests 00:11:20.280 ************************************ 00:11:20.280 21:33:40 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /dev/fd/62 00:11:20.280 [2024-12-06 21:33:40.719580] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
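As a quick sanity check, the Total rows of the two decompress tables above match transfers/s multiplied by the transfer size, floored to whole MiB/s (a back-of-the-envelope recomputation, not tool output):

  # 66112/s * 4096 B and 4832/s * 111250 B, in MiB/s (integer division floors)
  echo $((66112 * 4096 / 1048576)) $((4832 * 111250 / 1048576))   # prints: 258 512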
00:11:20.280 [2024-12-06 21:33:40.719713] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65038 ] 00:11:20.538 [2024-12-06 21:33:40.874163] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:11:20.797 [2024-12-06 21:33:41.046744] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:20.797 [2024-12-06 21:33:41.046846] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:20.797 [2024-12-06 21:33:41.046856] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:11:21.055 00:11:21.055 00:11:21.055 CUnit - A unit testing framework for C - Version 2.1-3 00:11:21.055 http://cunit.sourceforge.net/ 00:11:21.055 00:11:21.055 00:11:21.055 Suite: accel_dif 00:11:21.055 Test: verify: DIF generated, GUARD check ...passed 00:11:21.055 Test: verify: DIF generated, APPTAG check ...passed 00:11:21.055 Test: verify: DIF generated, REFTAG check ...passed 00:11:21.055 Test: verify: DIF not generated, GUARD check ...[2024-12-06 21:33:41.309368] dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:11:21.055 [2024-12-06 21:33:41.309573] dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:11:21.055 passed 00:11:21.055 Test: verify: DIF not generated, APPTAG check ...[2024-12-06 21:33:41.309675] dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:11:21.055 passed 00:11:21.055 Test: verify: DIF not generated, REFTAG check ...[2024-12-06 21:33:41.309830] dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:11:21.055 passed 00:11:21.055 Test: verify: APPTAG correct, APPTAG check ...[2024-12-06 21:33:41.309889] dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:11:21.055 [2024-12-06 21:33:41.309923] dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:11:21.055 passed 00:11:21.055 Test: verify: APPTAG incorrect, APPTAG check ...[2024-12-06 21:33:41.310197] dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14 00:11:21.055 passed 00:11:21.055 Test: verify: APPTAG incorrect, no APPTAG check ...passed 00:11:21.055 Test: verify: REFTAG incorrect, REFTAG ignore ...passed 00:11:21.055 Test: verify: REFTAG_INIT correct, REFTAG check ...passed 00:11:21.055 Test: verify: REFTAG_INIT incorrect, REFTAG check ...[2024-12-06 21:33:41.310658] dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:11:21.055 passed 00:11:21.055 Test: generate copy: DIF generated, GUARD check ...passed 00:11:21.055 Test: generate copy: DIF generated, APTTAG check ...passed 00:11:21.055 Test: generate copy: DIF generated, REFTAG check ...passed 00:11:21.055 Test: generate copy: DIF generated, no GUARD check flag set ...passed 00:11:21.055 Test: generate copy: DIF generated, no APPTAG check flag set ...passed 00:11:21.055 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 00:11:21.055 Test: generate copy: iovecs-len validate ...[2024-12-06 21:33:41.311675] dif.c:1167:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size. 
00:11:21.055 passed 00:11:21.055 Test: generate copy: buffer alignment validate ...passed 00:11:21.055 00:11:21.055 Run Summary: Type Total Ran Passed Failed Inactive 00:11:21.055 suites 1 1 n/a 0 0 00:11:21.055 tests 20 20 20 0 0 00:11:21.055 asserts 204 204 204 0 n/a 00:11:21.055 00:11:21.055 Elapsed time = 0.007 seconds 00:11:21.989 ************************************ 00:11:21.989 END TEST accel_dif_functional_tests 00:11:21.989 00:11:21.989 real 0m1.687s 00:11:21.989 user 0m3.168s 00:11:21.989 sys 0m0.227s 00:11:21.989 21:33:42 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:11:21.989 21:33:42 -- common/autotest_common.sh@10 -- # set +x 00:11:21.989 ************************************ 00:11:21.989 00:11:21.989 real 1m44.364s 00:11:21.989 user 1m53.767s 00:11:21.989 sys 0m8.738s 00:11:21.989 21:33:42 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:11:21.989 ************************************ 00:11:21.989 END TEST accel 00:11:21.989 ************************************ 00:11:21.989 21:33:42 -- common/autotest_common.sh@10 -- # set +x 00:11:21.989 21:33:42 -- spdk/autotest.sh@177 -- # run_test accel_rpc /home/vagrant/spdk_repo/spdk/test/accel/accel_rpc.sh 00:11:21.989 21:33:42 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:11:21.989 21:33:42 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:21.989 21:33:42 -- common/autotest_common.sh@10 -- # set +x 00:11:21.989 ************************************ 00:11:21.989 START TEST accel_rpc 00:11:21.989 ************************************ 00:11:21.989 21:33:42 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/accel/accel_rpc.sh 00:11:22.247 * Looking for test storage... 00:11:22.247 * Found test storage at /home/vagrant/spdk_repo/spdk/test/accel 00:11:22.247 21:33:42 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:11:22.247 21:33:42 -- common/autotest_common.sh@1690 -- # lcov --version 00:11:22.247 21:33:42 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:11:22.247 21:33:42 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:11:22.247 21:33:42 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:11:22.247 21:33:42 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:11:22.247 21:33:42 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:11:22.247 21:33:42 -- scripts/common.sh@335 -- # IFS=.-: 00:11:22.247 21:33:42 -- scripts/common.sh@335 -- # read -ra ver1 00:11:22.247 21:33:42 -- scripts/common.sh@336 -- # IFS=.-: 00:11:22.247 21:33:42 -- scripts/common.sh@336 -- # read -ra ver2 00:11:22.247 21:33:42 -- scripts/common.sh@337 -- # local 'op=<' 00:11:22.247 21:33:42 -- scripts/common.sh@339 -- # ver1_l=2 00:11:22.247 21:33:42 -- scripts/common.sh@340 -- # ver2_l=1 00:11:22.247 21:33:42 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:11:22.247 21:33:42 -- scripts/common.sh@343 -- # case "$op" in 00:11:22.247 21:33:42 -- scripts/common.sh@344 -- # : 1 00:11:22.247 21:33:42 -- scripts/common.sh@363 -- # (( v = 0 )) 00:11:22.247 21:33:42 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:22.247 21:33:42 -- scripts/common.sh@364 -- # decimal 1 00:11:22.247 21:33:42 -- scripts/common.sh@352 -- # local d=1 00:11:22.247 21:33:42 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:22.247 21:33:42 -- scripts/common.sh@354 -- # echo 1 00:11:22.247 21:33:42 -- scripts/common.sh@364 -- # ver1[v]=1 00:11:22.247 21:33:42 -- scripts/common.sh@365 -- # decimal 2 00:11:22.247 21:33:42 -- scripts/common.sh@352 -- # local d=2 00:11:22.247 21:33:42 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:22.247 21:33:42 -- scripts/common.sh@354 -- # echo 2 00:11:22.247 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:22.247 21:33:42 -- scripts/common.sh@365 -- # ver2[v]=2 00:11:22.247 21:33:42 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:11:22.247 21:33:42 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:11:22.247 21:33:42 -- scripts/common.sh@367 -- # return 0 00:11:22.247 21:33:42 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:22.247 21:33:42 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:11:22.247 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:22.247 --rc genhtml_branch_coverage=1 00:11:22.247 --rc genhtml_function_coverage=1 00:11:22.247 --rc genhtml_legend=1 00:11:22.247 --rc geninfo_all_blocks=1 00:11:22.247 --rc geninfo_unexecuted_blocks=1 00:11:22.247 00:11:22.247 ' 00:11:22.247 21:33:42 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:11:22.247 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:22.247 --rc genhtml_branch_coverage=1 00:11:22.247 --rc genhtml_function_coverage=1 00:11:22.247 --rc genhtml_legend=1 00:11:22.247 --rc geninfo_all_blocks=1 00:11:22.247 --rc geninfo_unexecuted_blocks=1 00:11:22.247 00:11:22.247 ' 00:11:22.248 21:33:42 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:11:22.248 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:22.248 --rc genhtml_branch_coverage=1 00:11:22.248 --rc genhtml_function_coverage=1 00:11:22.248 --rc genhtml_legend=1 00:11:22.248 --rc geninfo_all_blocks=1 00:11:22.248 --rc geninfo_unexecuted_blocks=1 00:11:22.248 00:11:22.248 ' 00:11:22.248 21:33:42 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:11:22.248 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:22.248 --rc genhtml_branch_coverage=1 00:11:22.248 --rc genhtml_function_coverage=1 00:11:22.248 --rc genhtml_legend=1 00:11:22.248 --rc geninfo_all_blocks=1 00:11:22.248 --rc geninfo_unexecuted_blocks=1 00:11:22.248 00:11:22.248 ' 00:11:22.248 21:33:42 -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:11:22.248 21:33:42 -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=65127 00:11:22.248 21:33:42 -- accel/accel_rpc.sh@15 -- # waitforlisten 65127 00:11:22.248 21:33:42 -- common/autotest_common.sh@829 -- # '[' -z 65127 ']' 00:11:22.248 21:33:42 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:22.248 21:33:42 -- accel/accel_rpc.sh@13 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc 00:11:22.248 21:33:42 -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:22.248 21:33:42 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
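The launch sequence above is the usual two-step pattern for RPC-gated accel tests: spdk_tgt starts with --wait-for-rpc so the accel framework stays uninitialized until the opcode assignments have been made. A hand-driven sketch of the same flow (assumes the default /var/tmp/spdk.sock socket; the harness waits for it via waitforlisten before issuing RPCs):

  ./build/bin/spdk_tgt --wait-for-rpc &
  # ...wait for /var/tmp/spdk.sock to appear...
  ./scripts/rpc.py accel_assign_opc -o copy -m software
  ./scripts/rpc.py framework_start_init
  ./scripts/rpc.py accel_get_opc_assignments | jq -r .copy   # expect: software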
00:11:22.248 21:33:42 -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:22.248 21:33:42 -- common/autotest_common.sh@10 -- # set +x 00:11:22.248 [2024-12-06 21:33:42.683517] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:11:22.248 [2024-12-06 21:33:42.683685] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65127 ] 00:11:22.506 [2024-12-06 21:33:42.850061] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:22.764 [2024-12-06 21:33:43.013898] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:11:22.764 [2024-12-06 21:33:43.014400] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:23.331 21:33:43 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:23.331 21:33:43 -- common/autotest_common.sh@862 -- # return 0 00:11:23.331 21:33:43 -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:11:23.331 21:33:43 -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:11:23.331 21:33:43 -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:11:23.331 21:33:43 -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:11:23.331 21:33:43 -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:11:23.332 21:33:43 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:11:23.332 21:33:43 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:23.332 21:33:43 -- common/autotest_common.sh@10 -- # set +x 00:11:23.332 ************************************ 00:11:23.332 START TEST accel_assign_opcode 00:11:23.332 ************************************ 00:11:23.332 21:33:43 -- common/autotest_common.sh@1114 -- # accel_assign_opcode_test_suite 00:11:23.332 21:33:43 -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:11:23.332 21:33:43 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:23.332 21:33:43 -- common/autotest_common.sh@10 -- # set +x 00:11:23.332 [2024-12-06 21:33:43.563956] accel_rpc.c: 168:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:11:23.332 21:33:43 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:23.332 21:33:43 -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:11:23.332 21:33:43 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:23.332 21:33:43 -- common/autotest_common.sh@10 -- # set +x 00:11:23.332 [2024-12-06 21:33:43.571857] accel_rpc.c: 168:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software 00:11:23.332 21:33:43 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:23.332 21:33:43 -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:11:23.332 21:33:43 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:23.332 21:33:43 -- common/autotest_common.sh@10 -- # set +x 00:11:23.900 21:33:44 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:23.900 21:33:44 -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:11:23.900 21:33:44 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:23.900 21:33:44 -- accel/accel_rpc.sh@42 -- # jq -r .copy 00:11:23.900 21:33:44 -- common/autotest_common.sh@10 -- # set +x 00:11:23.900 21:33:44 -- accel/accel_rpc.sh@42 -- # grep software 00:11:23.900 21:33:44 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:23.900 software 00:11:23.900 
************************************ 00:11:23.900 END TEST accel_assign_opcode 00:11:23.900 ************************************ 00:11:23.900 00:11:23.900 real 0m0.629s 00:11:23.900 user 0m0.010s 00:11:23.900 sys 0m0.014s 00:11:23.900 21:33:44 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:11:23.900 21:33:44 -- common/autotest_common.sh@10 -- # set +x 00:11:23.900 21:33:44 -- accel/accel_rpc.sh@55 -- # killprocess 65127 00:11:23.900 21:33:44 -- common/autotest_common.sh@936 -- # '[' -z 65127 ']' 00:11:23.900 21:33:44 -- common/autotest_common.sh@940 -- # kill -0 65127 00:11:23.900 21:33:44 -- common/autotest_common.sh@941 -- # uname 00:11:23.900 21:33:44 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:11:23.900 21:33:44 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 65127 00:11:23.900 killing process with pid 65127 00:11:23.900 21:33:44 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:11:23.900 21:33:44 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:11:23.900 21:33:44 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 65127' 00:11:23.900 21:33:44 -- common/autotest_common.sh@955 -- # kill 65127 00:11:23.900 21:33:44 -- common/autotest_common.sh@960 -- # wait 65127 00:11:25.800 00:11:25.800 real 0m3.708s 00:11:25.800 user 0m3.616s 00:11:25.800 sys 0m0.519s 00:11:25.800 21:33:46 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:11:25.800 21:33:46 -- common/autotest_common.sh@10 -- # set +x 00:11:25.800 ************************************ 00:11:25.800 END TEST accel_rpc 00:11:25.800 ************************************ 00:11:25.800 21:33:46 -- spdk/autotest.sh@178 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:11:25.800 21:33:46 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:11:25.800 21:33:46 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:25.800 21:33:46 -- common/autotest_common.sh@10 -- # set +x 00:11:25.800 ************************************ 00:11:25.800 START TEST app_cmdline 00:11:25.800 ************************************ 00:11:25.800 21:33:46 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:11:25.800 * Looking for test storage... 
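The cmdline test that follows starts spdk_tgt with an RPC allow-list (--rpcs-allowed spdk_get_version,rpc_get_methods, as traced below), so those two methods should succeed and any other call should fail with JSON-RPC error -32601. A hand-driven sketch of the same checks against a target launched that way:

  ./scripts/rpc.py spdk_get_version
  ./scripts/rpc.py rpc_get_methods | jq -r '.[]' | sort    # exactly the two allowed methods
  ./scripts/rpc.py env_dpdk_get_mem_stats                  # expected to fail: Method not found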
00:11:25.800 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:11:25.800 21:33:46 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:11:25.800 21:33:46 -- common/autotest_common.sh@1690 -- # lcov --version 00:11:25.800 21:33:46 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:11:26.057 21:33:46 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:11:26.057 21:33:46 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:11:26.057 21:33:46 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:11:26.058 21:33:46 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:11:26.058 21:33:46 -- scripts/common.sh@335 -- # IFS=.-: 00:11:26.058 21:33:46 -- scripts/common.sh@335 -- # read -ra ver1 00:11:26.058 21:33:46 -- scripts/common.sh@336 -- # IFS=.-: 00:11:26.058 21:33:46 -- scripts/common.sh@336 -- # read -ra ver2 00:11:26.058 21:33:46 -- scripts/common.sh@337 -- # local 'op=<' 00:11:26.058 21:33:46 -- scripts/common.sh@339 -- # ver1_l=2 00:11:26.058 21:33:46 -- scripts/common.sh@340 -- # ver2_l=1 00:11:26.058 21:33:46 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:11:26.058 21:33:46 -- scripts/common.sh@343 -- # case "$op" in 00:11:26.058 21:33:46 -- scripts/common.sh@344 -- # : 1 00:11:26.058 21:33:46 -- scripts/common.sh@363 -- # (( v = 0 )) 00:11:26.058 21:33:46 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:26.058 21:33:46 -- scripts/common.sh@364 -- # decimal 1 00:11:26.058 21:33:46 -- scripts/common.sh@352 -- # local d=1 00:11:26.058 21:33:46 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:26.058 21:33:46 -- scripts/common.sh@354 -- # echo 1 00:11:26.058 21:33:46 -- scripts/common.sh@364 -- # ver1[v]=1 00:11:26.058 21:33:46 -- scripts/common.sh@365 -- # decimal 2 00:11:26.058 21:33:46 -- scripts/common.sh@352 -- # local d=2 00:11:26.058 21:33:46 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:26.058 21:33:46 -- scripts/common.sh@354 -- # echo 2 00:11:26.058 21:33:46 -- scripts/common.sh@365 -- # ver2[v]=2 00:11:26.058 21:33:46 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:11:26.058 21:33:46 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:11:26.058 21:33:46 -- scripts/common.sh@367 -- # return 0 00:11:26.058 21:33:46 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:26.058 21:33:46 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:11:26.058 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:26.058 --rc genhtml_branch_coverage=1 00:11:26.058 --rc genhtml_function_coverage=1 00:11:26.058 --rc genhtml_legend=1 00:11:26.058 --rc geninfo_all_blocks=1 00:11:26.058 --rc geninfo_unexecuted_blocks=1 00:11:26.058 00:11:26.058 ' 00:11:26.058 21:33:46 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:11:26.058 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:26.058 --rc genhtml_branch_coverage=1 00:11:26.058 --rc genhtml_function_coverage=1 00:11:26.058 --rc genhtml_legend=1 00:11:26.058 --rc geninfo_all_blocks=1 00:11:26.058 --rc geninfo_unexecuted_blocks=1 00:11:26.058 00:11:26.058 ' 00:11:26.058 21:33:46 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:11:26.058 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:26.058 --rc genhtml_branch_coverage=1 00:11:26.058 --rc genhtml_function_coverage=1 00:11:26.058 --rc genhtml_legend=1 00:11:26.058 --rc geninfo_all_blocks=1 00:11:26.058 --rc geninfo_unexecuted_blocks=1 00:11:26.058 00:11:26.058 ' 00:11:26.058 21:33:46 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:11:26.058 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:26.058 --rc genhtml_branch_coverage=1 00:11:26.058 --rc genhtml_function_coverage=1 00:11:26.058 --rc genhtml_legend=1 00:11:26.058 --rc geninfo_all_blocks=1 00:11:26.058 --rc geninfo_unexecuted_blocks=1 00:11:26.058 00:11:26.058 ' 00:11:26.058 21:33:46 -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:11:26.058 21:33:46 -- app/cmdline.sh@17 -- # spdk_tgt_pid=65245 00:11:26.058 21:33:46 -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:11:26.058 21:33:46 -- app/cmdline.sh@18 -- # waitforlisten 65245 00:11:26.058 21:33:46 -- common/autotest_common.sh@829 -- # '[' -z 65245 ']' 00:11:26.058 21:33:46 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:26.058 21:33:46 -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:26.058 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:26.058 21:33:46 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:26.058 21:33:46 -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:26.058 21:33:46 -- common/autotest_common.sh@10 -- # set +x 00:11:26.058 [2024-12-06 21:33:46.423602] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:11:26.058 [2024-12-06 21:33:46.423774] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65245 ] 00:11:26.316 [2024-12-06 21:33:46.590429] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:26.316 [2024-12-06 21:33:46.760521] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:11:26.316 [2024-12-06 21:33:46.760765] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:27.693 21:33:48 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:27.693 21:33:48 -- common/autotest_common.sh@862 -- # return 0 00:11:27.693 21:33:48 -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:11:27.950 { 00:11:27.950 "version": "SPDK v24.01.1-pre git sha1 c13c99a5e", 00:11:27.950 "fields": { 00:11:27.950 "major": 24, 00:11:27.950 "minor": 1, 00:11:27.950 "patch": 1, 00:11:27.950 "suffix": "-pre", 00:11:27.950 "commit": "c13c99a5e" 00:11:27.950 } 00:11:27.950 } 00:11:27.950 21:33:48 -- app/cmdline.sh@22 -- # expected_methods=() 00:11:27.950 21:33:48 -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:11:27.950 21:33:48 -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:11:27.950 21:33:48 -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:11:27.950 21:33:48 -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:11:27.950 21:33:48 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:27.950 21:33:48 -- common/autotest_common.sh@10 -- # set +x 00:11:27.950 21:33:48 -- app/cmdline.sh@26 -- # jq -r '.[]' 00:11:27.950 21:33:48 -- app/cmdline.sh@26 -- # sort 00:11:27.950 21:33:48 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:27.950 21:33:48 -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:11:27.951 21:33:48 -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == 
\r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:11:27.951 21:33:48 -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:11:27.951 21:33:48 -- common/autotest_common.sh@650 -- # local es=0 00:11:27.951 21:33:48 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:11:27.951 21:33:48 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:27.951 21:33:48 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:27.951 21:33:48 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:27.951 21:33:48 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:27.951 21:33:48 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:27.951 21:33:48 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:27.951 21:33:48 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:27.951 21:33:48 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:11:27.951 21:33:48 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:11:28.208 request: 00:11:28.208 { 00:11:28.208 "method": "env_dpdk_get_mem_stats", 00:11:28.208 "req_id": 1 00:11:28.208 } 00:11:28.208 Got JSON-RPC error response 00:11:28.208 response: 00:11:28.208 { 00:11:28.208 "code": -32601, 00:11:28.208 "message": "Method not found" 00:11:28.208 } 00:11:28.208 21:33:48 -- common/autotest_common.sh@653 -- # es=1 00:11:28.208 21:33:48 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:11:28.208 21:33:48 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:11:28.208 21:33:48 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:11:28.208 21:33:48 -- app/cmdline.sh@1 -- # killprocess 65245 00:11:28.208 21:33:48 -- common/autotest_common.sh@936 -- # '[' -z 65245 ']' 00:11:28.208 21:33:48 -- common/autotest_common.sh@940 -- # kill -0 65245 00:11:28.208 21:33:48 -- common/autotest_common.sh@941 -- # uname 00:11:28.208 21:33:48 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:11:28.208 21:33:48 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 65245 00:11:28.208 21:33:48 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:11:28.208 21:33:48 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:11:28.208 killing process with pid 65245 00:11:28.208 21:33:48 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 65245' 00:11:28.208 21:33:48 -- common/autotest_common.sh@955 -- # kill 65245 00:11:28.208 21:33:48 -- common/autotest_common.sh@960 -- # wait 65245 00:11:30.106 00:11:30.106 real 0m4.238s 00:11:30.106 user 0m4.762s 00:11:30.106 sys 0m0.554s 00:11:30.106 21:33:50 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:11:30.106 21:33:50 -- common/autotest_common.sh@10 -- # set +x 00:11:30.106 ************************************ 00:11:30.106 END TEST app_cmdline 00:11:30.106 ************************************ 00:11:30.106 21:33:50 -- spdk/autotest.sh@179 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:11:30.106 21:33:50 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:11:30.106 21:33:50 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:30.106 21:33:50 -- common/autotest_common.sh@10 -- # set +x 00:11:30.106 
************************************ 00:11:30.106 START TEST version 00:11:30.106 ************************************ 00:11:30.106 21:33:50 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:11:30.106 * Looking for test storage... 00:11:30.106 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:11:30.106 21:33:50 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:11:30.106 21:33:50 -- common/autotest_common.sh@1690 -- # lcov --version 00:11:30.106 21:33:50 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:11:30.365 21:33:50 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:11:30.365 21:33:50 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:11:30.365 21:33:50 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:11:30.365 21:33:50 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:11:30.365 21:33:50 -- scripts/common.sh@335 -- # IFS=.-: 00:11:30.365 21:33:50 -- scripts/common.sh@335 -- # read -ra ver1 00:11:30.365 21:33:50 -- scripts/common.sh@336 -- # IFS=.-: 00:11:30.365 21:33:50 -- scripts/common.sh@336 -- # read -ra ver2 00:11:30.365 21:33:50 -- scripts/common.sh@337 -- # local 'op=<' 00:11:30.365 21:33:50 -- scripts/common.sh@339 -- # ver1_l=2 00:11:30.365 21:33:50 -- scripts/common.sh@340 -- # ver2_l=1 00:11:30.365 21:33:50 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:11:30.365 21:33:50 -- scripts/common.sh@343 -- # case "$op" in 00:11:30.365 21:33:50 -- scripts/common.sh@344 -- # : 1 00:11:30.365 21:33:50 -- scripts/common.sh@363 -- # (( v = 0 )) 00:11:30.365 21:33:50 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:30.365 21:33:50 -- scripts/common.sh@364 -- # decimal 1 00:11:30.365 21:33:50 -- scripts/common.sh@352 -- # local d=1 00:11:30.365 21:33:50 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:30.365 21:33:50 -- scripts/common.sh@354 -- # echo 1 00:11:30.365 21:33:50 -- scripts/common.sh@364 -- # ver1[v]=1 00:11:30.365 21:33:50 -- scripts/common.sh@365 -- # decimal 2 00:11:30.365 21:33:50 -- scripts/common.sh@352 -- # local d=2 00:11:30.365 21:33:50 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:30.365 21:33:50 -- scripts/common.sh@354 -- # echo 2 00:11:30.365 21:33:50 -- scripts/common.sh@365 -- # ver2[v]=2 00:11:30.365 21:33:50 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:11:30.365 21:33:50 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:11:30.365 21:33:50 -- scripts/common.sh@367 -- # return 0 00:11:30.365 21:33:50 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:30.365 21:33:50 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:11:30.365 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:30.365 --rc genhtml_branch_coverage=1 00:11:30.365 --rc genhtml_function_coverage=1 00:11:30.365 --rc genhtml_legend=1 00:11:30.365 --rc geninfo_all_blocks=1 00:11:30.365 --rc geninfo_unexecuted_blocks=1 00:11:30.365 00:11:30.365 ' 00:11:30.365 21:33:50 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:11:30.365 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:30.365 --rc genhtml_branch_coverage=1 00:11:30.365 --rc genhtml_function_coverage=1 00:11:30.365 --rc genhtml_legend=1 00:11:30.365 --rc geninfo_all_blocks=1 00:11:30.365 --rc geninfo_unexecuted_blocks=1 00:11:30.365 00:11:30.365 ' 00:11:30.365 21:33:50 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:11:30.365 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:11:30.365 --rc genhtml_branch_coverage=1 00:11:30.365 --rc genhtml_function_coverage=1 00:11:30.365 --rc genhtml_legend=1 00:11:30.365 --rc geninfo_all_blocks=1 00:11:30.365 --rc geninfo_unexecuted_blocks=1 00:11:30.365 00:11:30.365 ' 00:11:30.365 21:33:50 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:11:30.365 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:30.365 --rc genhtml_branch_coverage=1 00:11:30.365 --rc genhtml_function_coverage=1 00:11:30.365 --rc genhtml_legend=1 00:11:30.365 --rc geninfo_all_blocks=1 00:11:30.365 --rc geninfo_unexecuted_blocks=1 00:11:30.365 00:11:30.365 ' 00:11:30.365 21:33:50 -- app/version.sh@17 -- # get_header_version major 00:11:30.365 21:33:50 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:11:30.365 21:33:50 -- app/version.sh@14 -- # cut -f2 00:11:30.365 21:33:50 -- app/version.sh@14 -- # tr -d '"' 00:11:30.365 21:33:50 -- app/version.sh@17 -- # major=24 00:11:30.365 21:33:50 -- app/version.sh@18 -- # get_header_version minor 00:11:30.365 21:33:50 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:11:30.365 21:33:50 -- app/version.sh@14 -- # cut -f2 00:11:30.365 21:33:50 -- app/version.sh@14 -- # tr -d '"' 00:11:30.365 21:33:50 -- app/version.sh@18 -- # minor=1 00:11:30.365 21:33:50 -- app/version.sh@19 -- # get_header_version patch 00:11:30.365 21:33:50 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:11:30.365 21:33:50 -- app/version.sh@14 -- # cut -f2 00:11:30.365 21:33:50 -- app/version.sh@14 -- # tr -d '"' 00:11:30.365 21:33:50 -- app/version.sh@19 -- # patch=1 00:11:30.365 21:33:50 -- app/version.sh@20 -- # get_header_version suffix 00:11:30.365 21:33:50 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:11:30.365 21:33:50 -- app/version.sh@14 -- # cut -f2 00:11:30.365 21:33:50 -- app/version.sh@14 -- # tr -d '"' 00:11:30.365 21:33:50 -- app/version.sh@20 -- # suffix=-pre 00:11:30.365 21:33:50 -- app/version.sh@22 -- # version=24.1 00:11:30.365 21:33:50 -- app/version.sh@25 -- # (( patch != 0 )) 00:11:30.365 21:33:50 -- app/version.sh@25 -- # version=24.1.1 00:11:30.365 21:33:50 -- app/version.sh@28 -- # version=24.1.1rc0 00:11:30.365 21:33:50 -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:11:30.365 21:33:50 -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:11:30.365 21:33:50 -- app/version.sh@30 -- # py_version=24.1.1rc0 00:11:30.365 21:33:50 -- app/version.sh@31 -- # [[ 24.1.1rc0 == \2\4\.\1\.\1\r\c\0 ]] 00:11:30.365 00:11:30.365 real 0m0.256s 00:11:30.365 user 0m0.174s 00:11:30.365 sys 0m0.126s 00:11:30.365 21:33:50 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:11:30.365 21:33:50 -- common/autotest_common.sh@10 -- # set +x 00:11:30.365 ************************************ 00:11:30.365 END TEST version 00:11:30.365 ************************************ 00:11:30.365 21:33:50 -- spdk/autotest.sh@181 -- # '[' 1 -eq 1 ']' 00:11:30.365 21:33:50 -- spdk/autotest.sh@182 -- # run_test blockdev_general /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh 
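version.sh above derives each field by grepping include/spdk/version.h, so the 24.1.1rc0 string it reports can be recomputed with the same pipeline. A sketch of the traced steps (cut -f2 implies the header's define name and value are tab-separated; the rc0 tail comes from version.sh mapping the -pre suffix):

  repo=/home/vagrant/spdk_repo/spdk
  hdr=$repo/include/spdk/version.h
  major=$(grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' $hdr | cut -f2 | tr -d '"')
  minor=$(grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' $hdr | cut -f2 | tr -d '"')
  patch=$(grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' $hdr | cut -f2 | tr -d '"')
  echo "$major.$minor.$patch"   # 24.1.1 here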
00:11:30.365 21:33:50 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:11:30.365 21:33:50 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:30.365 21:33:50 -- common/autotest_common.sh@10 -- # set +x 00:11:30.365 ************************************ 00:11:30.365 START TEST blockdev_general 00:11:30.365 ************************************ 00:11:30.365 21:33:50 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh 00:11:30.624 * Looking for test storage... 00:11:30.624 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:11:30.624 21:33:50 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:11:30.624 21:33:50 -- common/autotest_common.sh@1690 -- # lcov --version 00:11:30.624 21:33:50 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:11:30.624 21:33:50 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:11:30.624 21:33:50 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:11:30.624 21:33:50 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:11:30.624 21:33:50 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:11:30.624 21:33:50 -- scripts/common.sh@335 -- # IFS=.-: 00:11:30.624 21:33:50 -- scripts/common.sh@335 -- # read -ra ver1 00:11:30.624 21:33:50 -- scripts/common.sh@336 -- # IFS=.-: 00:11:30.624 21:33:50 -- scripts/common.sh@336 -- # read -ra ver2 00:11:30.624 21:33:50 -- scripts/common.sh@337 -- # local 'op=<' 00:11:30.624 21:33:50 -- scripts/common.sh@339 -- # ver1_l=2 00:11:30.624 21:33:50 -- scripts/common.sh@340 -- # ver2_l=1 00:11:30.624 21:33:50 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:11:30.624 21:33:50 -- scripts/common.sh@343 -- # case "$op" in 00:11:30.624 21:33:50 -- scripts/common.sh@344 -- # : 1 00:11:30.624 21:33:50 -- scripts/common.sh@363 -- # (( v = 0 )) 00:11:30.624 21:33:50 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:30.624 21:33:50 -- scripts/common.sh@364 -- # decimal 1 00:11:30.624 21:33:50 -- scripts/common.sh@352 -- # local d=1 00:11:30.624 21:33:50 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:30.624 21:33:50 -- scripts/common.sh@354 -- # echo 1 00:11:30.624 21:33:50 -- scripts/common.sh@364 -- # ver1[v]=1 00:11:30.624 21:33:50 -- scripts/common.sh@365 -- # decimal 2 00:11:30.624 21:33:50 -- scripts/common.sh@352 -- # local d=2 00:11:30.624 21:33:50 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:30.624 21:33:50 -- scripts/common.sh@354 -- # echo 2 00:11:30.624 21:33:50 -- scripts/common.sh@365 -- # ver2[v]=2 00:11:30.624 21:33:50 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:11:30.624 21:33:50 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:11:30.624 21:33:50 -- scripts/common.sh@367 -- # return 0 00:11:30.624 21:33:51 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:30.624 21:33:51 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:11:30.624 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:30.624 --rc genhtml_branch_coverage=1 00:11:30.624 --rc genhtml_function_coverage=1 00:11:30.624 --rc genhtml_legend=1 00:11:30.624 --rc geninfo_all_blocks=1 00:11:30.624 --rc geninfo_unexecuted_blocks=1 00:11:30.624 00:11:30.624 ' 00:11:30.624 21:33:51 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:11:30.624 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:30.624 --rc genhtml_branch_coverage=1 00:11:30.624 --rc genhtml_function_coverage=1 00:11:30.624 --rc genhtml_legend=1 00:11:30.624 --rc geninfo_all_blocks=1 00:11:30.624 --rc geninfo_unexecuted_blocks=1 00:11:30.624 00:11:30.624 ' 00:11:30.624 21:33:51 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:11:30.624 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:30.624 --rc genhtml_branch_coverage=1 00:11:30.624 --rc genhtml_function_coverage=1 00:11:30.624 --rc genhtml_legend=1 00:11:30.624 --rc geninfo_all_blocks=1 00:11:30.624 --rc geninfo_unexecuted_blocks=1 00:11:30.624 00:11:30.624 ' 00:11:30.624 21:33:51 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:11:30.624 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:30.624 --rc genhtml_branch_coverage=1 00:11:30.624 --rc genhtml_function_coverage=1 00:11:30.624 --rc genhtml_legend=1 00:11:30.624 --rc geninfo_all_blocks=1 00:11:30.624 --rc geninfo_unexecuted_blocks=1 00:11:30.624 00:11:30.624 ' 00:11:30.624 21:33:51 -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:11:30.624 21:33:51 -- bdev/nbd_common.sh@6 -- # set -e 00:11:30.624 21:33:51 -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:11:30.624 21:33:51 -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:11:30.624 21:33:51 -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:11:30.624 21:33:51 -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:11:30.624 21:33:51 -- bdev/blockdev.sh@18 -- # : 00:11:30.624 21:33:51 -- bdev/blockdev.sh@668 -- # QOS_DEV_1=Malloc_0 00:11:30.624 21:33:51 -- bdev/blockdev.sh@669 -- # QOS_DEV_2=Null_1 00:11:30.624 21:33:51 -- bdev/blockdev.sh@670 -- # QOS_RUN_TIME=5 00:11:30.624 21:33:51 -- bdev/blockdev.sh@672 -- # uname -s 00:11:30.624 21:33:51 -- bdev/blockdev.sh@672 -- # '[' Linux = Linux ']' 00:11:30.624 21:33:51 -- 
bdev/blockdev.sh@674 -- # PRE_RESERVED_MEM=0 00:11:30.624 21:33:51 -- bdev/blockdev.sh@680 -- # test_type=bdev 00:11:30.624 21:33:51 -- bdev/blockdev.sh@681 -- # crypto_device= 00:11:30.624 21:33:51 -- bdev/blockdev.sh@682 -- # dek= 00:11:30.624 21:33:51 -- bdev/blockdev.sh@683 -- # env_ctx= 00:11:30.624 21:33:51 -- bdev/blockdev.sh@684 -- # wait_for_rpc= 00:11:30.624 21:33:51 -- bdev/blockdev.sh@685 -- # '[' -n '' ']' 00:11:30.624 21:33:51 -- bdev/blockdev.sh@688 -- # [[ bdev == bdev ]] 00:11:30.624 21:33:51 -- bdev/blockdev.sh@689 -- # wait_for_rpc=--wait-for-rpc 00:11:30.624 21:33:51 -- bdev/blockdev.sh@691 -- # start_spdk_tgt 00:11:30.624 21:33:51 -- bdev/blockdev.sh@45 -- # spdk_tgt_pid=65432 00:11:30.624 21:33:51 -- bdev/blockdev.sh@46 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:11:30.624 21:33:51 -- bdev/blockdev.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' --wait-for-rpc 00:11:30.624 21:33:51 -- bdev/blockdev.sh@47 -- # waitforlisten 65432 00:11:30.624 21:33:51 -- common/autotest_common.sh@829 -- # '[' -z 65432 ']' 00:11:30.624 21:33:51 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:30.624 21:33:51 -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:30.624 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:30.624 21:33:51 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:30.624 21:33:51 -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:30.624 21:33:51 -- common/autotest_common.sh@10 -- # set +x 00:11:30.624 [2024-12-06 21:33:51.085756] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:11:30.624 [2024-12-06 21:33:51.085929] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65432 ] 00:11:30.882 [2024-12-06 21:33:51.250299] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:31.140 [2024-12-06 21:33:51.436934] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:11:31.140 [2024-12-06 21:33:51.437191] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:31.707 21:33:51 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:31.707 21:33:51 -- common/autotest_common.sh@862 -- # return 0 00:11:31.707 21:33:51 -- bdev/blockdev.sh@692 -- # case "$test_type" in 00:11:31.707 21:33:51 -- bdev/blockdev.sh@694 -- # setup_bdev_conf 00:11:31.707 21:33:51 -- bdev/blockdev.sh@51 -- # rpc_cmd 00:11:31.707 21:33:51 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:31.707 21:33:51 -- common/autotest_common.sh@10 -- # set +x 00:11:32.275 [2024-12-06 21:33:52.615628] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:11:32.275 [2024-12-06 21:33:52.615707] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:11:32.275 00:11:32.275 [2024-12-06 21:33:52.623583] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:11:32.275 [2024-12-06 21:33:52.623627] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:11:32.275 00:11:32.275 Malloc0 00:11:32.275 Malloc1 00:11:32.275 Malloc2 00:11:32.275 Malloc3 00:11:32.534 Malloc4 00:11:32.534 
Malloc5 00:11:32.534 Malloc6 00:11:32.534 Malloc7 00:11:32.534 Malloc8 00:11:32.534 Malloc9 00:11:32.534 [2024-12-06 21:33:52.970175] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:11:32.534 [2024-12-06 21:33:52.970251] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:32.534 [2024-12-06 21:33:52.970281] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x51600000c380 00:11:32.534 [2024-12-06 21:33:52.970294] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:32.534 [2024-12-06 21:33:52.972789] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:32.534 [2024-12-06 21:33:52.972859] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT 00:11:32.534 TestPT 00:11:32.534 21:33:53 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:32.534 21:33:53 -- bdev/blockdev.sh@74 -- # dd if=/dev/zero of=/home/vagrant/spdk_repo/spdk/test/bdev/aiofile bs=2048 count=5000 00:11:32.793 5000+0 records in 00:11:32.793 5000+0 records out 00:11:32.793 10240000 bytes (10 MB, 9.8 MiB) copied, 0.0195198 s, 525 MB/s 00:11:32.793 21:33:53 -- bdev/blockdev.sh@75 -- # rpc_cmd bdev_aio_create /home/vagrant/spdk_repo/spdk/test/bdev/aiofile AIO0 2048 00:11:32.793 21:33:53 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:32.793 21:33:53 -- common/autotest_common.sh@10 -- # set +x 00:11:32.793 AIO0 00:11:32.793 21:33:53 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:32.793 21:33:53 -- bdev/blockdev.sh@735 -- # rpc_cmd bdev_wait_for_examine 00:11:32.793 21:33:53 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:32.793 21:33:53 -- common/autotest_common.sh@10 -- # set +x 00:11:32.793 21:33:53 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:32.793 21:33:53 -- bdev/blockdev.sh@738 -- # cat 00:11:32.793 21:33:53 -- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n accel 00:11:32.793 21:33:53 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:32.793 21:33:53 -- common/autotest_common.sh@10 -- # set +x 00:11:32.793 21:33:53 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:32.793 21:33:53 -- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n bdev 00:11:32.793 21:33:53 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:32.793 21:33:53 -- common/autotest_common.sh@10 -- # set +x 00:11:32.793 21:33:53 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:32.793 21:33:53 -- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n iobuf 00:11:32.793 21:33:53 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:32.793 21:33:53 -- common/autotest_common.sh@10 -- # set +x 00:11:32.793 21:33:53 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:32.793 21:33:53 -- bdev/blockdev.sh@746 -- # mapfile -t bdevs 00:11:32.793 21:33:53 -- bdev/blockdev.sh@746 -- # rpc_cmd bdev_get_bdevs 00:11:32.793 21:33:53 -- bdev/blockdev.sh@746 -- # jq -r '.[] | select(.claimed == false)' 00:11:32.793 21:33:53 -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:32.793 21:33:53 -- common/autotest_common.sh@10 -- # set +x 00:11:33.055 21:33:53 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:33.055 21:33:53 -- bdev/blockdev.sh@747 -- # mapfile -t bdevs_name 00:11:33.055 21:33:53 -- bdev/blockdev.sh@747 -- # jq -r .name 00:11:33.056 21:33:53 -- bdev/blockdev.sh@747 -- # printf '%s\n' '{' ' "name": "Malloc0",' ' "aliases": [' ' "fc561d1b-591c-4214-a1a9-152e584b27fa"' ' ],' ' 
"product_name": "Malloc disk",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "fc561d1b-591c-4214-a1a9-152e584b27fa",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 20000,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {}' '}' '{' ' "name": "Malloc1p0",' ' "aliases": [' ' "e9b73305-0166-5820-8e5d-33b96bc4e0cf"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 32768,' ' "uuid": "e9b73305-0166-5820-8e5d-33b96bc4e0cf",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc1",' ' "offset_blocks": 0' ' }' ' }' '}' '{' ' "name": "Malloc1p1",' ' "aliases": [' ' "6e915eaf-0d76-5f4f-80f3-746ed3490b50"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 32768,' ' "uuid": "6e915eaf-0d76-5f4f-80f3-746ed3490b50",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc1",' ' "offset_blocks": 32768' ' }' ' }' '}' '{' ' "name": "Malloc2p0",' ' "aliases": [' ' "7de26293-dbab-587c-8868-9a4ae4b4b2fd"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "7de26293-dbab-587c-8868-9a4ae4b4b2fd",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 0' ' }' ' }' '}' '{' ' "name": "Malloc2p1",' ' "aliases": [' ' "ad4db46d-0563-5409-9ee3-6e1267d4c526"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "ad4db46d-0563-5409-9ee3-6e1267d4c526",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' 
' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 8192' ' }' ' }' '}' '{' ' "name": "Malloc2p2",' ' "aliases": [' ' "8ca2f68c-0df3-5fa8-a45e-ac906d79b842"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "8ca2f68c-0df3-5fa8-a45e-ac906d79b842",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 16384' ' }' ' }' '}' '{' ' "name": "Malloc2p3",' ' "aliases": [' ' "0a545c12-d22d-50d1-889c-02223b6ca173"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "0a545c12-d22d-50d1-889c-02223b6ca173",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 24576' ' }' ' }' '}' '{' ' "name": "Malloc2p4",' ' "aliases": [' ' "3da8c24e-47b1-5151-bcd9-9b6ab1293ff1"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "3da8c24e-47b1-5151-bcd9-9b6ab1293ff1",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 32768' ' }' ' }' '}' '{' ' "name": "Malloc2p5",' ' "aliases": [' ' "50e42ff9-56cb-5c63-995e-4579a1cdc08d"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "50e42ff9-56cb-5c63-995e-4579a1cdc08d",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 40960' ' }' ' }' '}' '{' ' "name": "Malloc2p6",' ' "aliases": [' ' "bd0e04ee-67fc-588d-8765-2f15ae0f8360"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "bd0e04ee-67fc-588d-8765-2f15ae0f8360",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' 
"rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 49152' ' }' ' }' '}' '{' ' "name": "Malloc2p7",' ' "aliases": [' ' "0f393409-9e3b-5cc8-bf12-17e2e84563d6"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "0f393409-9e3b-5cc8-bf12-17e2e84563d6",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 57344' ' }' ' }' '}' '{' ' "name": "TestPT",' ' "aliases": [' ' "36050b0c-6140-598f-976d-ab4dc39fc87b"' ' ],' ' "product_name": "passthru",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "36050b0c-6140-598f-976d-ab4dc39fc87b",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "passthru": {' ' "name": "TestPT",' ' "base_bdev_name": "Malloc3"' ' }' ' }' '}' '{' ' "name": "raid0",' ' "aliases": [' ' "c9ebc370-a4b7-4f5e-8d6d-d2d55635b680"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "c9ebc370-a4b7-4f5e-8d6d-d2d55635b680",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "c9ebc370-a4b7-4f5e-8d6d-d2d55635b680",' ' "strip_size_kb": 64,' ' "state": "online",' ' "raid_level": "raid0",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc4",' ' "uuid": "0035ee72-8d1f-4ebf-bc26-cb9fb0eb23c1",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc5",' ' "uuid": "0672dca1-8865-4234-b229-0cdfed22c1eb",' ' "is_configured": true,' ' "data_offset": 0,' ' 
"data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "concat0",' ' "aliases": [' ' "778bcae9-5468-4fc8-b7ce-f38f034ef686"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "778bcae9-5468-4fc8-b7ce-f38f034ef686",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "778bcae9-5468-4fc8-b7ce-f38f034ef686",' ' "strip_size_kb": 64,' ' "state": "online",' ' "raid_level": "concat",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc6",' ' "uuid": "b1ed11d1-ef58-40d8-ba63-1358bd2b16ae",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc7",' ' "uuid": "df5e07ab-09af-4062-b4fb-08ead138a5a1",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "raid1",' ' "aliases": [' ' "a1800cf4-baec-4f3b-9ac7-320c08782cc4"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "a1800cf4-baec-4f3b-9ac7-320c08782cc4",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "a1800cf4-baec-4f3b-9ac7-320c08782cc4",' ' "strip_size_kb": 0,' ' "state": "online",' ' "raid_level": "raid1",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc8",' ' "uuid": "7996097a-9eb9-4252-a6ae-ed3b929a5f2a",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc9",' ' "uuid": "77ecd56a-bdba-48e4-a269-cc7adc62017f",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "AIO0",' ' "aliases": [' ' "9efd786f-acbd-4708-95a9-cef962d2666c"' ' ],' ' "product_name": "AIO disk",' ' "block_size": 2048,' ' "num_blocks": 5000,' ' "uuid": "9efd786f-acbd-4708-95a9-cef962d2666c",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' 
"compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "aio": {' ' "filename": "/home/vagrant/spdk_repo/spdk/test/bdev/aiofile",' ' "block_size_override": true,' ' "readonly": false' ' }' ' }' '}' 00:11:33.056 21:33:53 -- bdev/blockdev.sh@748 -- # bdev_list=("${bdevs_name[@]}") 00:11:33.056 21:33:53 -- bdev/blockdev.sh@750 -- # hello_world_bdev=Malloc0 00:11:33.056 21:33:53 -- bdev/blockdev.sh@751 -- # trap - SIGINT SIGTERM EXIT 00:11:33.056 21:33:53 -- bdev/blockdev.sh@752 -- # killprocess 65432 00:11:33.056 21:33:53 -- common/autotest_common.sh@936 -- # '[' -z 65432 ']' 00:11:33.056 21:33:53 -- common/autotest_common.sh@940 -- # kill -0 65432 00:11:33.056 21:33:53 -- common/autotest_common.sh@941 -- # uname 00:11:33.056 21:33:53 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:11:33.056 21:33:53 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 65432 00:11:33.056 21:33:53 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:11:33.056 killing process with pid 65432 00:11:33.056 21:33:53 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:11:33.056 21:33:53 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 65432' 00:11:33.056 21:33:53 -- common/autotest_common.sh@955 -- # kill 65432 00:11:33.056 21:33:53 -- common/autotest_common.sh@960 -- # wait 65432 00:11:35.587 21:33:56 -- bdev/blockdev.sh@756 -- # trap cleanup SIGINT SIGTERM EXIT 00:11:35.587 21:33:56 -- bdev/blockdev.sh@758 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Malloc0 '' 00:11:35.587 21:33:56 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:11:35.587 21:33:56 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:35.587 21:33:56 -- common/autotest_common.sh@10 -- # set +x 00:11:35.587 ************************************ 00:11:35.587 START TEST bdev_hello_world 00:11:35.587 ************************************ 00:11:35.587 21:33:56 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Malloc0 '' 00:11:35.845 [2024-12-06 21:33:56.128758] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:11:35.845 [2024-12-06 21:33:56.128961] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65505 ] 00:11:35.845 [2024-12-06 21:33:56.298871] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:36.109 [2024-12-06 21:33:56.460685] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:36.368 [2024-12-06 21:33:56.782354] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:11:36.368 [2024-12-06 21:33:56.782452] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:11:36.368 [2024-12-06 21:33:56.790314] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:11:36.368 [2024-12-06 21:33:56.790374] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:11:36.368 [2024-12-06 21:33:56.798332] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:11:36.368 [2024-12-06 21:33:56.798387] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:11:36.368 [2024-12-06 21:33:56.798403] vbdev_passthru.c: 731:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:11:36.627 [2024-12-06 21:33:56.959350] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:11:36.627 [2024-12-06 21:33:56.959429] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:36.627 [2024-12-06 21:33:56.959463] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000009980 00:11:36.627 [2024-12-06 21:33:56.959483] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:36.627 [2024-12-06 21:33:56.961807] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:36.627 [2024-12-06 21:33:56.961874] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT 00:11:36.886 [2024-12-06 21:33:57.218601] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:11:36.886 [2024-12-06 21:33:57.218696] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Malloc0 00:11:36.886 [2024-12-06 21:33:57.218740] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:11:36.886 [2024-12-06 21:33:57.218810] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:11:36.886 [2024-12-06 21:33:57.218885] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:11:36.886 [2024-12-06 21:33:57.218909] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:11:36.886 [2024-12-06 21:33:57.218959] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
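[Note: the hello_bdev output around here comes from the single invocation already shown in the xtrace: the example app loads a bdev JSON config, opens the named bdev, writes "Hello World!", and reads it back. A minimal by-hand sketch using the exact paths from this run (Malloc0 is assumed to be defined in bdev.json, as it is here):

    # Open Malloc0 from the test config, write "Hello World!", read it
    # back, and exit; the run_test wrapper treats a non-zero exit as a
    # test failure.
    /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev \
        --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
        -b Malloc0
]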
00:11:36.886 00:11:36.886 [2024-12-06 21:33:57.218992] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:11:38.787 00:11:38.787 real 0m2.933s 00:11:38.787 user 0m2.486s 00:11:38.787 sys 0m0.320s 00:11:38.787 21:33:58 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:11:38.787 ************************************ 00:11:38.787 21:33:58 -- common/autotest_common.sh@10 -- # set +x 00:11:38.787 END TEST bdev_hello_world 00:11:38.787 ************************************ 00:11:38.787 21:33:59 -- bdev/blockdev.sh@759 -- # run_test bdev_bounds bdev_bounds '' 00:11:38.787 21:33:59 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:11:38.787 21:33:59 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:38.787 21:33:59 -- common/autotest_common.sh@10 -- # set +x 00:11:38.787 ************************************ 00:11:38.787 START TEST bdev_bounds 00:11:38.787 ************************************ 00:11:38.787 21:33:59 -- common/autotest_common.sh@1114 -- # bdev_bounds '' 00:11:38.787 21:33:59 -- bdev/blockdev.sh@288 -- # bdevio_pid=65558 00:11:38.787 21:33:59 -- bdev/blockdev.sh@287 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:11:38.787 21:33:59 -- bdev/blockdev.sh@289 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:11:38.787 Process bdevio pid: 65558 00:11:38.787 21:33:59 -- bdev/blockdev.sh@290 -- # echo 'Process bdevio pid: 65558' 00:11:38.787 21:33:59 -- bdev/blockdev.sh@291 -- # waitforlisten 65558 00:11:38.787 21:33:59 -- common/autotest_common.sh@829 -- # '[' -z 65558 ']' 00:11:38.787 21:33:59 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:38.787 21:33:59 -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:38.787 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:38.787 21:33:59 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:38.787 21:33:59 -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:38.787 21:33:59 -- common/autotest_common.sh@10 -- # set +x 00:11:38.787 [2024-12-06 21:33:59.103281] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:11:38.787 [2024-12-06 21:33:59.103483] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65558 ] 00:11:38.787 [2024-12-06 21:33:59.275280] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:11:39.044 [2024-12-06 21:33:59.441823] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:39.044 [2024-12-06 21:33:59.441906] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:39.044 [2024-12-06 21:33:59.441945] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:11:39.319 [2024-12-06 21:33:59.767781] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:11:39.319 [2024-12-06 21:33:59.767901] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:11:39.319 [2024-12-06 21:33:59.775739] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:11:39.319 [2024-12-06 21:33:59.775805] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:11:39.319 [2024-12-06 21:33:59.783758] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:11:39.319 [2024-12-06 21:33:59.783815] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:11:39.319 [2024-12-06 21:33:59.783842] vbdev_passthru.c: 731:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:11:39.593 [2024-12-06 21:33:59.954896] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:11:39.593 [2024-12-06 21:33:59.954979] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:39.593 [2024-12-06 21:33:59.955013] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000009980 00:11:39.593 [2024-12-06 21:33:59.955028] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:39.593 [2024-12-06 21:33:59.957921] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:39.593 [2024-12-06 21:33:59.957978] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT 00:11:40.527 21:34:00 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:40.527 21:34:00 -- common/autotest_common.sh@862 -- # return 0 00:11:40.527 21:34:00 -- bdev/blockdev.sh@292 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:11:40.527 I/O targets: 00:11:40.527 Malloc0: 65536 blocks of 512 bytes (32 MiB) 00:11:40.527 Malloc1p0: 32768 blocks of 512 bytes (16 MiB) 00:11:40.527 Malloc1p1: 32768 blocks of 512 bytes (16 MiB) 00:11:40.527 Malloc2p0: 8192 blocks of 512 bytes (4 MiB) 00:11:40.527 Malloc2p1: 8192 blocks of 512 bytes (4 MiB) 00:11:40.527 Malloc2p2: 8192 blocks of 512 bytes (4 MiB) 00:11:40.527 Malloc2p3: 8192 blocks of 512 bytes (4 MiB) 00:11:40.527 Malloc2p4: 8192 blocks of 512 bytes (4 MiB) 00:11:40.527 Malloc2p5: 8192 blocks of 512 bytes (4 MiB) 00:11:40.527 Malloc2p6: 8192 blocks of 512 bytes (4 MiB) 00:11:40.527 Malloc2p7: 8192 blocks of 512 bytes (4 MiB) 00:11:40.527 TestPT: 65536 blocks of 512 bytes (32 MiB) 00:11:40.527 raid0: 131072 blocks of 512 bytes (64 MiB) 00:11:40.527 concat0: 131072 blocks of 512 bytes (64 MiB) 00:11:40.527 raid1: 65536 blocks of 512 bytes (32 MiB) 00:11:40.527 AIO0: 5000 blocks of 2048 bytes (10 MiB) 
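[Note: the suites below are driven by the two-step handshake visible in the xtrace above: bdevio starts in wait mode, the script waits for its RPC socket (waitforlisten 65558), and tests.py then triggers the cases. A rough sketch of that flow, with paths as in this run; -s carries the PRE_RESERVED_MEM value set earlier in blockdev.sh, and the backgrounding/cleanup shown here is illustrative (the real script uses a trap with killprocess):

    # Start bdevio waiting for an RPC trigger (-w), then run all suites
    # against the bdevs defined in bdev.json.
    /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 \
        --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' &
    bdevio_pid=$!
    /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests
    kill "$bdevio_pid"
]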
00:11:40.527 00:11:40.527 00:11:40.527 CUnit - A unit testing framework for C - Version 2.1-3 00:11:40.527 http://cunit.sourceforge.net/ 00:11:40.527 00:11:40.527 00:11:40.527 Suite: bdevio tests on: AIO0 00:11:40.527 Test: blockdev write read block ...passed 00:11:40.527 Test: blockdev write zeroes read block ...passed 00:11:40.527 Test: blockdev write zeroes read no split ...passed 00:11:40.527 Test: blockdev write zeroes read split ...passed 00:11:40.527 Test: blockdev write zeroes read split partial ...passed 00:11:40.527 Test: blockdev reset ...passed 00:11:40.527 Test: blockdev write read 8 blocks ...passed 00:11:40.527 Test: blockdev write read size > 128k ...passed 00:11:40.527 Test: blockdev write read invalid size ...passed 00:11:40.527 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:11:40.527 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:11:40.527 Test: blockdev write read max offset ...passed 00:11:40.527 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:11:40.527 Test: blockdev writev readv 8 blocks ...passed 00:11:40.527 Test: blockdev writev readv 30 x 1block ...passed 00:11:40.527 Test: blockdev writev readv block ...passed 00:11:40.527 Test: blockdev writev readv size > 128k ...passed 00:11:40.527 Test: blockdev writev readv size > 128k in two iovs ...passed 00:11:40.527 Test: blockdev comparev and writev ...passed 00:11:40.527 Test: blockdev nvme passthru rw ...passed 00:11:40.527 Test: blockdev nvme passthru vendor specific ...passed 00:11:40.527 Test: blockdev nvme admin passthru ...passed 00:11:40.527 Test: blockdev copy ...passed 00:11:40.527 Suite: bdevio tests on: raid1 00:11:40.527 Test: blockdev write read block ...passed 00:11:40.527 Test: blockdev write zeroes read block ...passed 00:11:40.527 Test: blockdev write zeroes read no split ...passed 00:11:40.527 Test: blockdev write zeroes read split ...passed 00:11:40.527 Test: blockdev write zeroes read split partial ...passed 00:11:40.527 Test: blockdev reset ...passed 00:11:40.527 Test: blockdev write read 8 blocks ...passed 00:11:40.527 Test: blockdev write read size > 128k ...passed 00:11:40.527 Test: blockdev write read invalid size ...passed 00:11:40.527 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:11:40.527 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:11:40.527 Test: blockdev write read max offset ...passed 00:11:40.527 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:11:40.527 Test: blockdev writev readv 8 blocks ...passed 00:11:40.527 Test: blockdev writev readv 30 x 1block ...passed 00:11:40.527 Test: blockdev writev readv block ...passed 00:11:40.527 Test: blockdev writev readv size > 128k ...passed 00:11:40.527 Test: blockdev writev readv size > 128k in two iovs ...passed 00:11:40.527 Test: blockdev comparev and writev ...passed 00:11:40.527 Test: blockdev nvme passthru rw ...passed 00:11:40.527 Test: blockdev nvme passthru vendor specific ...passed 00:11:40.527 Test: blockdev nvme admin passthru ...passed 00:11:40.527 Test: blockdev copy ...passed 00:11:40.527 Suite: bdevio tests on: concat0 00:11:40.527 Test: blockdev write read block ...passed 00:11:40.527 Test: blockdev write zeroes read block ...passed 00:11:40.527 Test: blockdev write zeroes read no split ...passed 00:11:40.527 Test: blockdev write zeroes read split ...passed 00:11:40.527 Test: blockdev write zeroes read split partial ...passed 00:11:40.527 Test: blockdev reset 
...passed 00:11:40.527 Test: blockdev write read 8 blocks ...passed 00:11:40.527 Test: blockdev write read size > 128k ...passed 00:11:40.527 Test: blockdev write read invalid size ...passed 00:11:40.527 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:11:40.527 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:11:40.527 Test: blockdev write read max offset ...passed 00:11:40.527 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:11:40.527 Test: blockdev writev readv 8 blocks ...passed 00:11:40.527 Test: blockdev writev readv 30 x 1block ...passed 00:11:40.527 Test: blockdev writev readv block ...passed 00:11:40.527 Test: blockdev writev readv size > 128k ...passed 00:11:40.527 Test: blockdev writev readv size > 128k in two iovs ...passed 00:11:40.527 Test: blockdev comparev and writev ...passed 00:11:40.527 Test: blockdev nvme passthru rw ...passed 00:11:40.527 Test: blockdev nvme passthru vendor specific ...passed 00:11:40.527 Test: blockdev nvme admin passthru ...passed 00:11:40.527 Test: blockdev copy ...passed 00:11:40.527 Suite: bdevio tests on: raid0 00:11:40.527 Test: blockdev write read block ...passed 00:11:40.527 Test: blockdev write zeroes read block ...passed 00:11:40.527 Test: blockdev write zeroes read no split ...passed 00:11:40.786 Test: blockdev write zeroes read split ...passed 00:11:40.786 Test: blockdev write zeroes read split partial ...passed 00:11:40.786 Test: blockdev reset ...passed 00:11:40.786 Test: blockdev write read 8 blocks ...passed 00:11:40.786 Test: blockdev write read size > 128k ...passed 00:11:40.786 Test: blockdev write read invalid size ...passed 00:11:40.786 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:11:40.786 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:11:40.786 Test: blockdev write read max offset ...passed 00:11:40.786 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:11:40.786 Test: blockdev writev readv 8 blocks ...passed 00:11:40.786 Test: blockdev writev readv 30 x 1block ...passed 00:11:40.786 Test: blockdev writev readv block ...passed 00:11:40.786 Test: blockdev writev readv size > 128k ...passed 00:11:40.786 Test: blockdev writev readv size > 128k in two iovs ...passed 00:11:40.786 Test: blockdev comparev and writev ...passed 00:11:40.786 Test: blockdev nvme passthru rw ...passed 00:11:40.786 Test: blockdev nvme passthru vendor specific ...passed 00:11:40.786 Test: blockdev nvme admin passthru ...passed 00:11:40.786 Test: blockdev copy ...passed 00:11:40.786 Suite: bdevio tests on: TestPT 00:11:40.786 Test: blockdev write read block ...passed 00:11:40.786 Test: blockdev write zeroes read block ...passed 00:11:40.786 Test: blockdev write zeroes read no split ...passed 00:11:40.786 Test: blockdev write zeroes read split ...passed 00:11:40.786 Test: blockdev write zeroes read split partial ...passed 00:11:40.786 Test: blockdev reset ...passed 00:11:40.786 Test: blockdev write read 8 blocks ...passed 00:11:40.786 Test: blockdev write read size > 128k ...passed 00:11:40.786 Test: blockdev write read invalid size ...passed 00:11:40.786 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:11:40.786 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:11:40.786 Test: blockdev write read max offset ...passed 00:11:40.786 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:11:40.786 Test: blockdev writev readv 8 blocks 
...passed 00:11:40.786 Test: blockdev writev readv 30 x 1block ...passed 00:11:40.786 Test: blockdev writev readv block ...passed 00:11:40.786 Test: blockdev writev readv size > 128k ...passed 00:11:40.786 Test: blockdev writev readv size > 128k in two iovs ...passed 00:11:40.786 Test: blockdev comparev and writev ...passed 00:11:40.786 Test: blockdev nvme passthru rw ...passed 00:11:40.786 Test: blockdev nvme passthru vendor specific ...passed 00:11:40.786 Test: blockdev nvme admin passthru ...passed 00:11:40.786 Test: blockdev copy ...passed 00:11:40.786 Suite: bdevio tests on: Malloc2p7 00:11:40.786 Test: blockdev write read block ...passed 00:11:40.786 Test: blockdev write zeroes read block ...passed 00:11:40.786 Test: blockdev write zeroes read no split ...passed 00:11:40.786 Test: blockdev write zeroes read split ...passed 00:11:40.786 Test: blockdev write zeroes read split partial ...passed 00:11:40.786 Test: blockdev reset ...passed 00:11:40.786 Test: blockdev write read 8 blocks ...passed 00:11:40.786 Test: blockdev write read size > 128k ...passed 00:11:40.786 Test: blockdev write read invalid size ...passed 00:11:40.786 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:11:40.786 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:11:40.786 Test: blockdev write read max offset ...passed 00:11:40.786 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:11:40.786 Test: blockdev writev readv 8 blocks ...passed 00:11:40.786 Test: blockdev writev readv 30 x 1block ...passed 00:11:40.786 Test: blockdev writev readv block ...passed 00:11:40.786 Test: blockdev writev readv size > 128k ...passed 00:11:40.786 Test: blockdev writev readv size > 128k in two iovs ...passed 00:11:40.786 Test: blockdev comparev and writev ...passed 00:11:40.786 Test: blockdev nvme passthru rw ...passed 00:11:40.786 Test: blockdev nvme passthru vendor specific ...passed 00:11:40.786 Test: blockdev nvme admin passthru ...passed 00:11:40.786 Test: blockdev copy ...passed 00:11:40.786 Suite: bdevio tests on: Malloc2p6 00:11:40.786 Test: blockdev write read block ...passed 00:11:40.786 Test: blockdev write zeroes read block ...passed 00:11:40.786 Test: blockdev write zeroes read no split ...passed 00:11:40.786 Test: blockdev write zeroes read split ...passed 00:11:40.786 Test: blockdev write zeroes read split partial ...passed 00:11:40.786 Test: blockdev reset ...passed 00:11:40.786 Test: blockdev write read 8 blocks ...passed 00:11:40.786 Test: blockdev write read size > 128k ...passed 00:11:40.786 Test: blockdev write read invalid size ...passed 00:11:40.786 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:11:40.786 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:11:40.786 Test: blockdev write read max offset ...passed 00:11:40.786 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:11:40.786 Test: blockdev writev readv 8 blocks ...passed 00:11:40.786 Test: blockdev writev readv 30 x 1block ...passed 00:11:40.786 Test: blockdev writev readv block ...passed 00:11:40.786 Test: blockdev writev readv size > 128k ...passed 00:11:40.786 Test: blockdev writev readv size > 128k in two iovs ...passed 00:11:40.786 Test: blockdev comparev and writev ...passed 00:11:40.786 Test: blockdev nvme passthru rw ...passed 00:11:40.786 Test: blockdev nvme passthru vendor specific ...passed 00:11:40.786 Test: blockdev nvme admin passthru ...passed 00:11:40.786 Test: blockdev copy ...passed 
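[Note: each Malloc2pN suite here runs against one eighth of Malloc2: the bdev_get_bdevs dump earlier lists eight 8192-block "Split Disk" vbdevs at offset_blocks 0, 8192, ..., 57344 over what is evidently a 65536-block base. A sketch of the equivalent setup call, assuming SPDK's split module and its bdev_split_create RPC (base bdev plus split count; the exact call is not shown in this log):

    # Carve Malloc2 into 8 equal splits of 8192 blocks each; the
    # offset_blocks values (N * 8192) match the JSON dump above.
    scripts/rpc.py bdev_split_create Malloc2 8
]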
00:11:40.786 Suite: bdevio tests on: Malloc2p5 00:11:40.786 Test: blockdev write read block ...passed 00:11:40.786 Test: blockdev write zeroes read block ...passed 00:11:40.786 Test: blockdev write zeroes read no split ...passed 00:11:41.044 Test: blockdev write zeroes read split ...passed 00:11:41.044 Test: blockdev write zeroes read split partial ...passed 00:11:41.044 Test: blockdev reset ...passed 00:11:41.044 Test: blockdev write read 8 blocks ...passed 00:11:41.044 Test: blockdev write read size > 128k ...passed 00:11:41.044 Test: blockdev write read invalid size ...passed 00:11:41.044 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:11:41.044 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:11:41.044 Test: blockdev write read max offset ...passed 00:11:41.044 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:11:41.044 Test: blockdev writev readv 8 blocks ...passed 00:11:41.044 Test: blockdev writev readv 30 x 1block ...passed 00:11:41.044 Test: blockdev writev readv block ...passed 00:11:41.044 Test: blockdev writev readv size > 128k ...passed 00:11:41.044 Test: blockdev writev readv size > 128k in two iovs ...passed 00:11:41.044 Test: blockdev comparev and writev ...passed 00:11:41.044 Test: blockdev nvme passthru rw ...passed 00:11:41.044 Test: blockdev nvme passthru vendor specific ...passed 00:11:41.044 Test: blockdev nvme admin passthru ...passed 00:11:41.044 Test: blockdev copy ...passed 00:11:41.044 Suite: bdevio tests on: Malloc2p4 00:11:41.044 Test: blockdev write read block ...passed 00:11:41.044 Test: blockdev write zeroes read block ...passed 00:11:41.044 Test: blockdev write zeroes read no split ...passed 00:11:41.044 Test: blockdev write zeroes read split ...passed 00:11:41.044 Test: blockdev write zeroes read split partial ...passed 00:11:41.044 Test: blockdev reset ...passed 00:11:41.044 Test: blockdev write read 8 blocks ...passed 00:11:41.044 Test: blockdev write read size > 128k ...passed 00:11:41.044 Test: blockdev write read invalid size ...passed 00:11:41.044 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:11:41.044 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:11:41.044 Test: blockdev write read max offset ...passed 00:11:41.044 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:11:41.044 Test: blockdev writev readv 8 blocks ...passed 00:11:41.044 Test: blockdev writev readv 30 x 1block ...passed 00:11:41.044 Test: blockdev writev readv block ...passed 00:11:41.044 Test: blockdev writev readv size > 128k ...passed 00:11:41.044 Test: blockdev writev readv size > 128k in two iovs ...passed 00:11:41.044 Test: blockdev comparev and writev ...passed 00:11:41.044 Test: blockdev nvme passthru rw ...passed 00:11:41.044 Test: blockdev nvme passthru vendor specific ...passed 00:11:41.044 Test: blockdev nvme admin passthru ...passed 00:11:41.044 Test: blockdev copy ...passed 00:11:41.044 Suite: bdevio tests on: Malloc2p3 00:11:41.044 Test: blockdev write read block ...passed 00:11:41.044 Test: blockdev write zeroes read block ...passed 00:11:41.044 Test: blockdev write zeroes read no split ...passed 00:11:41.044 Test: blockdev write zeroes read split ...passed 00:11:41.044 Test: blockdev write zeroes read split partial ...passed 00:11:41.044 Test: blockdev reset ...passed 00:11:41.044 Test: blockdev write read 8 blocks ...passed 00:11:41.044 Test: blockdev write read size > 128k ...passed 00:11:41.044 Test: 
blockdev write read invalid size ...passed 00:11:41.044 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:11:41.044 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:11:41.044 Test: blockdev write read max offset ...passed 00:11:41.044 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:11:41.044 Test: blockdev writev readv 8 blocks ...passed 00:11:41.044 Test: blockdev writev readv 30 x 1block ...passed 00:11:41.044 Test: blockdev writev readv block ...passed 00:11:41.044 Test: blockdev writev readv size > 128k ...passed 00:11:41.044 Test: blockdev writev readv size > 128k in two iovs ...passed 00:11:41.044 Test: blockdev comparev and writev ...passed 00:11:41.044 Test: blockdev nvme passthru rw ...passed 00:11:41.044 Test: blockdev nvme passthru vendor specific ...passed 00:11:41.045 Test: blockdev nvme admin passthru ...passed 00:11:41.045 Test: blockdev copy ...passed 00:11:41.045 Suite: bdevio tests on: Malloc2p2 00:11:41.045 Test: blockdev write read block ...passed 00:11:41.045 Test: blockdev write zeroes read block ...passed 00:11:41.045 Test: blockdev write zeroes read no split ...passed 00:11:41.045 Test: blockdev write zeroes read split ...passed 00:11:41.045 Test: blockdev write zeroes read split partial ...passed 00:11:41.045 Test: blockdev reset ...passed 00:11:41.045 Test: blockdev write read 8 blocks ...passed 00:11:41.045 Test: blockdev write read size > 128k ...passed 00:11:41.045 Test: blockdev write read invalid size ...passed 00:11:41.045 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:11:41.045 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:11:41.045 Test: blockdev write read max offset ...passed 00:11:41.045 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:11:41.045 Test: blockdev writev readv 8 blocks ...passed 00:11:41.045 Test: blockdev writev readv 30 x 1block ...passed 00:11:41.045 Test: blockdev writev readv block ...passed 00:11:41.045 Test: blockdev writev readv size > 128k ...passed 00:11:41.045 Test: blockdev writev readv size > 128k in two iovs ...passed 00:11:41.045 Test: blockdev comparev and writev ...passed 00:11:41.045 Test: blockdev nvme passthru rw ...passed 00:11:41.045 Test: blockdev nvme passthru vendor specific ...passed 00:11:41.045 Test: blockdev nvme admin passthru ...passed 00:11:41.045 Test: blockdev copy ...passed 00:11:41.045 Suite: bdevio tests on: Malloc2p1 00:11:41.045 Test: blockdev write read block ...passed 00:11:41.045 Test: blockdev write zeroes read block ...passed 00:11:41.045 Test: blockdev write zeroes read no split ...passed 00:11:41.045 Test: blockdev write zeroes read split ...passed 00:11:41.045 Test: blockdev write zeroes read split partial ...passed 00:11:41.045 Test: blockdev reset ...passed 00:11:41.045 Test: blockdev write read 8 blocks ...passed 00:11:41.045 Test: blockdev write read size > 128k ...passed 00:11:41.045 Test: blockdev write read invalid size ...passed 00:11:41.045 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:11:41.045 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:11:41.045 Test: blockdev write read max offset ...passed 00:11:41.045 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:11:41.045 Test: blockdev writev readv 8 blocks ...passed 00:11:41.045 Test: blockdev writev readv 30 x 1block ...passed 00:11:41.045 Test: blockdev writev readv block ...passed 
00:11:41.045 Test: blockdev writev readv size > 128k ...passed 00:11:41.045 Test: blockdev writev readv size > 128k in two iovs ...passed 00:11:41.045 Test: blockdev comparev and writev ...passed 00:11:41.045 Test: blockdev nvme passthru rw ...passed 00:11:41.045 Test: blockdev nvme passthru vendor specific ...passed 00:11:41.045 Test: blockdev nvme admin passthru ...passed 00:11:41.045 Test: blockdev copy ...passed 00:11:41.045 Suite: bdevio tests on: Malloc2p0 00:11:41.045 Test: blockdev write read block ...passed 00:11:41.045 Test: blockdev write zeroes read block ...passed 00:11:41.045 Test: blockdev write zeroes read no split ...passed 00:11:41.303 Test: blockdev write zeroes read split ...passed 00:11:41.303 Test: blockdev write zeroes read split partial ...passed 00:11:41.303 Test: blockdev reset ...passed 00:11:41.303 Test: blockdev write read 8 blocks ...passed 00:11:41.303 Test: blockdev write read size > 128k ...passed 00:11:41.303 Test: blockdev write read invalid size ...passed 00:11:41.303 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:11:41.303 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:11:41.303 Test: blockdev write read max offset ...passed 00:11:41.303 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:11:41.303 Test: blockdev writev readv 8 blocks ...passed 00:11:41.303 Test: blockdev writev readv 30 x 1block ...passed 00:11:41.303 Test: blockdev writev readv block ...passed 00:11:41.303 Test: blockdev writev readv size > 128k ...passed 00:11:41.303 Test: blockdev writev readv size > 128k in two iovs ...passed 00:11:41.303 Test: blockdev comparev and writev ...passed 00:11:41.303 Test: blockdev nvme passthru rw ...passed 00:11:41.303 Test: blockdev nvme passthru vendor specific ...passed 00:11:41.303 Test: blockdev nvme admin passthru ...passed 00:11:41.303 Test: blockdev copy ...passed 00:11:41.303 Suite: bdevio tests on: Malloc1p1 00:11:41.303 Test: blockdev write read block ...passed 00:11:41.303 Test: blockdev write zeroes read block ...passed 00:11:41.303 Test: blockdev write zeroes read no split ...passed 00:11:41.303 Test: blockdev write zeroes read split ...passed 00:11:41.303 Test: blockdev write zeroes read split partial ...passed 00:11:41.303 Test: blockdev reset ...passed 00:11:41.303 Test: blockdev write read 8 blocks ...passed 00:11:41.303 Test: blockdev write read size > 128k ...passed 00:11:41.303 Test: blockdev write read invalid size ...passed 00:11:41.303 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:11:41.303 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:11:41.303 Test: blockdev write read max offset ...passed 00:11:41.303 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:11:41.303 Test: blockdev writev readv 8 blocks ...passed 00:11:41.303 Test: blockdev writev readv 30 x 1block ...passed 00:11:41.303 Test: blockdev writev readv block ...passed 00:11:41.303 Test: blockdev writev readv size > 128k ...passed 00:11:41.303 Test: blockdev writev readv size > 128k in two iovs ...passed 00:11:41.303 Test: blockdev comparev and writev ...passed 00:11:41.303 Test: blockdev nvme passthru rw ...passed 00:11:41.303 Test: blockdev nvme passthru vendor specific ...passed 00:11:41.303 Test: blockdev nvme admin passthru ...passed 00:11:41.303 Test: blockdev copy ...passed 00:11:41.303 Suite: bdevio tests on: Malloc1p0 00:11:41.303 Test: blockdev write read block ...passed 00:11:41.303 Test: blockdev 
write zeroes read block ...passed 00:11:41.303 Test: blockdev write zeroes read no split ...passed 00:11:41.303 Test: blockdev write zeroes read split ...passed 00:11:41.303 Test: blockdev write zeroes read split partial ...passed 00:11:41.303 Test: blockdev reset ...passed 00:11:41.303 Test: blockdev write read 8 blocks ...passed 00:11:41.303 Test: blockdev write read size > 128k ...passed 00:11:41.303 Test: blockdev write read invalid size ...passed 00:11:41.303 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:11:41.303 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:11:41.303 Test: blockdev write read max offset ...passed 00:11:41.303 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:11:41.303 Test: blockdev writev readv 8 blocks ...passed 00:11:41.303 Test: blockdev writev readv 30 x 1block ...passed 00:11:41.303 Test: blockdev writev readv block ...passed 00:11:41.303 Test: blockdev writev readv size > 128k ...passed 00:11:41.303 Test: blockdev writev readv size > 128k in two iovs ...passed 00:11:41.303 Test: blockdev comparev and writev ...passed 00:11:41.303 Test: blockdev nvme passthru rw ...passed 00:11:41.303 Test: blockdev nvme passthru vendor specific ...passed 00:11:41.303 Test: blockdev nvme admin passthru ...passed 00:11:41.303 Test: blockdev copy ...passed 00:11:41.303 Suite: bdevio tests on: Malloc0 00:11:41.303 Test: blockdev write read block ...passed 00:11:41.303 Test: blockdev write zeroes read block ...passed 00:11:41.303 Test: blockdev write zeroes read no split ...passed 00:11:41.303 Test: blockdev write zeroes read split ...passed 00:11:41.303 Test: blockdev write zeroes read split partial ...passed 00:11:41.303 Test: blockdev reset ...passed 00:11:41.303 Test: blockdev write read 8 blocks ...passed 00:11:41.303 Test: blockdev write read size > 128k ...passed 00:11:41.303 Test: blockdev write read invalid size ...passed 00:11:41.303 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:11:41.303 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:11:41.303 Test: blockdev write read max offset ...passed 00:11:41.303 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:11:41.303 Test: blockdev writev readv 8 blocks ...passed 00:11:41.303 Test: blockdev writev readv 30 x 1block ...passed 00:11:41.303 Test: blockdev writev readv block ...passed 00:11:41.303 Test: blockdev writev readv size > 128k ...passed 00:11:41.303 Test: blockdev writev readv size > 128k in two iovs ...passed 00:11:41.303 Test: blockdev comparev and writev ...passed 00:11:41.303 Test: blockdev nvme passthru rw ...passed 00:11:41.303 Test: blockdev nvme passthru vendor specific ...passed 00:11:41.303 Test: blockdev nvme admin passthru ...passed 00:11:41.303 Test: blockdev copy ...passed 00:11:41.303 00:11:41.303 Run Summary: Type Total Ran Passed Failed Inactive 00:11:41.303 suites 16 16 n/a 0 0 00:11:41.303 tests 368 368 368 0 0 00:11:41.303 asserts 2224 2224 2224 0 n/a 00:11:41.303 00:11:41.303 Elapsed time = 2.674 seconds 00:11:41.303 0 00:11:41.303 21:34:01 -- bdev/blockdev.sh@293 -- # killprocess 65558 00:11:41.303 21:34:01 -- common/autotest_common.sh@936 -- # '[' -z 65558 ']' 00:11:41.303 21:34:01 -- common/autotest_common.sh@940 -- # kill -0 65558 00:11:41.303 21:34:01 -- common/autotest_common.sh@941 -- # uname 00:11:41.303 21:34:01 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:11:41.303 21:34:01 -- common/autotest_common.sh@942 
-- # ps --no-headers -o comm= 65558 00:11:41.562 21:34:01 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:11:41.562 21:34:01 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:11:41.562 21:34:01 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 65558' 00:11:41.562 killing process with pid 65558 00:11:41.562 21:34:01 -- common/autotest_common.sh@955 -- # kill 65558 00:11:41.562 21:34:01 -- common/autotest_common.sh@960 -- # wait 65558 00:11:43.465 21:34:03 -- bdev/blockdev.sh@294 -- # trap - SIGINT SIGTERM EXIT 00:11:43.465 00:11:43.465 real 0m4.423s 00:11:43.465 user 0m11.585s 00:11:43.465 sys 0m0.582s 00:11:43.465 21:34:03 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:11:43.465 ************************************ 00:11:43.465 END TEST bdev_bounds 00:11:43.465 21:34:03 -- common/autotest_common.sh@10 -- # set +x 00:11:43.465 ************************************ 00:11:43.465 21:34:03 -- bdev/blockdev.sh@760 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Malloc0 Malloc1p0 Malloc1p1 Malloc2p0 Malloc2p1 Malloc2p2 Malloc2p3 Malloc2p4 Malloc2p5 Malloc2p6 Malloc2p7 TestPT raid0 concat0 raid1 AIO0' '' 00:11:43.465 21:34:03 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:11:43.465 21:34:03 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:43.465 21:34:03 -- common/autotest_common.sh@10 -- # set +x 00:11:43.465 ************************************ 00:11:43.465 START TEST bdev_nbd 00:11:43.465 ************************************ 00:11:43.465 21:34:03 -- common/autotest_common.sh@1114 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Malloc0 Malloc1p0 Malloc1p1 Malloc2p0 Malloc2p1 Malloc2p2 Malloc2p3 Malloc2p4 Malloc2p5 Malloc2p6 Malloc2p7 TestPT raid0 concat0 raid1 AIO0' '' 00:11:43.465 21:34:03 -- bdev/blockdev.sh@298 -- # uname -s 00:11:43.465 21:34:03 -- bdev/blockdev.sh@298 -- # [[ Linux == Linux ]] 00:11:43.465 21:34:03 -- bdev/blockdev.sh@300 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:43.465 21:34:03 -- bdev/blockdev.sh@301 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:11:43.465 21:34:03 -- bdev/blockdev.sh@302 -- # bdev_all=('Malloc0' 'Malloc1p0' 'Malloc1p1' 'Malloc2p0' 'Malloc2p1' 'Malloc2p2' 'Malloc2p3' 'Malloc2p4' 'Malloc2p5' 'Malloc2p6' 'Malloc2p7' 'TestPT' 'raid0' 'concat0' 'raid1' 'AIO0') 00:11:43.465 21:34:03 -- bdev/blockdev.sh@302 -- # local bdev_all 00:11:43.465 21:34:03 -- bdev/blockdev.sh@303 -- # local bdev_num=16 00:11:43.465 21:34:03 -- bdev/blockdev.sh@307 -- # [[ -e /sys/module/nbd ]] 00:11:43.465 21:34:03 -- bdev/blockdev.sh@309 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:11:43.465 21:34:03 -- bdev/blockdev.sh@309 -- # local nbd_all 00:11:43.465 21:34:03 -- bdev/blockdev.sh@310 -- # bdev_num=16 00:11:43.465 21:34:03 -- bdev/blockdev.sh@312 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:11:43.465 21:34:03 -- bdev/blockdev.sh@312 -- # local nbd_list 00:11:43.465 21:34:03 -- bdev/blockdev.sh@313 -- # bdev_list=('Malloc0' 'Malloc1p0' 'Malloc1p1' 'Malloc2p0' 'Malloc2p1' 'Malloc2p2' 'Malloc2p3' 'Malloc2p4' 'Malloc2p5' 'Malloc2p6' 'Malloc2p7' 'TestPT' 'raid0' 'concat0' 
'raid1' 'AIO0') 00:11:43.465 21:34:03 -- bdev/blockdev.sh@313 -- # local bdev_list 00:11:43.465 21:34:03 -- bdev/blockdev.sh@316 -- # nbd_pid=65643 00:11:43.465 21:34:03 -- bdev/blockdev.sh@317 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:11:43.465 21:34:03 -- bdev/blockdev.sh@318 -- # waitforlisten 65643 /var/tmp/spdk-nbd.sock 00:11:43.465 21:34:03 -- common/autotest_common.sh@829 -- # '[' -z 65643 ']' 00:11:43.465 21:34:03 -- bdev/blockdev.sh@315 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:11:43.465 21:34:03 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:11:43.465 21:34:03 -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:43.465 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:11:43.465 21:34:03 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:11:43.466 21:34:03 -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:43.466 21:34:03 -- common/autotest_common.sh@10 -- # set +x 00:11:43.466 [2024-12-06 21:34:03.593429] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:11:43.466 [2024-12-06 21:34:03.593620] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:43.466 [2024-12-06 21:34:03.766977] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:43.466 [2024-12-06 21:34:03.935786] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:44.035 [2024-12-06 21:34:04.249723] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:11:44.035 [2024-12-06 21:34:04.249857] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:11:44.035 [2024-12-06 21:34:04.257691] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:11:44.035 [2024-12-06 21:34:04.257746] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:11:44.035 [2024-12-06 21:34:04.265711] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:11:44.035 [2024-12-06 21:34:04.265759] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:11:44.035 [2024-12-06 21:34:04.265775] vbdev_passthru.c: 731:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:11:44.035 [2024-12-06 21:34:04.443467] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:11:44.035 [2024-12-06 21:34:04.443566] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:44.035 [2024-12-06 21:34:04.443592] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000009980 00:11:44.035 [2024-12-06 21:34:04.443605] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:44.035 [2024-12-06 21:34:04.446290] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:44.035 [2024-12-06 21:34:04.446348] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT 00:11:44.971 21:34:05 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:44.971 21:34:05 -- 
common/autotest_common.sh@862 -- # return 0 00:11:44.971 21:34:05 -- bdev/blockdev.sh@320 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1p0 Malloc1p1 Malloc2p0 Malloc2p1 Malloc2p2 Malloc2p3 Malloc2p4 Malloc2p5 Malloc2p6 Malloc2p7 TestPT raid0 concat0 raid1 AIO0' 00:11:44.971 21:34:05 -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:44.971 21:34:05 -- bdev/nbd_common.sh@114 -- # bdev_list=('Malloc0' 'Malloc1p0' 'Malloc1p1' 'Malloc2p0' 'Malloc2p1' 'Malloc2p2' 'Malloc2p3' 'Malloc2p4' 'Malloc2p5' 'Malloc2p6' 'Malloc2p7' 'TestPT' 'raid0' 'concat0' 'raid1' 'AIO0') 00:11:44.971 21:34:05 -- bdev/nbd_common.sh@114 -- # local bdev_list 00:11:44.971 21:34:05 -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1p0 Malloc1p1 Malloc2p0 Malloc2p1 Malloc2p2 Malloc2p3 Malloc2p4 Malloc2p5 Malloc2p6 Malloc2p7 TestPT raid0 concat0 raid1 AIO0' 00:11:44.971 21:34:05 -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:44.971 21:34:05 -- bdev/nbd_common.sh@23 -- # bdev_list=('Malloc0' 'Malloc1p0' 'Malloc1p1' 'Malloc2p0' 'Malloc2p1' 'Malloc2p2' 'Malloc2p3' 'Malloc2p4' 'Malloc2p5' 'Malloc2p6' 'Malloc2p7' 'TestPT' 'raid0' 'concat0' 'raid1' 'AIO0') 00:11:44.971 21:34:05 -- bdev/nbd_common.sh@23 -- # local bdev_list 00:11:44.971 21:34:05 -- bdev/nbd_common.sh@24 -- # local i 00:11:44.971 21:34:05 -- bdev/nbd_common.sh@25 -- # local nbd_device 00:11:44.971 21:34:05 -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:11:44.971 21:34:05 -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:11:44.971 21:34:05 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 00:11:45.229 21:34:05 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:11:45.229 21:34:05 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:11:45.229 21:34:05 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:11:45.229 21:34:05 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:11:45.229 21:34:05 -- common/autotest_common.sh@867 -- # local i 00:11:45.229 21:34:05 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:11:45.229 21:34:05 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:11:45.229 21:34:05 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:11:45.229 21:34:05 -- common/autotest_common.sh@871 -- # break 00:11:45.229 21:34:05 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:11:45.229 21:34:05 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:11:45.229 21:34:05 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:45.229 1+0 records in 00:11:45.229 1+0 records out 00:11:45.229 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000231078 s, 17.7 MB/s 00:11:45.229 21:34:05 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:45.229 21:34:05 -- common/autotest_common.sh@884 -- # size=4096 00:11:45.229 21:34:05 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:45.229 21:34:05 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:11:45.229 21:34:05 -- common/autotest_common.sh@887 -- # return 0 00:11:45.229 21:34:05 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:11:45.229 21:34:05 -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:11:45.229 21:34:05 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk 
Malloc1p0 00:11:45.487 21:34:05 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:11:45.487 21:34:05 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:11:45.487 21:34:05 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:11:45.487 21:34:05 -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:11:45.487 21:34:05 -- common/autotest_common.sh@867 -- # local i 00:11:45.487 21:34:05 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:11:45.487 21:34:05 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:11:45.487 21:34:05 -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:11:45.487 21:34:05 -- common/autotest_common.sh@871 -- # break 00:11:45.487 21:34:05 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:11:45.487 21:34:05 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:11:45.487 21:34:05 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:45.487 1+0 records in 00:11:45.487 1+0 records out 00:11:45.487 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000238351 s, 17.2 MB/s 00:11:45.487 21:34:05 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:45.487 21:34:05 -- common/autotest_common.sh@884 -- # size=4096 00:11:45.487 21:34:05 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:45.487 21:34:05 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:11:45.487 21:34:05 -- common/autotest_common.sh@887 -- # return 0 00:11:45.487 21:34:05 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:11:45.487 21:34:05 -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:11:45.487 21:34:05 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1p1 00:11:45.745 21:34:06 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:11:45.745 21:34:06 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:11:45.745 21:34:06 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:11:45.745 21:34:06 -- common/autotest_common.sh@866 -- # local nbd_name=nbd2 00:11:45.745 21:34:06 -- common/autotest_common.sh@867 -- # local i 00:11:45.745 21:34:06 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:11:45.745 21:34:06 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:11:45.745 21:34:06 -- common/autotest_common.sh@870 -- # grep -q -w nbd2 /proc/partitions 00:11:45.745 21:34:06 -- common/autotest_common.sh@871 -- # break 00:11:45.745 21:34:06 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:11:45.745 21:34:06 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:11:45.745 21:34:06 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:45.745 1+0 records in 00:11:45.745 1+0 records out 00:11:45.745 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000318599 s, 12.9 MB/s 00:11:45.745 21:34:06 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:45.745 21:34:06 -- common/autotest_common.sh@884 -- # size=4096 00:11:45.745 21:34:06 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:45.745 21:34:06 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:11:45.745 21:34:06 -- common/autotest_common.sh@887 -- # return 0 00:11:45.745 21:34:06 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:11:45.745 21:34:06 -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:11:45.745 21:34:06 -- bdev/nbd_common.sh@28 
-- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p0 00:11:46.003 21:34:06 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:11:46.003 21:34:06 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:11:46.003 21:34:06 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:11:46.003 21:34:06 -- common/autotest_common.sh@866 -- # local nbd_name=nbd3 00:11:46.003 21:34:06 -- common/autotest_common.sh@867 -- # local i 00:11:46.003 21:34:06 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:11:46.003 21:34:06 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:11:46.003 21:34:06 -- common/autotest_common.sh@870 -- # grep -q -w nbd3 /proc/partitions 00:11:46.003 21:34:06 -- common/autotest_common.sh@871 -- # break 00:11:46.003 21:34:06 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:11:46.003 21:34:06 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:11:46.003 21:34:06 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:46.003 1+0 records in 00:11:46.003 1+0 records out 00:11:46.003 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000453346 s, 9.0 MB/s 00:11:46.003 21:34:06 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:46.003 21:34:06 -- common/autotest_common.sh@884 -- # size=4096 00:11:46.003 21:34:06 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:46.003 21:34:06 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:11:46.003 21:34:06 -- common/autotest_common.sh@887 -- # return 0 00:11:46.003 21:34:06 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:11:46.003 21:34:06 -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:11:46.003 21:34:06 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p1 00:11:46.262 21:34:06 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:11:46.262 21:34:06 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:11:46.262 21:34:06 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:11:46.262 21:34:06 -- common/autotest_common.sh@866 -- # local nbd_name=nbd4 00:11:46.262 21:34:06 -- common/autotest_common.sh@867 -- # local i 00:11:46.262 21:34:06 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:11:46.262 21:34:06 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:11:46.262 21:34:06 -- common/autotest_common.sh@870 -- # grep -q -w nbd4 /proc/partitions 00:11:46.262 21:34:06 -- common/autotest_common.sh@871 -- # break 00:11:46.262 21:34:06 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:11:46.262 21:34:06 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:11:46.262 21:34:06 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:46.262 1+0 records in 00:11:46.262 1+0 records out 00:11:46.262 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000704695 s, 5.8 MB/s 00:11:46.262 21:34:06 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:46.262 21:34:06 -- common/autotest_common.sh@884 -- # size=4096 00:11:46.262 21:34:06 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:46.262 21:34:06 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:11:46.262 21:34:06 -- common/autotest_common.sh@887 -- # return 0 00:11:46.262 21:34:06 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:11:46.262 21:34:06 -- 
bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:11:46.262 21:34:06 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p2 00:11:46.520 21:34:06 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:11:46.520 21:34:06 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:11:46.520 21:34:06 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:11:46.520 21:34:06 -- common/autotest_common.sh@866 -- # local nbd_name=nbd5 00:11:46.520 21:34:06 -- common/autotest_common.sh@867 -- # local i 00:11:46.520 21:34:06 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:11:46.520 21:34:06 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:11:46.520 21:34:06 -- common/autotest_common.sh@870 -- # grep -q -w nbd5 /proc/partitions 00:11:46.520 21:34:06 -- common/autotest_common.sh@871 -- # break 00:11:46.520 21:34:06 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:11:46.520 21:34:06 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:11:46.520 21:34:06 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:46.520 1+0 records in 00:11:46.520 1+0 records out 00:11:46.520 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000529333 s, 7.7 MB/s 00:11:46.520 21:34:06 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:46.520 21:34:06 -- common/autotest_common.sh@884 -- # size=4096 00:11:46.520 21:34:06 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:46.520 21:34:06 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:11:46.520 21:34:06 -- common/autotest_common.sh@887 -- # return 0 00:11:46.520 21:34:06 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:11:46.520 21:34:06 -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:11:46.520 21:34:06 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p3 00:11:46.778 21:34:07 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd6 00:11:46.778 21:34:07 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd6 00:11:46.778 21:34:07 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd6 00:11:46.778 21:34:07 -- common/autotest_common.sh@866 -- # local nbd_name=nbd6 00:11:46.778 21:34:07 -- common/autotest_common.sh@867 -- # local i 00:11:46.778 21:34:07 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:11:46.778 21:34:07 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:11:46.778 21:34:07 -- common/autotest_common.sh@870 -- # grep -q -w nbd6 /proc/partitions 00:11:46.778 21:34:07 -- common/autotest_common.sh@871 -- # break 00:11:46.778 21:34:07 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:11:46.778 21:34:07 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:11:46.779 21:34:07 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd6 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:46.779 1+0 records in 00:11:46.779 1+0 records out 00:11:46.779 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00067495 s, 6.1 MB/s 00:11:46.779 21:34:07 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:46.779 21:34:07 -- common/autotest_common.sh@884 -- # size=4096 00:11:46.779 21:34:07 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:46.779 21:34:07 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:11:46.779 21:34:07 -- common/autotest_common.sh@887 -- # return 0 
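[Annotation] The xtrace entries above repeat one readiness probe per NBD device: loop until the kernel lists the device in /proc/partitions, then loop until a 4 KiB O_DIRECT read out of it returns real data. A minimal sketch of that helper, with names and line structure taken from the common/autotest_common.sh entries traced here — the sleep between retries is an assumption, since xtrace only echoes commands that actually run:

    waitfornbd() {
        local nbd_name=$1 i size
        for ((i = 1; i <= 20; i++)); do
            # ready once the kernel exports the device as a partition
            grep -q -w "$nbd_name" /proc/partitions && break
            sleep 0.1    # assumed back-off; not visible in the trace
        done
        for ((i = 1; i <= 20; i++)); do
            # ...and once a 4 KiB direct-I/O read returns a non-empty file
            dd if=/dev/$nbd_name of=/tmp/nbdtest bs=4096 count=1 iflag=direct
            size=$(stat -c %s /tmp/nbdtest)
            rm -f /tmp/nbdtest
            [ "$size" != 0 ] && return 0
        done
        return 1
    }

In this run every device passes on the first iteration of both loops, which is why each probe traces as a single grep, dd, stat, rm sequence.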
00:11:46.779 21:34:07 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:11:46.779 21:34:07 -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:11:46.779 21:34:07 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p4 00:11:47.037 21:34:07 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd7 00:11:47.037 21:34:07 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd7 00:11:47.037 21:34:07 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd7 00:11:47.037 21:34:07 -- common/autotest_common.sh@866 -- # local nbd_name=nbd7 00:11:47.037 21:34:07 -- common/autotest_common.sh@867 -- # local i 00:11:47.037 21:34:07 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:11:47.037 21:34:07 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:11:47.037 21:34:07 -- common/autotest_common.sh@870 -- # grep -q -w nbd7 /proc/partitions 00:11:47.037 21:34:07 -- common/autotest_common.sh@871 -- # break 00:11:47.037 21:34:07 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:11:47.037 21:34:07 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:11:47.037 21:34:07 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd7 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:47.037 1+0 records in 00:11:47.037 1+0 records out 00:11:47.037 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00048837 s, 8.4 MB/s 00:11:47.037 21:34:07 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:47.037 21:34:07 -- common/autotest_common.sh@884 -- # size=4096 00:11:47.037 21:34:07 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:47.037 21:34:07 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:11:47.037 21:34:07 -- common/autotest_common.sh@887 -- # return 0 00:11:47.037 21:34:07 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:11:47.037 21:34:07 -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:11:47.037 21:34:07 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p5 00:11:47.295 21:34:07 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd8 00:11:47.295 21:34:07 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd8 00:11:47.295 21:34:07 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd8 00:11:47.295 21:34:07 -- common/autotest_common.sh@866 -- # local nbd_name=nbd8 00:11:47.295 21:34:07 -- common/autotest_common.sh@867 -- # local i 00:11:47.295 21:34:07 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:11:47.295 21:34:07 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:11:47.295 21:34:07 -- common/autotest_common.sh@870 -- # grep -q -w nbd8 /proc/partitions 00:11:47.295 21:34:07 -- common/autotest_common.sh@871 -- # break 00:11:47.295 21:34:07 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:11:47.295 21:34:07 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:11:47.295 21:34:07 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd8 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:47.295 1+0 records in 00:11:47.295 1+0 records out 00:11:47.295 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000408421 s, 10.0 MB/s 00:11:47.295 21:34:07 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:47.295 21:34:07 -- common/autotest_common.sh@884 -- # size=4096 00:11:47.295 21:34:07 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:47.295 21:34:07 -- common/autotest_common.sh@886 -- # 
'[' 4096 '!=' 0 ']' 00:11:47.295 21:34:07 -- common/autotest_common.sh@887 -- # return 0 00:11:47.295 21:34:07 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:11:47.295 21:34:07 -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:11:47.295 21:34:07 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p6 00:11:47.553 21:34:07 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd9 00:11:47.553 21:34:07 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd9 00:11:47.553 21:34:07 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd9 00:11:47.553 21:34:07 -- common/autotest_common.sh@866 -- # local nbd_name=nbd9 00:11:47.553 21:34:07 -- common/autotest_common.sh@867 -- # local i 00:11:47.553 21:34:07 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:11:47.553 21:34:07 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:11:47.553 21:34:07 -- common/autotest_common.sh@870 -- # grep -q -w nbd9 /proc/partitions 00:11:47.553 21:34:07 -- common/autotest_common.sh@871 -- # break 00:11:47.553 21:34:07 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:11:47.553 21:34:07 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:11:47.553 21:34:07 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd9 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:47.553 1+0 records in 00:11:47.553 1+0 records out 00:11:47.553 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000677917 s, 6.0 MB/s 00:11:47.553 21:34:07 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:47.553 21:34:07 -- common/autotest_common.sh@884 -- # size=4096 00:11:47.553 21:34:07 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:47.553 21:34:07 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:11:47.553 21:34:07 -- common/autotest_common.sh@887 -- # return 0 00:11:47.553 21:34:07 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:11:47.553 21:34:07 -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:11:47.553 21:34:07 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p7 00:11:47.810 21:34:08 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd10 00:11:47.811 21:34:08 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd10 00:11:47.811 21:34:08 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd10 00:11:47.811 21:34:08 -- common/autotest_common.sh@866 -- # local nbd_name=nbd10 00:11:47.811 21:34:08 -- common/autotest_common.sh@867 -- # local i 00:11:47.811 21:34:08 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:11:47.811 21:34:08 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:11:47.811 21:34:08 -- common/autotest_common.sh@870 -- # grep -q -w nbd10 /proc/partitions 00:11:47.811 21:34:08 -- common/autotest_common.sh@871 -- # break 00:11:47.811 21:34:08 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:11:47.811 21:34:08 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:11:47.811 21:34:08 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:47.811 1+0 records in 00:11:47.811 1+0 records out 00:11:47.811 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000883802 s, 4.6 MB/s 00:11:47.811 21:34:08 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:47.811 21:34:08 -- common/autotest_common.sh@884 -- # size=4096 00:11:47.811 21:34:08 -- common/autotest_common.sh@885 -- # rm -f 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:47.811 21:34:08 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:11:47.811 21:34:08 -- common/autotest_common.sh@887 -- # return 0 00:11:47.811 21:34:08 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:11:47.811 21:34:08 -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:11:47.811 21:34:08 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk TestPT 00:11:48.068 21:34:08 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd11 00:11:48.068 21:34:08 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd11 00:11:48.068 21:34:08 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd11 00:11:48.068 21:34:08 -- common/autotest_common.sh@866 -- # local nbd_name=nbd11 00:11:48.068 21:34:08 -- common/autotest_common.sh@867 -- # local i 00:11:48.068 21:34:08 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:11:48.068 21:34:08 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:11:48.068 21:34:08 -- common/autotest_common.sh@870 -- # grep -q -w nbd11 /proc/partitions 00:11:48.068 21:34:08 -- common/autotest_common.sh@871 -- # break 00:11:48.068 21:34:08 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:11:48.068 21:34:08 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:11:48.068 21:34:08 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:48.068 1+0 records in 00:11:48.068 1+0 records out 00:11:48.068 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000657306 s, 6.2 MB/s 00:11:48.068 21:34:08 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:48.068 21:34:08 -- common/autotest_common.sh@884 -- # size=4096 00:11:48.068 21:34:08 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:48.068 21:34:08 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:11:48.068 21:34:08 -- common/autotest_common.sh@887 -- # return 0 00:11:48.068 21:34:08 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:11:48.068 21:34:08 -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:11:48.068 21:34:08 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid0 00:11:48.326 21:34:08 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd12 00:11:48.326 21:34:08 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd12 00:11:48.326 21:34:08 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd12 00:11:48.326 21:34:08 -- common/autotest_common.sh@866 -- # local nbd_name=nbd12 00:11:48.326 21:34:08 -- common/autotest_common.sh@867 -- # local i 00:11:48.326 21:34:08 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:11:48.326 21:34:08 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:11:48.326 21:34:08 -- common/autotest_common.sh@870 -- # grep -q -w nbd12 /proc/partitions 00:11:48.326 21:34:08 -- common/autotest_common.sh@871 -- # break 00:11:48.326 21:34:08 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:11:48.326 21:34:08 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:11:48.326 21:34:08 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:48.326 1+0 records in 00:11:48.326 1+0 records out 00:11:48.326 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000690421 s, 5.9 MB/s 00:11:48.326 21:34:08 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:48.584 21:34:08 -- 
common/autotest_common.sh@884 -- # size=4096 00:11:48.584 21:34:08 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:48.584 21:34:08 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:11:48.584 21:34:08 -- common/autotest_common.sh@887 -- # return 0 00:11:48.584 21:34:08 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:11:48.584 21:34:08 -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:11:48.584 21:34:08 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk concat0 00:11:48.842 21:34:09 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd13 00:11:48.842 21:34:09 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd13 00:11:48.842 21:34:09 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd13 00:11:48.842 21:34:09 -- common/autotest_common.sh@866 -- # local nbd_name=nbd13 00:11:48.842 21:34:09 -- common/autotest_common.sh@867 -- # local i 00:11:48.842 21:34:09 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:11:48.842 21:34:09 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:11:48.842 21:34:09 -- common/autotest_common.sh@870 -- # grep -q -w nbd13 /proc/partitions 00:11:48.842 21:34:09 -- common/autotest_common.sh@871 -- # break 00:11:48.842 21:34:09 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:11:48.842 21:34:09 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:11:48.842 21:34:09 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:48.842 1+0 records in 00:11:48.842 1+0 records out 00:11:48.842 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000935764 s, 4.4 MB/s 00:11:48.842 21:34:09 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:48.842 21:34:09 -- common/autotest_common.sh@884 -- # size=4096 00:11:48.842 21:34:09 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:48.842 21:34:09 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:11:48.842 21:34:09 -- common/autotest_common.sh@887 -- # return 0 00:11:48.842 21:34:09 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:11:48.842 21:34:09 -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:11:48.842 21:34:09 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid1 00:11:48.842 21:34:09 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd14 00:11:48.842 21:34:09 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd14 00:11:48.842 21:34:09 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd14 00:11:48.842 21:34:09 -- common/autotest_common.sh@866 -- # local nbd_name=nbd14 00:11:48.842 21:34:09 -- common/autotest_common.sh@867 -- # local i 00:11:48.842 21:34:09 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:11:48.842 21:34:09 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:11:48.842 21:34:09 -- common/autotest_common.sh@870 -- # grep -q -w nbd14 /proc/partitions 00:11:48.842 21:34:09 -- common/autotest_common.sh@871 -- # break 00:11:48.842 21:34:09 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:11:48.842 21:34:09 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:11:48.842 21:34:09 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd14 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:48.842 1+0 records in 00:11:48.842 1+0 records out 00:11:48.842 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000611611 s, 6.7 MB/s 00:11:48.842 21:34:09 -- 
common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:48.842 21:34:09 -- common/autotest_common.sh@884 -- # size=4096 00:11:48.842 21:34:09 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:48.842 21:34:09 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:11:48.842 21:34:09 -- common/autotest_common.sh@887 -- # return 0 00:11:48.842 21:34:09 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:11:48.842 21:34:09 -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:11:48.842 21:34:09 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk AIO0 00:11:49.100 21:34:09 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd15 00:11:49.100 21:34:09 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd15 00:11:49.100 21:34:09 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd15 00:11:49.100 21:34:09 -- common/autotest_common.sh@866 -- # local nbd_name=nbd15 00:11:49.100 21:34:09 -- common/autotest_common.sh@867 -- # local i 00:11:49.100 21:34:09 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:11:49.100 21:34:09 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:11:49.100 21:34:09 -- common/autotest_common.sh@870 -- # grep -q -w nbd15 /proc/partitions 00:11:49.100 21:34:09 -- common/autotest_common.sh@871 -- # break 00:11:49.100 21:34:09 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:11:49.100 21:34:09 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:11:49.100 21:34:09 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd15 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:49.100 1+0 records in 00:11:49.100 1+0 records out 00:11:49.100 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00118927 s, 3.4 MB/s 00:11:49.100 21:34:09 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:49.100 21:34:09 -- common/autotest_common.sh@884 -- # size=4096 00:11:49.100 21:34:09 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:49.100 21:34:09 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:11:49.100 21:34:09 -- common/autotest_common.sh@887 -- # return 0 00:11:49.100 21:34:09 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:11:49.100 21:34:09 -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:11:49.100 21:34:09 -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:11:49.357 21:34:09 -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:11:49.357 { 00:11:49.357 "nbd_device": "/dev/nbd0", 00:11:49.357 "bdev_name": "Malloc0" 00:11:49.357 }, 00:11:49.357 { 00:11:49.357 "nbd_device": "/dev/nbd1", 00:11:49.357 "bdev_name": "Malloc1p0" 00:11:49.357 }, 00:11:49.357 { 00:11:49.357 "nbd_device": "/dev/nbd2", 00:11:49.357 "bdev_name": "Malloc1p1" 00:11:49.357 }, 00:11:49.357 { 00:11:49.357 "nbd_device": "/dev/nbd3", 00:11:49.357 "bdev_name": "Malloc2p0" 00:11:49.357 }, 00:11:49.357 { 00:11:49.357 "nbd_device": "/dev/nbd4", 00:11:49.357 "bdev_name": "Malloc2p1" 00:11:49.357 }, 00:11:49.357 { 00:11:49.357 "nbd_device": "/dev/nbd5", 00:11:49.357 "bdev_name": "Malloc2p2" 00:11:49.357 }, 00:11:49.357 { 00:11:49.357 "nbd_device": "/dev/nbd6", 00:11:49.357 "bdev_name": "Malloc2p3" 00:11:49.357 }, 00:11:49.357 { 00:11:49.357 "nbd_device": "/dev/nbd7", 00:11:49.357 "bdev_name": "Malloc2p4" 00:11:49.357 }, 00:11:49.357 { 00:11:49.357 "nbd_device": "/dev/nbd8", 00:11:49.357 "bdev_name": "Malloc2p5" 
00:11:49.357 }, 00:11:49.357 { 00:11:49.357 "nbd_device": "/dev/nbd9", 00:11:49.357 "bdev_name": "Malloc2p6" 00:11:49.357 }, 00:11:49.357 { 00:11:49.357 "nbd_device": "/dev/nbd10", 00:11:49.357 "bdev_name": "Malloc2p7" 00:11:49.357 }, 00:11:49.357 { 00:11:49.357 "nbd_device": "/dev/nbd11", 00:11:49.357 "bdev_name": "TestPT" 00:11:49.357 }, 00:11:49.357 { 00:11:49.357 "nbd_device": "/dev/nbd12", 00:11:49.357 "bdev_name": "raid0" 00:11:49.357 }, 00:11:49.357 { 00:11:49.357 "nbd_device": "/dev/nbd13", 00:11:49.357 "bdev_name": "concat0" 00:11:49.357 }, 00:11:49.357 { 00:11:49.357 "nbd_device": "/dev/nbd14", 00:11:49.357 "bdev_name": "raid1" 00:11:49.357 }, 00:11:49.357 { 00:11:49.357 "nbd_device": "/dev/nbd15", 00:11:49.357 "bdev_name": "AIO0" 00:11:49.357 } 00:11:49.358 ]' 00:11:49.358 21:34:09 -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:11:49.358 21:34:09 -- bdev/nbd_common.sh@119 -- # echo '[ 00:11:49.358 { 00:11:49.358 "nbd_device": "/dev/nbd0", 00:11:49.358 "bdev_name": "Malloc0" 00:11:49.358 }, 00:11:49.358 { 00:11:49.358 "nbd_device": "/dev/nbd1", 00:11:49.358 "bdev_name": "Malloc1p0" 00:11:49.358 }, 00:11:49.358 { 00:11:49.358 "nbd_device": "/dev/nbd2", 00:11:49.358 "bdev_name": "Malloc1p1" 00:11:49.358 }, 00:11:49.358 { 00:11:49.358 "nbd_device": "/dev/nbd3", 00:11:49.358 "bdev_name": "Malloc2p0" 00:11:49.358 }, 00:11:49.358 { 00:11:49.358 "nbd_device": "/dev/nbd4", 00:11:49.358 "bdev_name": "Malloc2p1" 00:11:49.358 }, 00:11:49.358 { 00:11:49.358 "nbd_device": "/dev/nbd5", 00:11:49.358 "bdev_name": "Malloc2p2" 00:11:49.358 }, 00:11:49.358 { 00:11:49.358 "nbd_device": "/dev/nbd6", 00:11:49.358 "bdev_name": "Malloc2p3" 00:11:49.358 }, 00:11:49.358 { 00:11:49.358 "nbd_device": "/dev/nbd7", 00:11:49.358 "bdev_name": "Malloc2p4" 00:11:49.358 }, 00:11:49.358 { 00:11:49.358 "nbd_device": "/dev/nbd8", 00:11:49.358 "bdev_name": "Malloc2p5" 00:11:49.358 }, 00:11:49.358 { 00:11:49.358 "nbd_device": "/dev/nbd9", 00:11:49.358 "bdev_name": "Malloc2p6" 00:11:49.358 }, 00:11:49.358 { 00:11:49.358 "nbd_device": "/dev/nbd10", 00:11:49.358 "bdev_name": "Malloc2p7" 00:11:49.358 }, 00:11:49.358 { 00:11:49.358 "nbd_device": "/dev/nbd11", 00:11:49.358 "bdev_name": "TestPT" 00:11:49.358 }, 00:11:49.358 { 00:11:49.358 "nbd_device": "/dev/nbd12", 00:11:49.358 "bdev_name": "raid0" 00:11:49.358 }, 00:11:49.358 { 00:11:49.358 "nbd_device": "/dev/nbd13", 00:11:49.358 "bdev_name": "concat0" 00:11:49.358 }, 00:11:49.358 { 00:11:49.358 "nbd_device": "/dev/nbd14", 00:11:49.358 "bdev_name": "raid1" 00:11:49.358 }, 00:11:49.358 { 00:11:49.358 "nbd_device": "/dev/nbd15", 00:11:49.358 "bdev_name": "AIO0" 00:11:49.358 } 00:11:49.358 ]' 00:11:49.358 21:34:09 -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:11:49.358 21:34:09 -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6 /dev/nbd7 /dev/nbd8 /dev/nbd9 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14 /dev/nbd15' 00:11:49.358 21:34:09 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:49.358 21:34:09 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15') 00:11:49.358 21:34:09 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:11:49.358 21:34:09 -- bdev/nbd_common.sh@51 -- # local i 00:11:49.358 21:34:09 
-- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:49.358 21:34:09 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:11:49.616 21:34:10 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:11:49.616 21:34:10 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:11:49.616 21:34:10 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:11:49.616 21:34:10 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:49.616 21:34:10 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:49.616 21:34:10 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:11:49.616 21:34:10 -- bdev/nbd_common.sh@41 -- # break 00:11:49.616 21:34:10 -- bdev/nbd_common.sh@45 -- # return 0 00:11:49.616 21:34:10 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:49.616 21:34:10 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:11:49.874 21:34:10 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:11:49.874 21:34:10 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:11:49.874 21:34:10 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:11:49.874 21:34:10 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:49.874 21:34:10 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:49.874 21:34:10 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:11:49.874 21:34:10 -- bdev/nbd_common.sh@41 -- # break 00:11:49.874 21:34:10 -- bdev/nbd_common.sh@45 -- # return 0 00:11:49.874 21:34:10 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:49.874 21:34:10 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:11:50.131 21:34:10 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:11:50.131 21:34:10 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:11:50.131 21:34:10 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:11:50.131 21:34:10 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:50.131 21:34:10 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:50.131 21:34:10 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:11:50.131 21:34:10 -- bdev/nbd_common.sh@41 -- # break 00:11:50.131 21:34:10 -- bdev/nbd_common.sh@45 -- # return 0 00:11:50.131 21:34:10 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:50.131 21:34:10 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:11:50.388 21:34:10 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:11:50.388 21:34:10 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:11:50.388 21:34:10 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:11:50.388 21:34:10 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:50.388 21:34:10 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:50.388 21:34:10 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:11:50.388 21:34:10 -- bdev/nbd_common.sh@41 -- # break 00:11:50.388 21:34:10 -- bdev/nbd_common.sh@45 -- # return 0 00:11:50.388 21:34:10 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:50.388 21:34:10 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:11:50.645 21:34:11 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:11:50.645 21:34:11 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:11:50.645 21:34:11 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:11:50.645 21:34:11 -- 
bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:50.645 21:34:11 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:50.645 21:34:11 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:11:50.645 21:34:11 -- bdev/nbd_common.sh@41 -- # break 00:11:50.645 21:34:11 -- bdev/nbd_common.sh@45 -- # return 0 00:11:50.645 21:34:11 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:50.645 21:34:11 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:11:50.902 21:34:11 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:11:50.902 21:34:11 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:11:50.902 21:34:11 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:11:50.902 21:34:11 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:50.902 21:34:11 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:50.902 21:34:11 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:11:50.902 21:34:11 -- bdev/nbd_common.sh@41 -- # break 00:11:50.902 21:34:11 -- bdev/nbd_common.sh@45 -- # return 0 00:11:50.902 21:34:11 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:50.902 21:34:11 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd6 00:11:51.159 21:34:11 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd6 00:11:51.159 21:34:11 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd6 00:11:51.159 21:34:11 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd6 00:11:51.159 21:34:11 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:51.159 21:34:11 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:51.159 21:34:11 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd6 /proc/partitions 00:11:51.159 21:34:11 -- bdev/nbd_common.sh@41 -- # break 00:11:51.159 21:34:11 -- bdev/nbd_common.sh@45 -- # return 0 00:11:51.159 21:34:11 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:51.159 21:34:11 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd7 00:11:51.417 21:34:11 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd7 00:11:51.418 21:34:11 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd7 00:11:51.418 21:34:11 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd7 00:11:51.418 21:34:11 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:51.418 21:34:11 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:51.418 21:34:11 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd7 /proc/partitions 00:11:51.418 21:34:11 -- bdev/nbd_common.sh@41 -- # break 00:11:51.418 21:34:11 -- bdev/nbd_common.sh@45 -- # return 0 00:11:51.418 21:34:11 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:51.418 21:34:11 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd8 00:11:51.676 21:34:11 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd8 00:11:51.676 21:34:11 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd8 00:11:51.676 21:34:11 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd8 00:11:51.676 21:34:11 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:51.676 21:34:11 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:51.676 21:34:11 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd8 /proc/partitions 00:11:51.676 21:34:11 -- bdev/nbd_common.sh@41 -- # break 00:11:51.676 21:34:11 -- bdev/nbd_common.sh@45 -- # return 0 00:11:51.676 21:34:11 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:51.676 21:34:11 -- bdev/nbd_common.sh@54 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd9 00:11:51.934 21:34:12 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd9 00:11:51.934 21:34:12 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd9 00:11:51.934 21:34:12 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd9 00:11:51.934 21:34:12 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:51.934 21:34:12 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:51.934 21:34:12 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd9 /proc/partitions 00:11:51.934 21:34:12 -- bdev/nbd_common.sh@41 -- # break 00:11:51.934 21:34:12 -- bdev/nbd_common.sh@45 -- # return 0 00:11:51.934 21:34:12 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:51.934 21:34:12 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:11:52.192 21:34:12 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:11:52.192 21:34:12 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:11:52.192 21:34:12 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:11:52.192 21:34:12 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:52.192 21:34:12 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:52.192 21:34:12 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:11:52.192 21:34:12 -- bdev/nbd_common.sh@41 -- # break 00:11:52.192 21:34:12 -- bdev/nbd_common.sh@45 -- # return 0 00:11:52.192 21:34:12 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:52.192 21:34:12 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:11:52.192 21:34:12 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:11:52.192 21:34:12 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:11:52.192 21:34:12 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:11:52.192 21:34:12 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:52.192 21:34:12 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:52.192 21:34:12 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:11:52.192 21:34:12 -- bdev/nbd_common.sh@41 -- # break 00:11:52.192 21:34:12 -- bdev/nbd_common.sh@45 -- # return 0 00:11:52.192 21:34:12 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:52.192 21:34:12 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:11:52.450 21:34:12 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:11:52.450 21:34:12 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:11:52.450 21:34:12 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:11:52.450 21:34:12 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:52.450 21:34:12 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:52.450 21:34:12 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:11:52.450 21:34:12 -- bdev/nbd_common.sh@41 -- # break 00:11:52.450 21:34:12 -- bdev/nbd_common.sh@45 -- # return 0 00:11:52.450 21:34:12 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:52.707 21:34:12 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:11:52.707 21:34:13 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:11:52.707 21:34:13 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:11:52.707 21:34:13 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:11:52.707 21:34:13 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:52.707 21:34:13 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 
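[Annotation] From 21:34:09 onward the harness tears the devices back down: one nbd_stop_disk RPC per device, each followed by waitfornbd_exit, the inverse probe that polls /proc/partitions until the name disappears. Sketched from the nbd_common.sh lines traced here (the retry delay is again an assumption):

    waitfornbd_exit() {
        local nbd_name=$1 i
        for ((i = 1; i <= 20; i++)); do
            # done once the kernel no longer lists the device
            grep -q -w "$nbd_name" /proc/partitions || break
            sleep 0.1    # assumed; xtrace does not echo it
        done
        return 0
    }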
00:11:52.707 21:34:13 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:11:52.707 21:34:13 -- bdev/nbd_common.sh@41 -- # break 00:11:52.707 21:34:13 -- bdev/nbd_common.sh@45 -- # return 0 00:11:52.707 21:34:13 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:52.708 21:34:13 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd14 00:11:52.965 21:34:13 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd14 00:11:52.966 21:34:13 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd14 00:11:52.966 21:34:13 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd14 00:11:52.966 21:34:13 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:52.966 21:34:13 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:52.966 21:34:13 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd14 /proc/partitions 00:11:52.966 21:34:13 -- bdev/nbd_common.sh@41 -- # break 00:11:52.966 21:34:13 -- bdev/nbd_common.sh@45 -- # return 0 00:11:52.966 21:34:13 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:52.966 21:34:13 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd15 00:11:53.223 21:34:13 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd15 00:11:53.224 21:34:13 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd15 00:11:53.224 21:34:13 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd15 00:11:53.224 21:34:13 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:53.224 21:34:13 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:53.224 21:34:13 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd15 /proc/partitions 00:11:53.224 21:34:13 -- bdev/nbd_common.sh@41 -- # break 00:11:53.224 21:34:13 -- bdev/nbd_common.sh@45 -- # return 0 00:11:53.224 21:34:13 -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:11:53.224 21:34:13 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:53.224 21:34:13 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:11:53.481 21:34:13 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:11:53.481 21:34:13 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:11:53.481 21:34:13 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:11:53.481 21:34:13 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:11:53.481 21:34:13 -- bdev/nbd_common.sh@65 -- # echo '' 00:11:53.481 21:34:13 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:11:53.481 21:34:13 -- bdev/nbd_common.sh@65 -- # true 00:11:53.481 21:34:13 -- bdev/nbd_common.sh@65 -- # count=0 00:11:53.481 21:34:13 -- bdev/nbd_common.sh@66 -- # echo 0 00:11:53.481 21:34:13 -- bdev/nbd_common.sh@122 -- # count=0 00:11:53.481 21:34:13 -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:11:53.481 21:34:13 -- bdev/nbd_common.sh@127 -- # return 0 00:11:53.481 21:34:13 -- bdev/blockdev.sh@321 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1p0 Malloc1p1 Malloc2p0 Malloc2p1 Malloc2p2 Malloc2p3 Malloc2p4 Malloc2p5 Malloc2p6 Malloc2p7 TestPT raid0 concat0 raid1 AIO0' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14 /dev/nbd15 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6 /dev/nbd7 /dev/nbd8 /dev/nbd9' 00:11:53.481 21:34:13 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:53.481 21:34:13 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1p0' 'Malloc1p1' 'Malloc2p0' 'Malloc2p1' 'Malloc2p2' 'Malloc2p3' 'Malloc2p4' 'Malloc2p5' 'Malloc2p6' 'Malloc2p7' 
'TestPT' 'raid0' 'concat0' 'raid1' 'AIO0') 00:11:53.481 21:34:13 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:11:53.481 21:34:13 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:11:53.481 21:34:13 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:11:53.481 21:34:13 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1p0 Malloc1p1 Malloc2p0 Malloc2p1 Malloc2p2 Malloc2p3 Malloc2p4 Malloc2p5 Malloc2p6 Malloc2p7 TestPT raid0 concat0 raid1 AIO0' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14 /dev/nbd15 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6 /dev/nbd7 /dev/nbd8 /dev/nbd9' 00:11:53.481 21:34:13 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:53.481 21:34:13 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1p0' 'Malloc1p1' 'Malloc2p0' 'Malloc2p1' 'Malloc2p2' 'Malloc2p3' 'Malloc2p4' 'Malloc2p5' 'Malloc2p6' 'Malloc2p7' 'TestPT' 'raid0' 'concat0' 'raid1' 'AIO0') 00:11:53.481 21:34:13 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:11:53.481 21:34:13 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:11:53.482 21:34:13 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:11:53.482 21:34:13 -- bdev/nbd_common.sh@12 -- # local i 00:11:53.482 21:34:13 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:11:53.482 21:34:13 -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:11:53.482 21:34:13 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:11:53.739 /dev/nbd0 00:11:53.739 21:34:14 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:11:53.739 21:34:14 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:11:53.739 21:34:14 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:11:53.739 21:34:14 -- common/autotest_common.sh@867 -- # local i 00:11:53.739 21:34:14 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:11:53.739 21:34:14 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:11:53.739 21:34:14 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:11:53.739 21:34:14 -- common/autotest_common.sh@871 -- # break 00:11:53.739 21:34:14 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:11:53.739 21:34:14 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:11:53.739 21:34:14 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:53.739 1+0 records in 00:11:53.739 1+0 records out 00:11:53.739 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000264055 s, 15.5 MB/s 00:11:53.739 21:34:14 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:53.739 21:34:14 -- common/autotest_common.sh@884 -- # size=4096 00:11:53.739 21:34:14 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:53.739 21:34:14 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:11:53.739 21:34:14 -- common/autotest_common.sh@887 -- # return 0 00:11:53.739 21:34:14 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:53.739 21:34:14 -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:11:53.739 21:34:14 
-- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1p0 /dev/nbd1 00:11:53.997 /dev/nbd1 00:11:53.997 21:34:14 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:11:53.997 21:34:14 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:11:53.997 21:34:14 -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:11:53.997 21:34:14 -- common/autotest_common.sh@867 -- # local i 00:11:53.997 21:34:14 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:11:53.997 21:34:14 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:11:53.997 21:34:14 -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:11:53.997 21:34:14 -- common/autotest_common.sh@871 -- # break 00:11:53.997 21:34:14 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:11:53.997 21:34:14 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:11:53.997 21:34:14 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:53.997 1+0 records in 00:11:53.997 1+0 records out 00:11:53.997 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00030786 s, 13.3 MB/s 00:11:53.997 21:34:14 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:53.997 21:34:14 -- common/autotest_common.sh@884 -- # size=4096 00:11:53.997 21:34:14 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:53.997 21:34:14 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:11:53.997 21:34:14 -- common/autotest_common.sh@887 -- # return 0 00:11:53.997 21:34:14 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:53.997 21:34:14 -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:11:53.997 21:34:14 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1p1 /dev/nbd10 00:11:54.255 /dev/nbd10 00:11:54.255 21:34:14 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:11:54.255 21:34:14 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:11:54.255 21:34:14 -- common/autotest_common.sh@866 -- # local nbd_name=nbd10 00:11:54.255 21:34:14 -- common/autotest_common.sh@867 -- # local i 00:11:54.255 21:34:14 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:11:54.255 21:34:14 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:11:54.255 21:34:14 -- common/autotest_common.sh@870 -- # grep -q -w nbd10 /proc/partitions 00:11:54.255 21:34:14 -- common/autotest_common.sh@871 -- # break 00:11:54.255 21:34:14 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:11:54.255 21:34:14 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:11:54.255 21:34:14 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:54.255 1+0 records in 00:11:54.255 1+0 records out 00:11:54.255 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000294563 s, 13.9 MB/s 00:11:54.255 21:34:14 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:54.255 21:34:14 -- common/autotest_common.sh@884 -- # size=4096 00:11:54.255 21:34:14 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:54.255 21:34:14 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:11:54.255 21:34:14 -- common/autotest_common.sh@887 -- # return 0 00:11:54.255 21:34:14 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:54.255 21:34:14 -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 
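[Annotation] Between the two phases (21:34:13) the harness confirms teardown by counting /dev/nbd* entries in the nbd_get_disks output, which is the empty array '[]' at that point; the same RPC is what produced the device-to-bdev JSON table earlier in the log. One way to reproduce that mapping by hand, using the rpc.py path and socket from this run — the arrow formatting in the jq filter is illustrative, the trace itself only extracts .nbd_device:

    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks \
        | jq -r '.[] | "\(.nbd_device) -> \(.bdev_name)"'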
00:11:54.255 21:34:14 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p0 /dev/nbd11 00:11:54.513 /dev/nbd11 00:11:54.513 21:34:14 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:11:54.513 21:34:14 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:11:54.513 21:34:14 -- common/autotest_common.sh@866 -- # local nbd_name=nbd11 00:11:54.513 21:34:14 -- common/autotest_common.sh@867 -- # local i 00:11:54.513 21:34:14 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:11:54.513 21:34:14 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:11:54.513 21:34:14 -- common/autotest_common.sh@870 -- # grep -q -w nbd11 /proc/partitions 00:11:54.513 21:34:14 -- common/autotest_common.sh@871 -- # break 00:11:54.513 21:34:14 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:11:54.513 21:34:14 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:11:54.513 21:34:14 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:54.513 1+0 records in 00:11:54.513 1+0 records out 00:11:54.513 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000401272 s, 10.2 MB/s 00:11:54.513 21:34:14 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:54.513 21:34:14 -- common/autotest_common.sh@884 -- # size=4096 00:11:54.513 21:34:14 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:54.513 21:34:14 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:11:54.513 21:34:14 -- common/autotest_common.sh@887 -- # return 0 00:11:54.513 21:34:14 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:54.513 21:34:14 -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:11:54.513 21:34:14 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p1 /dev/nbd12 00:11:54.781 /dev/nbd12 00:11:54.781 21:34:15 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:11:54.781 21:34:15 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:11:54.781 21:34:15 -- common/autotest_common.sh@866 -- # local nbd_name=nbd12 00:11:54.781 21:34:15 -- common/autotest_common.sh@867 -- # local i 00:11:54.781 21:34:15 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:11:54.781 21:34:15 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:11:54.781 21:34:15 -- common/autotest_common.sh@870 -- # grep -q -w nbd12 /proc/partitions 00:11:54.781 21:34:15 -- common/autotest_common.sh@871 -- # break 00:11:54.781 21:34:15 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:11:54.781 21:34:15 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:11:54.782 21:34:15 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:54.782 1+0 records in 00:11:54.782 1+0 records out 00:11:54.782 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000391514 s, 10.5 MB/s 00:11:54.782 21:34:15 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:54.782 21:34:15 -- common/autotest_common.sh@884 -- # size=4096 00:11:54.782 21:34:15 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:54.782 21:34:15 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:11:54.782 21:34:15 -- common/autotest_common.sh@887 -- # return 0 00:11:54.782 21:34:15 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:54.782 21:34:15 -- 
bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:11:54.782 21:34:15 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p2 /dev/nbd13 00:11:55.046 /dev/nbd13 00:11:55.046 21:34:15 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:11:55.046 21:34:15 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:11:55.046 21:34:15 -- common/autotest_common.sh@866 -- # local nbd_name=nbd13 00:11:55.046 21:34:15 -- common/autotest_common.sh@867 -- # local i 00:11:55.046 21:34:15 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:11:55.046 21:34:15 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:11:55.046 21:34:15 -- common/autotest_common.sh@870 -- # grep -q -w nbd13 /proc/partitions 00:11:55.046 21:34:15 -- common/autotest_common.sh@871 -- # break 00:11:55.046 21:34:15 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:11:55.046 21:34:15 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:11:55.046 21:34:15 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:55.046 1+0 records in 00:11:55.046 1+0 records out 00:11:55.046 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000468664 s, 8.7 MB/s 00:11:55.046 21:34:15 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:55.046 21:34:15 -- common/autotest_common.sh@884 -- # size=4096 00:11:55.046 21:34:15 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:55.046 21:34:15 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:11:55.046 21:34:15 -- common/autotest_common.sh@887 -- # return 0 00:11:55.046 21:34:15 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:55.046 21:34:15 -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:11:55.046 21:34:15 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p3 /dev/nbd14 00:11:55.317 /dev/nbd14 00:11:55.317 21:34:15 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd14 00:11:55.317 21:34:15 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd14 00:11:55.317 21:34:15 -- common/autotest_common.sh@866 -- # local nbd_name=nbd14 00:11:55.317 21:34:15 -- common/autotest_common.sh@867 -- # local i 00:11:55.317 21:34:15 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:11:55.317 21:34:15 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:11:55.317 21:34:15 -- common/autotest_common.sh@870 -- # grep -q -w nbd14 /proc/partitions 00:11:55.317 21:34:15 -- common/autotest_common.sh@871 -- # break 00:11:55.317 21:34:15 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:11:55.317 21:34:15 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:11:55.317 21:34:15 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd14 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:55.317 1+0 records in 00:11:55.317 1+0 records out 00:11:55.317 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000503954 s, 8.1 MB/s 00:11:55.317 21:34:15 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:55.317 21:34:15 -- common/autotest_common.sh@884 -- # size=4096 00:11:55.317 21:34:15 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:55.317 21:34:15 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:11:55.317 21:34:15 -- common/autotest_common.sh@887 -- # return 0 00:11:55.317 21:34:15 -- bdev/nbd_common.sh@14 -- # (( i++ )) 
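The same attach-and-wait block then repeats for every remaining pair in the entries below; the driving loop is equivalent to this sketch (array contents copied from the trace, start_and_wait being the hypothetical helper sketched earlier):

    bdev_list=(Malloc0 Malloc1p0 Malloc1p1 Malloc2p{0..7} TestPT raid0 concat0 raid1 AIO0)
    nbd_list=(/dev/nbd{0,1,10,11,12,13,14,15,2,3,4,5,6,7,8,9})
    for ((i = 0; i < 16; i++)); do
        start_and_wait "${bdev_list[i]}" "${nbd_list[i]}"
    done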
00:11:55.317 21:34:15 -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:11:55.317 21:34:15 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p4 /dev/nbd15 00:11:55.589 /dev/nbd15 00:11:55.589 21:34:15 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd15 00:11:55.589 21:34:15 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd15 00:11:55.589 21:34:15 -- common/autotest_common.sh@866 -- # local nbd_name=nbd15 00:11:55.589 21:34:15 -- common/autotest_common.sh@867 -- # local i 00:11:55.589 21:34:15 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:11:55.589 21:34:15 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:11:55.589 21:34:15 -- common/autotest_common.sh@870 -- # grep -q -w nbd15 /proc/partitions 00:11:55.589 21:34:15 -- common/autotest_common.sh@871 -- # break 00:11:55.589 21:34:15 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:11:55.589 21:34:15 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:11:55.589 21:34:15 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd15 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:55.589 1+0 records in 00:11:55.589 1+0 records out 00:11:55.589 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000444618 s, 9.2 MB/s 00:11:55.589 21:34:15 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:55.589 21:34:16 -- common/autotest_common.sh@884 -- # size=4096 00:11:55.589 21:34:16 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:55.589 21:34:16 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:11:55.589 21:34:16 -- common/autotest_common.sh@887 -- # return 0 00:11:55.589 21:34:16 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:55.589 21:34:16 -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:11:55.589 21:34:16 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p5 /dev/nbd2 00:11:55.847 /dev/nbd2 00:11:55.847 21:34:16 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd2 00:11:55.847 21:34:16 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd2 00:11:55.847 21:34:16 -- common/autotest_common.sh@866 -- # local nbd_name=nbd2 00:11:55.847 21:34:16 -- common/autotest_common.sh@867 -- # local i 00:11:55.847 21:34:16 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:11:55.847 21:34:16 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:11:55.847 21:34:16 -- common/autotest_common.sh@870 -- # grep -q -w nbd2 /proc/partitions 00:11:55.847 21:34:16 -- common/autotest_common.sh@871 -- # break 00:11:55.847 21:34:16 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:11:55.847 21:34:16 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:11:55.847 21:34:16 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:55.847 1+0 records in 00:11:55.847 1+0 records out 00:11:55.847 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000415845 s, 9.8 MB/s 00:11:55.847 21:34:16 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:55.847 21:34:16 -- common/autotest_common.sh@884 -- # size=4096 00:11:55.847 21:34:16 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:55.847 21:34:16 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:11:55.847 21:34:16 -- common/autotest_common.sh@887 -- # return 0 00:11:55.847 21:34:16 -- bdev/nbd_common.sh@14 
-- # (( i++ )) 00:11:55.847 21:34:16 -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:11:55.847 21:34:16 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p6 /dev/nbd3 00:11:56.105 /dev/nbd3 00:11:56.105 21:34:16 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd3 00:11:56.105 21:34:16 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd3 00:11:56.105 21:34:16 -- common/autotest_common.sh@866 -- # local nbd_name=nbd3 00:11:56.105 21:34:16 -- common/autotest_common.sh@867 -- # local i 00:11:56.105 21:34:16 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:11:56.105 21:34:16 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:11:56.105 21:34:16 -- common/autotest_common.sh@870 -- # grep -q -w nbd3 /proc/partitions 00:11:56.105 21:34:16 -- common/autotest_common.sh@871 -- # break 00:11:56.105 21:34:16 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:11:56.105 21:34:16 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:11:56.105 21:34:16 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:56.105 1+0 records in 00:11:56.105 1+0 records out 00:11:56.105 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000611337 s, 6.7 MB/s 00:11:56.105 21:34:16 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:56.105 21:34:16 -- common/autotest_common.sh@884 -- # size=4096 00:11:56.105 21:34:16 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:56.105 21:34:16 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:11:56.105 21:34:16 -- common/autotest_common.sh@887 -- # return 0 00:11:56.105 21:34:16 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:56.105 21:34:16 -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:11:56.105 21:34:16 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p7 /dev/nbd4 00:11:56.364 /dev/nbd4 00:11:56.364 21:34:16 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd4 00:11:56.364 21:34:16 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd4 00:11:56.364 21:34:16 -- common/autotest_common.sh@866 -- # local nbd_name=nbd4 00:11:56.364 21:34:16 -- common/autotest_common.sh@867 -- # local i 00:11:56.364 21:34:16 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:11:56.364 21:34:16 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:11:56.364 21:34:16 -- common/autotest_common.sh@870 -- # grep -q -w nbd4 /proc/partitions 00:11:56.364 21:34:16 -- common/autotest_common.sh@871 -- # break 00:11:56.364 21:34:16 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:11:56.364 21:34:16 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:11:56.364 21:34:16 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:56.364 1+0 records in 00:11:56.364 1+0 records out 00:11:56.364 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000520868 s, 7.9 MB/s 00:11:56.364 21:34:16 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:56.364 21:34:16 -- common/autotest_common.sh@884 -- # size=4096 00:11:56.364 21:34:16 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:56.364 21:34:16 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:11:56.364 21:34:16 -- common/autotest_common.sh@887 -- # return 0 00:11:56.364 21:34:16 -- 
bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:56.364 21:34:16 -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:11:56.364 21:34:16 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk TestPT /dev/nbd5 00:11:56.622 /dev/nbd5 00:11:56.622 21:34:17 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd5 00:11:56.622 21:34:17 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd5 00:11:56.622 21:34:17 -- common/autotest_common.sh@866 -- # local nbd_name=nbd5 00:11:56.622 21:34:17 -- common/autotest_common.sh@867 -- # local i 00:11:56.622 21:34:17 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:11:56.622 21:34:17 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:11:56.622 21:34:17 -- common/autotest_common.sh@870 -- # grep -q -w nbd5 /proc/partitions 00:11:56.622 21:34:17 -- common/autotest_common.sh@871 -- # break 00:11:56.622 21:34:17 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:11:56.622 21:34:17 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:11:56.622 21:34:17 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:56.622 1+0 records in 00:11:56.622 1+0 records out 00:11:56.622 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000701247 s, 5.8 MB/s 00:11:56.622 21:34:17 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:56.622 21:34:17 -- common/autotest_common.sh@884 -- # size=4096 00:11:56.622 21:34:17 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:56.622 21:34:17 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:11:56.622 21:34:17 -- common/autotest_common.sh@887 -- # return 0 00:11:56.622 21:34:17 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:56.622 21:34:17 -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:11:56.622 21:34:17 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid0 /dev/nbd6 00:11:56.880 /dev/nbd6 00:11:56.880 21:34:17 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd6 00:11:56.880 21:34:17 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd6 00:11:56.880 21:34:17 -- common/autotest_common.sh@866 -- # local nbd_name=nbd6 00:11:56.880 21:34:17 -- common/autotest_common.sh@867 -- # local i 00:11:56.880 21:34:17 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:11:56.880 21:34:17 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:11:56.880 21:34:17 -- common/autotest_common.sh@870 -- # grep -q -w nbd6 /proc/partitions 00:11:56.880 21:34:17 -- common/autotest_common.sh@871 -- # break 00:11:56.880 21:34:17 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:11:56.880 21:34:17 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:11:56.880 21:34:17 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd6 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:56.880 1+0 records in 00:11:56.880 1+0 records out 00:11:56.880 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000714703 s, 5.7 MB/s 00:11:56.880 21:34:17 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:56.880 21:34:17 -- common/autotest_common.sh@884 -- # size=4096 00:11:56.880 21:34:17 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:56.880 21:34:17 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:11:56.880 21:34:17 -- common/autotest_common.sh@887 -- # return 0 00:11:56.880 21:34:17 -- 
bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:56.880 21:34:17 -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:11:56.880 21:34:17 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk concat0 /dev/nbd7 00:11:57.138 /dev/nbd7 00:11:57.138 21:34:17 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd7 00:11:57.138 21:34:17 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd7 00:11:57.138 21:34:17 -- common/autotest_common.sh@866 -- # local nbd_name=nbd7 00:11:57.138 21:34:17 -- common/autotest_common.sh@867 -- # local i 00:11:57.138 21:34:17 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:11:57.138 21:34:17 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:11:57.138 21:34:17 -- common/autotest_common.sh@870 -- # grep -q -w nbd7 /proc/partitions 00:11:57.138 21:34:17 -- common/autotest_common.sh@871 -- # break 00:11:57.138 21:34:17 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:11:57.138 21:34:17 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:11:57.138 21:34:17 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd7 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:57.138 1+0 records in 00:11:57.138 1+0 records out 00:11:57.138 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000563852 s, 7.3 MB/s 00:11:57.138 21:34:17 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:57.138 21:34:17 -- common/autotest_common.sh@884 -- # size=4096 00:11:57.138 21:34:17 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:57.138 21:34:17 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:11:57.138 21:34:17 -- common/autotest_common.sh@887 -- # return 0 00:11:57.138 21:34:17 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:57.138 21:34:17 -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:11:57.138 21:34:17 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid1 /dev/nbd8 00:11:57.396 /dev/nbd8 00:11:57.396 21:34:17 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd8 00:11:57.396 21:34:17 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd8 00:11:57.396 21:34:17 -- common/autotest_common.sh@866 -- # local nbd_name=nbd8 00:11:57.396 21:34:17 -- common/autotest_common.sh@867 -- # local i 00:11:57.396 21:34:17 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:11:57.396 21:34:17 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:11:57.396 21:34:17 -- common/autotest_common.sh@870 -- # grep -q -w nbd8 /proc/partitions 00:11:57.396 21:34:17 -- common/autotest_common.sh@871 -- # break 00:11:57.396 21:34:17 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:11:57.396 21:34:17 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:11:57.396 21:34:17 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd8 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:57.396 1+0 records in 00:11:57.396 1+0 records out 00:11:57.396 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000788619 s, 5.2 MB/s 00:11:57.396 21:34:17 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:57.396 21:34:17 -- common/autotest_common.sh@884 -- # size=4096 00:11:57.396 21:34:17 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:57.396 21:34:17 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:11:57.396 21:34:17 -- common/autotest_common.sh@887 -- # return 0 00:11:57.396 21:34:17 
-- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:57.396 21:34:17 -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:11:57.396 21:34:17 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk AIO0 /dev/nbd9 00:11:57.654 /dev/nbd9 00:11:57.654 21:34:18 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd9 00:11:57.654 21:34:18 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd9 00:11:57.655 21:34:18 -- common/autotest_common.sh@866 -- # local nbd_name=nbd9 00:11:57.655 21:34:18 -- common/autotest_common.sh@867 -- # local i 00:11:57.655 21:34:18 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:11:57.655 21:34:18 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:11:57.655 21:34:18 -- common/autotest_common.sh@870 -- # grep -q -w nbd9 /proc/partitions 00:11:57.655 21:34:18 -- common/autotest_common.sh@871 -- # break 00:11:57.655 21:34:18 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:11:57.655 21:34:18 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:11:57.655 21:34:18 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd9 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:57.655 1+0 records in 00:11:57.655 1+0 records out 00:11:57.655 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00112087 s, 3.7 MB/s 00:11:57.655 21:34:18 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:57.655 21:34:18 -- common/autotest_common.sh@884 -- # size=4096 00:11:57.655 21:34:18 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:57.655 21:34:18 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:11:57.655 21:34:18 -- common/autotest_common.sh@887 -- # return 0 00:11:57.655 21:34:18 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:57.655 21:34:18 -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:11:57.655 21:34:18 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:11:57.655 21:34:18 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:57.655 21:34:18 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:11:57.912 21:34:18 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:11:57.912 { 00:11:57.912 "nbd_device": "/dev/nbd0", 00:11:57.912 "bdev_name": "Malloc0" 00:11:57.912 }, 00:11:57.912 { 00:11:57.912 "nbd_device": "/dev/nbd1", 00:11:57.912 "bdev_name": "Malloc1p0" 00:11:57.912 }, 00:11:57.912 { 00:11:57.912 "nbd_device": "/dev/nbd10", 00:11:57.912 "bdev_name": "Malloc1p1" 00:11:57.912 }, 00:11:57.912 { 00:11:57.912 "nbd_device": "/dev/nbd11", 00:11:57.912 "bdev_name": "Malloc2p0" 00:11:57.912 }, 00:11:57.912 { 00:11:57.912 "nbd_device": "/dev/nbd12", 00:11:57.912 "bdev_name": "Malloc2p1" 00:11:57.912 }, 00:11:57.912 { 00:11:57.912 "nbd_device": "/dev/nbd13", 00:11:57.912 "bdev_name": "Malloc2p2" 00:11:57.912 }, 00:11:57.912 { 00:11:57.912 "nbd_device": "/dev/nbd14", 00:11:57.912 "bdev_name": "Malloc2p3" 00:11:57.912 }, 00:11:57.912 { 00:11:57.912 "nbd_device": "/dev/nbd15", 00:11:57.912 "bdev_name": "Malloc2p4" 00:11:57.912 }, 00:11:57.912 { 00:11:57.912 "nbd_device": "/dev/nbd2", 00:11:57.912 "bdev_name": "Malloc2p5" 00:11:57.912 }, 00:11:57.912 { 00:11:57.912 "nbd_device": "/dev/nbd3", 00:11:57.912 "bdev_name": "Malloc2p6" 00:11:57.912 }, 00:11:57.912 { 00:11:57.912 "nbd_device": "/dev/nbd4", 00:11:57.912 "bdev_name": "Malloc2p7" 00:11:57.912 }, 00:11:57.912 { 00:11:57.912 "nbd_device": "/dev/nbd5", 00:11:57.912 "bdev_name": 
"TestPT" 00:11:57.912 }, 00:11:57.912 { 00:11:57.912 "nbd_device": "/dev/nbd6", 00:11:57.912 "bdev_name": "raid0" 00:11:57.912 }, 00:11:57.912 { 00:11:57.912 "nbd_device": "/dev/nbd7", 00:11:57.912 "bdev_name": "concat0" 00:11:57.912 }, 00:11:57.912 { 00:11:57.912 "nbd_device": "/dev/nbd8", 00:11:57.912 "bdev_name": "raid1" 00:11:57.912 }, 00:11:57.912 { 00:11:57.912 "nbd_device": "/dev/nbd9", 00:11:57.912 "bdev_name": "AIO0" 00:11:57.912 } 00:11:57.912 ]' 00:11:57.912 21:34:18 -- bdev/nbd_common.sh@64 -- # echo '[ 00:11:57.912 { 00:11:57.912 "nbd_device": "/dev/nbd0", 00:11:57.912 "bdev_name": "Malloc0" 00:11:57.912 }, 00:11:57.912 { 00:11:57.912 "nbd_device": "/dev/nbd1", 00:11:57.912 "bdev_name": "Malloc1p0" 00:11:57.912 }, 00:11:57.912 { 00:11:57.912 "nbd_device": "/dev/nbd10", 00:11:57.912 "bdev_name": "Malloc1p1" 00:11:57.912 }, 00:11:57.912 { 00:11:57.912 "nbd_device": "/dev/nbd11", 00:11:57.912 "bdev_name": "Malloc2p0" 00:11:57.912 }, 00:11:57.912 { 00:11:57.912 "nbd_device": "/dev/nbd12", 00:11:57.912 "bdev_name": "Malloc2p1" 00:11:57.912 }, 00:11:57.912 { 00:11:57.912 "nbd_device": "/dev/nbd13", 00:11:57.912 "bdev_name": "Malloc2p2" 00:11:57.912 }, 00:11:57.912 { 00:11:57.912 "nbd_device": "/dev/nbd14", 00:11:57.912 "bdev_name": "Malloc2p3" 00:11:57.912 }, 00:11:57.912 { 00:11:57.912 "nbd_device": "/dev/nbd15", 00:11:57.912 "bdev_name": "Malloc2p4" 00:11:57.912 }, 00:11:57.912 { 00:11:57.912 "nbd_device": "/dev/nbd2", 00:11:57.912 "bdev_name": "Malloc2p5" 00:11:57.912 }, 00:11:57.912 { 00:11:57.912 "nbd_device": "/dev/nbd3", 00:11:57.912 "bdev_name": "Malloc2p6" 00:11:57.912 }, 00:11:57.912 { 00:11:57.912 "nbd_device": "/dev/nbd4", 00:11:57.912 "bdev_name": "Malloc2p7" 00:11:57.912 }, 00:11:57.912 { 00:11:57.912 "nbd_device": "/dev/nbd5", 00:11:57.912 "bdev_name": "TestPT" 00:11:57.912 }, 00:11:57.912 { 00:11:57.912 "nbd_device": "/dev/nbd6", 00:11:57.912 "bdev_name": "raid0" 00:11:57.912 }, 00:11:57.912 { 00:11:57.912 "nbd_device": "/dev/nbd7", 00:11:57.912 "bdev_name": "concat0" 00:11:57.912 }, 00:11:57.912 { 00:11:57.912 "nbd_device": "/dev/nbd8", 00:11:57.912 "bdev_name": "raid1" 00:11:57.912 }, 00:11:57.912 { 00:11:57.912 "nbd_device": "/dev/nbd9", 00:11:57.912 "bdev_name": "AIO0" 00:11:57.912 } 00:11:57.912 ]' 00:11:57.912 21:34:18 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:11:57.912 21:34:18 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:11:57.912 /dev/nbd1 00:11:57.912 /dev/nbd10 00:11:57.912 /dev/nbd11 00:11:57.912 /dev/nbd12 00:11:57.912 /dev/nbd13 00:11:57.912 /dev/nbd14 00:11:57.912 /dev/nbd15 00:11:57.912 /dev/nbd2 00:11:57.912 /dev/nbd3 00:11:57.912 /dev/nbd4 00:11:57.912 /dev/nbd5 00:11:57.912 /dev/nbd6 00:11:57.912 /dev/nbd7 00:11:57.912 /dev/nbd8 00:11:57.912 /dev/nbd9' 00:11:57.912 21:34:18 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:11:57.912 /dev/nbd1 00:11:57.912 /dev/nbd10 00:11:57.912 /dev/nbd11 00:11:57.912 /dev/nbd12 00:11:57.912 /dev/nbd13 00:11:57.912 /dev/nbd14 00:11:57.912 /dev/nbd15 00:11:57.912 /dev/nbd2 00:11:57.912 /dev/nbd3 00:11:57.912 /dev/nbd4 00:11:57.912 /dev/nbd5 00:11:57.912 /dev/nbd6 00:11:57.912 /dev/nbd7 00:11:57.912 /dev/nbd8 00:11:57.912 /dev/nbd9' 00:11:57.912 21:34:18 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:11:57.912 21:34:18 -- bdev/nbd_common.sh@65 -- # count=16 00:11:57.912 21:34:18 -- bdev/nbd_common.sh@66 -- # echo 16 00:11:57.912 21:34:18 -- bdev/nbd_common.sh@95 -- # count=16 00:11:57.912 21:34:18 -- bdev/nbd_common.sh@96 -- # '[' 16 -ne 16 ']' 00:11:57.912 21:34:18 -- 
bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14 /dev/nbd15 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6 /dev/nbd7 /dev/nbd8 /dev/nbd9' write 00:11:57.912 21:34:18 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:11:57.912 21:34:18 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:11:57.912 21:34:18 -- bdev/nbd_common.sh@71 -- # local operation=write 00:11:57.912 21:34:18 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:11:57.912 21:34:18 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:11:57.912 21:34:18 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:11:57.912 256+0 records in 00:11:57.912 256+0 records out 00:11:57.912 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00850827 s, 123 MB/s 00:11:57.912 21:34:18 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:11:57.912 21:34:18 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:11:58.170 256+0 records in 00:11:58.170 256+0 records out 00:11:58.170 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.162963 s, 6.4 MB/s 00:11:58.170 21:34:18 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:11:58.170 21:34:18 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:11:58.427 256+0 records in 00:11:58.427 256+0 records out 00:11:58.427 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.167028 s, 6.3 MB/s 00:11:58.427 21:34:18 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:11:58.427 21:34:18 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:11:58.427 256+0 records in 00:11:58.427 256+0 records out 00:11:58.427 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.166683 s, 6.3 MB/s 00:11:58.427 21:34:18 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:11:58.427 21:34:18 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:11:58.684 256+0 records in 00:11:58.684 256+0 records out 00:11:58.684 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.168066 s, 6.2 MB/s 00:11:58.684 21:34:19 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:11:58.684 21:34:19 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:11:58.941 256+0 records in 00:11:58.941 256+0 records out 00:11:58.941 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.164774 s, 6.4 MB/s 00:11:58.941 21:34:19 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:11:58.941 21:34:19 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:11:58.941 256+0 records in 00:11:58.941 256+0 records out 00:11:58.941 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.153341 s, 6.8 MB/s 00:11:58.941 21:34:19 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:11:58.941 21:34:19 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd14 bs=4096 count=256 oflag=direct 00:11:59.200 256+0 records 
in 00:11:59.200 256+0 records out 00:11:59.200 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.165519 s, 6.3 MB/s 00:11:59.200 21:34:19 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:11:59.200 21:34:19 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd15 bs=4096 count=256 oflag=direct 00:11:59.458 256+0 records in 00:11:59.458 256+0 records out 00:11:59.458 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.160468 s, 6.5 MB/s 00:11:59.458 21:34:19 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:11:59.458 21:34:19 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd2 bs=4096 count=256 oflag=direct 00:11:59.458 256+0 records in 00:11:59.458 256+0 records out 00:11:59.458 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.166468 s, 6.3 MB/s 00:11:59.458 21:34:19 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:11:59.458 21:34:19 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd3 bs=4096 count=256 oflag=direct 00:11:59.716 256+0 records in 00:11:59.716 256+0 records out 00:11:59.716 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.165595 s, 6.3 MB/s 00:11:59.716 21:34:20 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:11:59.716 21:34:20 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd4 bs=4096 count=256 oflag=direct 00:11:59.973 256+0 records in 00:11:59.973 256+0 records out 00:11:59.973 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.164933 s, 6.4 MB/s 00:11:59.973 21:34:20 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:11:59.973 21:34:20 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd5 bs=4096 count=256 oflag=direct 00:11:59.973 256+0 records in 00:11:59.973 256+0 records out 00:11:59.973 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.163504 s, 6.4 MB/s 00:11:59.973 21:34:20 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:11:59.973 21:34:20 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd6 bs=4096 count=256 oflag=direct 00:12:00.230 256+0 records in 00:12:00.230 256+0 records out 00:12:00.230 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.170056 s, 6.2 MB/s 00:12:00.230 21:34:20 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:00.230 21:34:20 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd7 bs=4096 count=256 oflag=direct 00:12:00.489 256+0 records in 00:12:00.489 256+0 records out 00:12:00.489 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.168188 s, 6.2 MB/s 00:12:00.489 21:34:20 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:00.489 21:34:20 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd8 bs=4096 count=256 oflag=direct 00:12:00.489 256+0 records in 00:12:00.489 256+0 records out 00:12:00.489 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.173878 s, 6.0 MB/s 00:12:00.489 21:34:20 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:00.489 21:34:20 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd9 bs=4096 count=256 oflag=direct 00:12:00.746 256+0 records in 00:12:00.746 256+0 records out 00:12:00.746 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.25139 s, 4.2 MB/s 00:12:00.746 21:34:21 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14 
/dev/nbd15 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6 /dev/nbd7 /dev/nbd8 /dev/nbd9' verify 00:12:00.746 21:34:21 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:12:00.746 21:34:21 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:12:00.746 21:34:21 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:12:00.746 21:34:21 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:12:00.746 21:34:21 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:12:00.746 21:34:21 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:12:00.746 21:34:21 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:00.746 21:34:21 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:12:00.746 21:34:21 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:00.746 21:34:21 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:12:00.746 21:34:21 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:00.746 21:34:21 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:12:00.746 21:34:21 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:00.746 21:34:21 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:12:00.746 21:34:21 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:00.746 21:34:21 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:12:00.746 21:34:21 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:00.746 21:34:21 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:12:00.746 21:34:21 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:00.746 21:34:21 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd14 00:12:00.746 21:34:21 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:00.746 21:34:21 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd15 00:12:01.004 21:34:21 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:01.004 21:34:21 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd2 00:12:01.004 21:34:21 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:01.004 21:34:21 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd3 00:12:01.004 21:34:21 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:01.004 21:34:21 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd4 00:12:01.004 21:34:21 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:01.004 21:34:21 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd5 00:12:01.004 21:34:21 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:01.004 21:34:21 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd6 00:12:01.004 21:34:21 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:01.004 21:34:21 -- bdev/nbd_common.sh@83 
-- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd7 00:12:01.004 21:34:21 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:01.004 21:34:21 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd8 00:12:01.004 21:34:21 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:01.004 21:34:21 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd9 00:12:01.004 21:34:21 -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:12:01.004 21:34:21 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14 /dev/nbd15 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6 /dev/nbd7 /dev/nbd8 /dev/nbd9' 00:12:01.004 21:34:21 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:01.004 21:34:21 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:12:01.004 21:34:21 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:01.004 21:34:21 -- bdev/nbd_common.sh@51 -- # local i 00:12:01.004 21:34:21 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:01.004 21:34:21 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:12:01.262 21:34:21 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:12:01.262 21:34:21 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:12:01.262 21:34:21 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:12:01.262 21:34:21 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:01.262 21:34:21 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:01.262 21:34:21 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:12:01.262 21:34:21 -- bdev/nbd_common.sh@41 -- # break 00:12:01.262 21:34:21 -- bdev/nbd_common.sh@45 -- # return 0 00:12:01.262 21:34:21 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:01.262 21:34:21 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:12:01.520 21:34:21 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:12:01.520 21:34:21 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:12:01.520 21:34:21 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:12:01.520 21:34:21 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:01.520 21:34:21 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:01.520 21:34:21 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:12:01.520 21:34:21 -- bdev/nbd_common.sh@41 -- # break 00:12:01.520 21:34:21 -- bdev/nbd_common.sh@45 -- # return 0 00:12:01.520 21:34:21 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:01.520 21:34:21 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:12:01.777 21:34:22 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:12:01.777 21:34:22 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:12:01.777 21:34:22 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:12:01.777 21:34:22 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:01.777 21:34:22 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:01.777 21:34:22 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:12:01.777 
21:34:22 -- bdev/nbd_common.sh@41 -- # break 00:12:01.777 21:34:22 -- bdev/nbd_common.sh@45 -- # return 0 00:12:01.777 21:34:22 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:01.777 21:34:22 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:12:02.035 21:34:22 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:12:02.035 21:34:22 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:12:02.035 21:34:22 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:12:02.035 21:34:22 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:02.035 21:34:22 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:02.035 21:34:22 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:12:02.035 21:34:22 -- bdev/nbd_common.sh@41 -- # break 00:12:02.035 21:34:22 -- bdev/nbd_common.sh@45 -- # return 0 00:12:02.035 21:34:22 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:02.035 21:34:22 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:12:02.294 21:34:22 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:12:02.294 21:34:22 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:12:02.294 21:34:22 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:12:02.294 21:34:22 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:02.294 21:34:22 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:02.294 21:34:22 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:12:02.294 21:34:22 -- bdev/nbd_common.sh@41 -- # break 00:12:02.294 21:34:22 -- bdev/nbd_common.sh@45 -- # return 0 00:12:02.294 21:34:22 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:02.294 21:34:22 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:12:02.552 21:34:22 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:12:02.552 21:34:22 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:12:02.552 21:34:22 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:12:02.552 21:34:22 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:02.552 21:34:22 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:02.552 21:34:22 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:12:02.552 21:34:22 -- bdev/nbd_common.sh@41 -- # break 00:12:02.552 21:34:22 -- bdev/nbd_common.sh@45 -- # return 0 00:12:02.552 21:34:22 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:02.552 21:34:22 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd14 00:12:02.810 21:34:23 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd14 00:12:02.810 21:34:23 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd14 00:12:02.810 21:34:23 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd14 00:12:02.810 21:34:23 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:02.810 21:34:23 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:02.810 21:34:23 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd14 /proc/partitions 00:12:02.810 21:34:23 -- bdev/nbd_common.sh@41 -- # break 00:12:02.810 21:34:23 -- bdev/nbd_common.sh@45 -- # return 0 00:12:02.810 21:34:23 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:02.810 21:34:23 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd15 00:12:03.068 21:34:23 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd15 00:12:03.068 21:34:23 -- 
bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd15 00:12:03.068 21:34:23 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd15 00:12:03.068 21:34:23 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:03.068 21:34:23 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:03.068 21:34:23 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd15 /proc/partitions 00:12:03.068 21:34:23 -- bdev/nbd_common.sh@41 -- # break 00:12:03.068 21:34:23 -- bdev/nbd_common.sh@45 -- # return 0 00:12:03.068 21:34:23 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:03.068 21:34:23 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:12:03.325 21:34:23 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:12:03.325 21:34:23 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:12:03.325 21:34:23 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:12:03.325 21:34:23 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:03.325 21:34:23 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:03.325 21:34:23 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:12:03.325 21:34:23 -- bdev/nbd_common.sh@41 -- # break 00:12:03.326 21:34:23 -- bdev/nbd_common.sh@45 -- # return 0 00:12:03.326 21:34:23 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:03.326 21:34:23 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:12:03.583 21:34:23 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:12:03.583 21:34:23 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:12:03.583 21:34:23 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:12:03.583 21:34:23 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:03.583 21:34:23 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:03.583 21:34:23 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:12:03.583 21:34:23 -- bdev/nbd_common.sh@41 -- # break 00:12:03.583 21:34:23 -- bdev/nbd_common.sh@45 -- # return 0 00:12:03.583 21:34:23 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:03.583 21:34:23 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:12:03.840 21:34:24 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:12:03.840 21:34:24 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:12:03.840 21:34:24 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:12:03.840 21:34:24 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:03.840 21:34:24 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:03.840 21:34:24 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:12:03.840 21:34:24 -- bdev/nbd_common.sh@41 -- # break 00:12:03.840 21:34:24 -- bdev/nbd_common.sh@45 -- # return 0 00:12:03.840 21:34:24 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:03.840 21:34:24 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:12:04.099 21:34:24 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:12:04.099 21:34:24 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:12:04.099 21:34:24 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:12:04.099 21:34:24 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:04.099 21:34:24 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:04.099 21:34:24 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:12:04.099 21:34:24 -- bdev/nbd_common.sh@41 -- # break 00:12:04.099 21:34:24 -- bdev/nbd_common.sh@45 -- # return 0 
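While the last exports (nbd6 through nbd9) are stopped below, the data-integrity pass traced above condenses to three loops: a 1 MiB random pattern is written through every NBD device with direct I/O, compared back byte-for-byte, and each export is then torn down, with waitfornbd_exit polling until the node leaves /proc/partitions. A minimal sketch (file name shortened from the nbdrandtest path used in the trace):

    dd if=/dev/urandom of=nbdrandtest bs=4096 count=256 # 1 MiB reference pattern
    for dev in "${nbd_list[@]}"; do
        dd if=nbdrandtest of="$dev" bs=4096 count=256 oflag=direct # write phase
    done
    for dev in "${nbd_list[@]}"; do
        cmp -b -n 1M nbdrandtest "$dev" # verify phase: any mismatch fails the test
    done
    rm nbdrandtest
    for dev in "${nbd_list[@]}"; do
        /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk "$dev"
    done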
00:12:04.099 21:34:24 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:12:04.099 21:34:24 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd6
00:12:04.357 21:34:24 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd6
00:12:04.357 21:34:24 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd6
00:12:04.357 21:34:24 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd6
00:12:04.357 21:34:24 -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:12:04.357 21:34:24 -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:12:04.357 21:34:24 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd6 /proc/partitions
00:12:04.357 21:34:24 -- bdev/nbd_common.sh@41 -- # break
00:12:04.357 21:34:24 -- bdev/nbd_common.sh@45 -- # return 0
00:12:04.357 21:34:24 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:12:04.357 21:34:24 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd7
00:12:04.615 21:34:24 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd7
00:12:04.615 21:34:24 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd7
00:12:04.615 21:34:24 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd7
00:12:04.615 21:34:24 -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:12:04.615 21:34:24 -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:12:04.615 21:34:24 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd7 /proc/partitions
00:12:04.615 21:34:24 -- bdev/nbd_common.sh@41 -- # break
00:12:04.615 21:34:24 -- bdev/nbd_common.sh@45 -- # return 0
00:12:04.615 21:34:24 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:12:04.615 21:34:24 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd8
00:12:04.872 21:34:25 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd8
00:12:04.872 21:34:25 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd8
00:12:04.872 21:34:25 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd8
00:12:04.872 21:34:25 -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:12:04.872 21:34:25 -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:12:04.872 21:34:25 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd8 /proc/partitions
00:12:04.872 21:34:25 -- bdev/nbd_common.sh@41 -- # break
00:12:04.872 21:34:25 -- bdev/nbd_common.sh@45 -- # return 0
00:12:04.872 21:34:25 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:12:04.872 21:34:25 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd9
00:12:05.129 21:34:25 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd9
00:12:05.129 21:34:25 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd9
00:12:05.129 21:34:25 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd9
00:12:05.129 21:34:25 -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:12:05.129 21:34:25 -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:12:05.129 21:34:25 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd9 /proc/partitions
00:12:05.129 21:34:25 -- bdev/nbd_common.sh@41 -- # break
00:12:05.129 21:34:25 -- bdev/nbd_common.sh@45 -- # return 0
00:12:05.129 21:34:25 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:12:05.129 21:34:25 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:12:05.129 21:34:25 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:12:05.387 21:34:25 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]'
00:12:05.387 21:34:25 -- bdev/nbd_common.sh@64 -- # echo '[]'
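With all sixteen exports gone (nbd_get_disks above now returns an empty list), the suite finishes with an lvol round-trip, traced below: a malloc bdev backs a logical volume store, one volume from it is exported as /dev/nbd0, and mkfs.ext4 must succeed on it. The RPC sequence condensed (commands appear verbatim below; the rpc() wrapper and the MiB reading of the size arguments are assumptions):

    rpc() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock "$@"; }
    rpc bdev_malloc_create -b malloc_lvol_verify 16 512 # 16 (MiB, assumed) bdev, 512 B blocks
    rpc bdev_lvol_create_lvstore malloc_lvol_verify lvs # prints the new lvstore UUID
    rpc bdev_lvol_create lvol 4 -l lvs                  # 4 (MiB, assumed) volume in that store
    rpc nbd_start_disk lvs/lvol /dev/nbd0
    mkfs.ext4 /dev/nbd0                                 # must succeed for the check to pass
    rpc nbd_stop_disk /dev/nbd0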
00:12:05.387 21:34:25 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:12:05.387 21:34:25 -- bdev/nbd_common.sh@64 -- # nbd_disks_name=
00:12:05.387 21:34:25 -- bdev/nbd_common.sh@65 -- # echo ''
00:12:05.387 21:34:25 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:12:05.387 21:34:25 -- bdev/nbd_common.sh@65 -- # true
00:12:05.387 21:34:25 -- bdev/nbd_common.sh@65 -- # count=0
00:12:05.387 21:34:25 -- bdev/nbd_common.sh@66 -- # echo 0
00:12:05.388 21:34:25 -- bdev/nbd_common.sh@104 -- # count=0
00:12:05.388 21:34:25 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']'
00:12:05.388 21:34:25 -- bdev/nbd_common.sh@109 -- # return 0
00:12:05.388 21:34:25 -- bdev/blockdev.sh@322 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14 /dev/nbd15 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6 /dev/nbd7 /dev/nbd8 /dev/nbd9'
00:12:05.388 21:34:25 -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:12:05.388 21:34:25 -- bdev/nbd_common.sh@132 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9')
00:12:05.388 21:34:25 -- bdev/nbd_common.sh@132 -- # local nbd_list
00:12:05.388 21:34:25 -- bdev/nbd_common.sh@133 -- # local mkfs_ret
00:12:05.388 21:34:25 -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512
00:12:05.645 malloc_lvol_verify
00:12:05.645 21:34:25 -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs
00:12:05.904 86224ad2-caab-4091-88e0-b20009c9594b
00:12:05.904 21:34:26 -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs
00:12:06.161 1eabb1fc-68c8-4e66-9c65-c522be4a8502
00:12:06.161 21:34:26 -- bdev/nbd_common.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0
00:12:06.161 /dev/nbd0
00:12:06.161 21:34:26 -- bdev/nbd_common.sh@140 -- # mkfs.ext4 /dev/nbd0
00:12:06.162 mke2fs 1.47.0 (5-Feb-2023)
00:12:06.162
00:12:06.162 Filesystem too small for a journal
00:12:06.162 Discarding device blocks: 0/1024 done
00:12:06.162 Creating filesystem with 1024 4k blocks and 1024 inodes
00:12:06.162
00:12:06.162 Allocating group tables: 0/1 done
00:12:06.162 Writing inode tables: 0/1 done
00:12:06.162 Writing superblocks and filesystem accounting information: 0/1 done
00:12:06.162
00:12:06.162 21:34:26 -- bdev/nbd_common.sh@141 -- # mkfs_ret=0
00:12:06.162 21:34:26 -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0
00:12:06.162 21:34:26 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:12:06.162 21:34:26 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0')
00:12:06.162 21:34:26 -- bdev/nbd_common.sh@50 -- # local nbd_list
00:12:06.162 21:34:26 -- bdev/nbd_common.sh@51 -- # local i
00:12:06.162 21:34:26 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:12:06.162 21:34:26 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0
00:12:06.419 21:34:26 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:12:06.419 21:34:26 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:12:06.419 21:34:26 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:12:06.419 21:34:26 -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:12:06.419 21:34:26 -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:12:06.419 21:34:26 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:12:06.419 21:34:26 -- bdev/nbd_common.sh@41 -- # break
00:12:06.419 21:34:26 -- bdev/nbd_common.sh@45 -- # return 0
00:12:06.419 21:34:26 -- bdev/nbd_common.sh@143 -- # '[' 0 -ne 0 ']'
00:12:06.419 21:34:26 -- bdev/nbd_common.sh@147 -- # return 0
00:12:06.419 21:34:26 -- bdev/blockdev.sh@324 -- # killprocess 65643
00:12:06.419 21:34:26 -- common/autotest_common.sh@936 -- # '[' -z 65643 ']'
00:12:06.419 21:34:26 -- common/autotest_common.sh@940 -- # kill -0 65643
00:12:06.419 21:34:26 -- common/autotest_common.sh@941 -- # uname
00:12:06.419 21:34:26 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:12:06.419 21:34:26 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 65643
00:12:06.419 21:34:26 -- common/autotest_common.sh@942 -- # process_name=reactor_0
00:12:06.419 21:34:26 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:12:06.419 killing process with pid 65643
00:12:06.419 21:34:26 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 65643'
00:12:06.419 21:34:26 -- common/autotest_common.sh@955 -- # kill 65643
00:12:06.419 21:34:26 -- common/autotest_common.sh@960 -- # wait 65643
00:12:08.945 21:34:28 -- bdev/blockdev.sh@325 -- # trap - SIGINT SIGTERM EXIT
00:12:08.945
00:12:08.945 real 0m25.305s
00:12:08.945 user 0m34.863s
00:12:08.945 sys 0m9.075s
00:12:08.945 21:34:28 -- common/autotest_common.sh@1115 -- # xtrace_disable
00:12:08.945 21:34:28 -- common/autotest_common.sh@10 -- # set +x
00:12:08.945 ************************************
00:12:08.945 END TEST bdev_nbd
00:12:08.945 ************************************
00:12:08.945 21:34:28 -- bdev/blockdev.sh@761 -- # [[ y == y ]]
00:12:08.945 21:34:28 -- bdev/blockdev.sh@762 -- # '[' bdev = nvme ']'
00:12:08.945 21:34:28 -- bdev/blockdev.sh@762 -- # '[' bdev = gpt ']'
00:12:08.945 21:34:28 -- bdev/blockdev.sh@766 -- # run_test bdev_fio fio_test_suite ''
00:12:08.945 21:34:28 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']'
00:12:08.945 21:34:28 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:12:08.945 21:34:28 -- common/autotest_common.sh@10 -- # set +x
00:12:08.945 ************************************
00:12:08.945 START TEST bdev_fio
00:12:08.945 ************************************
00:12:08.945 21:34:28 -- common/autotest_common.sh@1114 -- # fio_test_suite ''
00:12:08.945 21:34:28 -- bdev/blockdev.sh@329 -- # local env_context
00:12:08.945 21:34:28 -- bdev/blockdev.sh@333 -- # pushd /home/vagrant/spdk_repo/spdk/test/bdev
00:12:08.945 /home/vagrant/spdk_repo/spdk/test/bdev /home/vagrant/spdk_repo/spdk
00:12:08.945 21:34:28 -- bdev/blockdev.sh@334 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT
00:12:08.945 21:34:28 -- bdev/blockdev.sh@337 -- # echo ''
00:12:08.945 21:34:28 -- bdev/blockdev.sh@337 -- # sed s/--env-context=//
00:12:08.945 21:34:28 -- bdev/blockdev.sh@337 -- # env_context=
00:12:08.945 21:34:28 -- bdev/blockdev.sh@338 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio verify AIO ''
00:12:08.945 21:34:28 -- common/autotest_common.sh@1269 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio
00:12:08.945 21:34:28 -- common/autotest_common.sh@1270 -- # local workload=verify
00:12:08.945 21:34:28 -- common/autotest_common.sh@1271 -- # local
bdev_type=AIO 00:12:08.945 21:34:28 -- common/autotest_common.sh@1272 -- # local env_context= 00:12:08.945 21:34:28 -- common/autotest_common.sh@1273 -- # local fio_dir=/usr/src/fio 00:12:08.945 21:34:28 -- common/autotest_common.sh@1275 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:12:08.945 21:34:28 -- common/autotest_common.sh@1280 -- # '[' -z verify ']' 00:12:08.945 21:34:28 -- common/autotest_common.sh@1284 -- # '[' -n '' ']' 00:12:08.945 21:34:28 -- common/autotest_common.sh@1288 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:12:08.945 21:34:28 -- common/autotest_common.sh@1290 -- # cat 00:12:08.945 21:34:28 -- common/autotest_common.sh@1302 -- # '[' verify == verify ']' 00:12:08.945 21:34:28 -- common/autotest_common.sh@1303 -- # cat 00:12:08.945 21:34:28 -- common/autotest_common.sh@1312 -- # '[' AIO == AIO ']' 00:12:08.945 21:34:28 -- common/autotest_common.sh@1313 -- # /usr/src/fio/fio --version 00:12:08.945 21:34:28 -- common/autotest_common.sh@1313 -- # [[ fio-3.35 == *\f\i\o\-\3* ]] 00:12:08.945 21:34:28 -- common/autotest_common.sh@1314 -- # echo serialize_overlap=1 00:12:08.945 21:34:28 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:12:08.945 21:34:28 -- bdev/blockdev.sh@340 -- # echo '[job_Malloc0]' 00:12:08.945 21:34:28 -- bdev/blockdev.sh@341 -- # echo filename=Malloc0 00:12:08.945 21:34:28 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:12:08.945 21:34:28 -- bdev/blockdev.sh@340 -- # echo '[job_Malloc1p0]' 00:12:08.945 21:34:28 -- bdev/blockdev.sh@341 -- # echo filename=Malloc1p0 00:12:08.945 21:34:28 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:12:08.945 21:34:28 -- bdev/blockdev.sh@340 -- # echo '[job_Malloc1p1]' 00:12:08.945 21:34:28 -- bdev/blockdev.sh@341 -- # echo filename=Malloc1p1 00:12:08.945 21:34:28 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:12:08.945 21:34:28 -- bdev/blockdev.sh@340 -- # echo '[job_Malloc2p0]' 00:12:08.945 21:34:28 -- bdev/blockdev.sh@341 -- # echo filename=Malloc2p0 00:12:08.945 21:34:28 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:12:08.945 21:34:28 -- bdev/blockdev.sh@340 -- # echo '[job_Malloc2p1]' 00:12:08.945 21:34:28 -- bdev/blockdev.sh@341 -- # echo filename=Malloc2p1 00:12:08.945 21:34:28 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:12:08.945 21:34:28 -- bdev/blockdev.sh@340 -- # echo '[job_Malloc2p2]' 00:12:08.945 21:34:28 -- bdev/blockdev.sh@341 -- # echo filename=Malloc2p2 00:12:08.945 21:34:28 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:12:08.945 21:34:28 -- bdev/blockdev.sh@340 -- # echo '[job_Malloc2p3]' 00:12:08.945 21:34:28 -- bdev/blockdev.sh@341 -- # echo filename=Malloc2p3 00:12:08.945 21:34:28 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:12:08.945 21:34:28 -- bdev/blockdev.sh@340 -- # echo '[job_Malloc2p4]' 00:12:08.945 21:34:28 -- bdev/blockdev.sh@341 -- # echo filename=Malloc2p4 00:12:08.945 21:34:28 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:12:08.945 21:34:28 -- bdev/blockdev.sh@340 -- # echo '[job_Malloc2p5]' 00:12:08.945 21:34:28 -- bdev/blockdev.sh@341 -- # echo filename=Malloc2p5 00:12:08.945 21:34:28 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:12:08.945 21:34:28 -- bdev/blockdev.sh@340 -- # echo '[job_Malloc2p6]' 00:12:08.945 21:34:28 -- bdev/blockdev.sh@341 -- # echo filename=Malloc2p6 00:12:08.945 21:34:28 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:12:08.945 21:34:28 -- bdev/blockdev.sh@340 -- # echo 
'[job_Malloc2p7]' 00:12:08.945 21:34:28 -- bdev/blockdev.sh@341 -- # echo filename=Malloc2p7 00:12:08.945 21:34:28 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:12:08.945 21:34:28 -- bdev/blockdev.sh@340 -- # echo '[job_TestPT]' 00:12:08.945 21:34:28 -- bdev/blockdev.sh@341 -- # echo filename=TestPT 00:12:08.945 21:34:28 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:12:08.945 21:34:28 -- bdev/blockdev.sh@340 -- # echo '[job_raid0]' 00:12:08.945 21:34:28 -- bdev/blockdev.sh@341 -- # echo filename=raid0 00:12:08.945 21:34:28 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:12:08.946 21:34:28 -- bdev/blockdev.sh@340 -- # echo '[job_concat0]' 00:12:08.946 21:34:28 -- bdev/blockdev.sh@341 -- # echo filename=concat0 00:12:08.946 21:34:28 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:12:08.946 21:34:28 -- bdev/blockdev.sh@340 -- # echo '[job_raid1]' 00:12:08.946 21:34:28 -- bdev/blockdev.sh@341 -- # echo filename=raid1 00:12:08.946 21:34:28 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:12:08.946 21:34:29 -- bdev/blockdev.sh@340 -- # echo '[job_AIO0]' 00:12:08.946 21:34:29 -- bdev/blockdev.sh@341 -- # echo filename=AIO0 00:12:08.946 21:34:29 -- bdev/blockdev.sh@345 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json' 00:12:08.946 21:34:29 -- bdev/blockdev.sh@347 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:12:08.946 21:34:29 -- common/autotest_common.sh@1087 -- # '[' 11 -le 1 ']' 00:12:08.946 21:34:29 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:08.946 21:34:29 -- common/autotest_common.sh@10 -- # set +x 00:12:08.946 ************************************ 00:12:08.946 START TEST bdev_fio_rw_verify 00:12:08.946 ************************************ 00:12:08.946 21:34:29 -- common/autotest_common.sh@1114 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:12:08.946 21:34:29 -- common/autotest_common.sh@1345 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:12:08.946 21:34:29 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:12:08.946 21:34:29 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:12:08.946 21:34:29 -- common/autotest_common.sh@1328 -- # local sanitizers 00:12:08.946 21:34:29 -- common/autotest_common.sh@1329 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:12:08.946 21:34:29 -- common/autotest_common.sh@1330 -- # shift 00:12:08.946 21:34:29 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:12:08.946 21:34:29 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:12:08.946 21:34:29 -- common/autotest_common.sh@1334 -- # 
00:12:08.946 21:34:29 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
00:12:08.946 21:34:29 -- common/autotest_common.sh@1334 -- # grep libasan
00:12:08.946 21:34:29 -- common/autotest_common.sh@1334 -- # awk '{print $3}'
00:12:08.946 21:34:29 -- common/autotest_common.sh@1334 -- # asan_lib=/lib/x86_64-linux-gnu/libasan.so.8
00:12:08.946 21:34:29 -- common/autotest_common.sh@1335 -- # [[ -n /lib/x86_64-linux-gnu/libasan.so.8 ]]
00:12:08.946 21:34:29 -- common/autotest_common.sh@1336 -- # break
00:12:08.946 21:34:29 -- common/autotest_common.sh@1341 -- # LD_PRELOAD='/lib/x86_64-linux-gnu/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev'
00:12:08.946 21:34:29 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output
00:12:08.946 job_Malloc0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8
00:12:08.946 job_Malloc1p0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8
00:12:08.946 job_Malloc1p1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8
00:12:08.946 job_Malloc2p0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8
00:12:08.946 job_Malloc2p1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8
00:12:08.946 job_Malloc2p2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8
00:12:08.946 job_Malloc2p3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8
00:12:08.946 job_Malloc2p4: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8
00:12:08.946 job_Malloc2p5: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8
00:12:08.946 job_Malloc2p6: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8
00:12:08.946 job_Malloc2p7: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8
00:12:08.946 job_TestPT: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8
00:12:08.946 job_raid0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8
00:12:08.946 job_concat0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8
00:12:08.946 job_raid1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8
00:12:08.946 job_AIO0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8
00:12:08.946 fio-3.35
00:12:08.946 Starting 16 threads
00:12:21.205
00:12:21.205 job_Malloc0: (groupid=0, jobs=16): err= 0: pid=66773: Fri Dec 6 21:34:40 2024
00:12:21.205 read: IOPS=81.4k, BW=318MiB/s (334MB/s)(3182MiB/10001msec)
00:12:21.205 slat (usec): min=2, max=10056, avg=35.17, stdev=233.71
00:12:21.205 clat (usec): min=10, max=15557, avg=275.45, stdev=678.71
00:12:21.205 lat (usec): min=29, max=15564, avg=310.62, stdev=715.83
00:12:21.205 clat percentiles (usec):
00:12:21.205 | 50.000th=[ 165], 99.000th=[ 4228], 99.900th=[ 7242], 99.990th=[ 9372],
00:12:21.205 | 99.999th=[13435]
00:12:21.205 write: IOPS=131k, BW=510MiB/s (535MB/s)(5042MiB/9883msec); 0 zone resets
00:12:21.205 slat (usec): min=6, max=23066, avg=59.68, stdev=311.51
00:12:21.205 clat (usec): min=9, max=15835, avg=352.07, stdev=766.01
00:12:21.205 lat (usec): min=40, max=23367, avg=411.75, stdev=823.76
00:12:21.205 clat percentiles (usec):
00:12:21.205 | 50.000th=[ 212], 99.000th=[ 4293], 99.900th=[ 7308], 99.990th=[11469],
00:12:21.205 | 99.999th=[15270]
00:12:21.205 bw ( KiB/s): min=371845, max=809447, per=98.98%, avg=517036.37, stdev=8238.07, samples=304
00:12:21.205 iops : min=92961, max=202361, avg=129258.32, stdev=2059.50, samples=304
00:12:21.205 lat (usec) : 10=0.01%, 20=0.01%, 50=0.63%, 100=13.99%, 250=57.61%
00:12:21.205 lat (usec) : 500=23.61%, 750=0.77%, 1000=0.14%
00:12:21.205 lat (msec) : 2=0.14%, 4=1.15%, 10=1.94%, 20=0.02%
00:12:21.205 cpu : usr=57.82%, sys=2.41%, ctx=227911, majf=0, minf=106550
00:12:21.205 IO depths : 1=11.3%, 2=24.0%, 4=51.7%, 8=13.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:12:21.205 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:12:21.205 complete : 0=0.0%, 4=88.8%, 8=11.2%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:12:21.205 issued rwts: total=814544,1290665,0,0 short=0,0,0,0 dropped=0,0,0,0
00:12:21.205 latency : target=0, window=0, percentile=100.00%, depth=8
00:12:21.205
00:12:21.205 Run status group 0 (all jobs):
00:12:21.205 READ: bw=318MiB/s (334MB/s), 318MiB/s-318MiB/s (334MB/s-334MB/s), io=3182MiB (3336MB), run=10001-10001msec
00:12:21.205 WRITE: bw=510MiB/s (535MB/s), 510MiB/s-510MiB/s (535MB/s-535MB/s), io=5042MiB (5287MB), run=9883-9883msec
00:12:22.584 -----------------------------------------------------
00:12:22.584 Suppressions used:
00:12:22.584 count bytes template
00:12:22.584 16 140 /usr/src/fio/parse.c
00:12:22.584 11924 1144704 /usr/src/fio/iolog.c
00:12:22.584 1 904 libcrypto.so
00:12:22.584 -----------------------------------------------------
00:12:22.584
00:12:22.584
00:12:22.584 real 0m14.048s
00:12:22.584 user 1m37.262s
00:12:22.584 sys 0m4.915s
00:12:22.584 21:34:43 -- common/autotest_common.sh@1115 -- # xtrace_disable
00:12:22.584 21:34:43 -- common/autotest_common.sh@10 -- # set +x
00:12:22.584 ************************************
00:12:22.584 END TEST bdev_fio_rw_verify
00:12:22.584 ************************************
00:12:22.846 21:34:43 -- bdev/blockdev.sh@348 -- # rm -f
00:12:22.846 21:34:43 -- bdev/blockdev.sh@349 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio
00:12:22.846 21:34:43 -- bdev/blockdev.sh@352 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio trim '' ''
00:12:22.846 21:34:43 -- common/autotest_common.sh@1269 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio
00:12:22.846 21:34:43 -- common/autotest_common.sh@1270 -- # local workload=trim
00:12:22.846 21:34:43 -- common/autotest_common.sh@1271 -- # local bdev_type=
00:12:22.847 21:34:43 -- common/autotest_common.sh@1272 -- # local env_context=
00:12:22.847 21:34:43 -- common/autotest_common.sh@1273 -- # local fio_dir=/usr/src/fio
00:12:22.847 21:34:43 -- common/autotest_common.sh@1275 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']'
00:12:22.847 21:34:43 -- common/autotest_common.sh@1280 -- # '[' -z trim ']'
00:12:22.847 21:34:43 -- common/autotest_common.sh@1284 -- # '[' -n '' ']'
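
The rw_verify pass above drives fio through SPDK's external ioengine plugin. Because this build is ASAN-instrumented, the harness first resolves the ASAN runtime the plugin links against and preloads it ahead of the plugin itself, so the sanitizer is initialized before any plugin code runs. Condensed into a standalone sketch (paths and flags exactly as recorded in this run):

plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
# Resolve the ASAN runtime the plugin was linked against (libasan.so.8 here):
asan_lib=$(ldd "$plugin" | grep libasan | awk '{print $3}')
LD_PRELOAD="$asan_lib $plugin" /usr/src/fio/fio \
    --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 \
    /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio \
    --verify_state_save=0 \
    --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
    --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output

00:12:22.847 21:34:43 -- common/autotest_common.sh@1288 -- # touch 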
/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:12:22.847 21:34:43 -- common/autotest_common.sh@1290 -- # cat 00:12:22.847 21:34:43 -- common/autotest_common.sh@1302 -- # '[' trim == verify ']' 00:12:22.847 21:34:43 -- common/autotest_common.sh@1317 -- # '[' trim == trim ']' 00:12:22.847 21:34:43 -- common/autotest_common.sh@1318 -- # echo rw=trimwrite 00:12:22.847 21:34:43 -- bdev/blockdev.sh@353 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:12:22.848 21:34:43 -- bdev/blockdev.sh@353 -- # printf '%s\n' '{' ' "name": "Malloc0",' ' "aliases": [' ' "fc561d1b-591c-4214-a1a9-152e584b27fa"' ' ],' ' "product_name": "Malloc disk",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "fc561d1b-591c-4214-a1a9-152e584b27fa",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 20000,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {}' '}' '{' ' "name": "Malloc1p0",' ' "aliases": [' ' "e9b73305-0166-5820-8e5d-33b96bc4e0cf"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 32768,' ' "uuid": "e9b73305-0166-5820-8e5d-33b96bc4e0cf",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc1",' ' "offset_blocks": 0' ' }' ' }' '}' '{' ' "name": "Malloc1p1",' ' "aliases": [' ' "6e915eaf-0d76-5f4f-80f3-746ed3490b50"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 32768,' ' "uuid": "6e915eaf-0d76-5f4f-80f3-746ed3490b50",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc1",' ' "offset_blocks": 32768' ' }' ' }' '}' '{' ' "name": "Malloc2p0",' ' "aliases": [' ' "7de26293-dbab-587c-8868-9a4ae4b4b2fd"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "7de26293-dbab-587c-8868-9a4ae4b4b2fd",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' 
' "base_bdev": "Malloc2",' ' "offset_blocks": 0' ' }' ' }' '}' '{' ' "name": "Malloc2p1",' ' "aliases": [' ' "ad4db46d-0563-5409-9ee3-6e1267d4c526"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "ad4db46d-0563-5409-9ee3-6e1267d4c526",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 8192' ' }' ' }' '}' '{' ' "name": "Malloc2p2",' ' "aliases": [' ' "8ca2f68c-0df3-5fa8-a45e-ac906d79b842"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "8ca2f68c-0df3-5fa8-a45e-ac906d79b842",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 16384' ' }' ' }' '}' '{' ' "name": "Malloc2p3",' ' "aliases": [' ' "0a545c12-d22d-50d1-889c-02223b6ca173"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "0a545c12-d22d-50d1-889c-02223b6ca173",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 24576' ' }' ' }' '}' '{' ' "name": "Malloc2p4",' ' "aliases": [' ' "3da8c24e-47b1-5151-bcd9-9b6ab1293ff1"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "3da8c24e-47b1-5151-bcd9-9b6ab1293ff1",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 32768' ' }' ' }' '}' '{' ' "name": "Malloc2p5",' ' "aliases": [' ' "50e42ff9-56cb-5c63-995e-4579a1cdc08d"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "50e42ff9-56cb-5c63-995e-4579a1cdc08d",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": 
true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 40960' ' }' ' }' '}' '{' ' "name": "Malloc2p6",' ' "aliases": [' ' "bd0e04ee-67fc-588d-8765-2f15ae0f8360"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "bd0e04ee-67fc-588d-8765-2f15ae0f8360",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 49152' ' }' ' }' '}' '{' ' "name": "Malloc2p7",' ' "aliases": [' ' "0f393409-9e3b-5cc8-bf12-17e2e84563d6"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "0f393409-9e3b-5cc8-bf12-17e2e84563d6",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 57344' ' }' ' }' '}' '{' ' "name": "TestPT",' ' "aliases": [' ' "36050b0c-6140-598f-976d-ab4dc39fc87b"' ' ],' ' "product_name": "passthru",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "36050b0c-6140-598f-976d-ab4dc39fc87b",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "passthru": {' ' "name": "TestPT",' ' "base_bdev_name": "Malloc3"' ' }' ' }' '}' '{' ' "name": "raid0",' ' "aliases": [' ' "c9ebc370-a4b7-4f5e-8d6d-d2d55635b680"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "c9ebc370-a4b7-4f5e-8d6d-d2d55635b680",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' 
"dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "c9ebc370-a4b7-4f5e-8d6d-d2d55635b680",' ' "strip_size_kb": 64,' ' "state": "online",' ' "raid_level": "raid0",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc4",' ' "uuid": "0035ee72-8d1f-4ebf-bc26-cb9fb0eb23c1",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc5",' ' "uuid": "0672dca1-8865-4234-b229-0cdfed22c1eb",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "concat0",' ' "aliases": [' ' "778bcae9-5468-4fc8-b7ce-f38f034ef686"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "778bcae9-5468-4fc8-b7ce-f38f034ef686",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "778bcae9-5468-4fc8-b7ce-f38f034ef686",' ' "strip_size_kb": 64,' ' "state": "online",' ' "raid_level": "concat",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc6",' ' "uuid": "b1ed11d1-ef58-40d8-ba63-1358bd2b16ae",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc7",' ' "uuid": "df5e07ab-09af-4062-b4fb-08ead138a5a1",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "raid1",' ' "aliases": [' ' "a1800cf4-baec-4f3b-9ac7-320c08782cc4"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "a1800cf4-baec-4f3b-9ac7-320c08782cc4",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "a1800cf4-baec-4f3b-9ac7-320c08782cc4",' ' "strip_size_kb": 0,' ' "state": "online",' ' "raid_level": "raid1",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc8",' ' "uuid": "7996097a-9eb9-4252-a6ae-ed3b929a5f2a",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc9",' ' "uuid": "77ecd56a-bdba-48e4-a269-cc7adc62017f",' ' "is_configured": true,' ' 
"data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "AIO0",' ' "aliases": [' ' "9efd786f-acbd-4708-95a9-cef962d2666c"' ' ],' ' "product_name": "AIO disk",' ' "block_size": 2048,' ' "num_blocks": 5000,' ' "uuid": "9efd786f-acbd-4708-95a9-cef962d2666c",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "aio": {' ' "filename": "/home/vagrant/spdk_repo/spdk/test/bdev/aiofile",' ' "block_size_override": true,' ' "readonly": false' ' }' ' }' '}' 00:12:22.848 21:34:43 -- bdev/blockdev.sh@353 -- # [[ -n Malloc0 00:12:22.848 Malloc1p0 00:12:22.848 Malloc1p1 00:12:22.848 Malloc2p0 00:12:22.848 Malloc2p1 00:12:22.848 Malloc2p2 00:12:22.848 Malloc2p3 00:12:22.848 Malloc2p4 00:12:22.848 Malloc2p5 00:12:22.848 Malloc2p6 00:12:22.848 Malloc2p7 00:12:22.848 TestPT 00:12:22.848 raid0 00:12:22.848 concat0 ]] 00:12:22.848 21:34:43 -- bdev/blockdev.sh@354 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:12:22.849 21:34:43 -- bdev/blockdev.sh@354 -- # printf '%s\n' '{' ' "name": "Malloc0",' ' "aliases": [' ' "fc561d1b-591c-4214-a1a9-152e584b27fa"' ' ],' ' "product_name": "Malloc disk",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "fc561d1b-591c-4214-a1a9-152e584b27fa",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 20000,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {}' '}' '{' ' "name": "Malloc1p0",' ' "aliases": [' ' "e9b73305-0166-5820-8e5d-33b96bc4e0cf"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 32768,' ' "uuid": "e9b73305-0166-5820-8e5d-33b96bc4e0cf",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc1",' ' "offset_blocks": 0' ' }' ' }' '}' '{' ' "name": "Malloc1p1",' ' "aliases": [' ' "6e915eaf-0d76-5f4f-80f3-746ed3490b50"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 32768,' ' "uuid": "6e915eaf-0d76-5f4f-80f3-746ed3490b50",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' 
"compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc1",' ' "offset_blocks": 32768' ' }' ' }' '}' '{' ' "name": "Malloc2p0",' ' "aliases": [' ' "7de26293-dbab-587c-8868-9a4ae4b4b2fd"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "7de26293-dbab-587c-8868-9a4ae4b4b2fd",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 0' ' }' ' }' '}' '{' ' "name": "Malloc2p1",' ' "aliases": [' ' "ad4db46d-0563-5409-9ee3-6e1267d4c526"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "ad4db46d-0563-5409-9ee3-6e1267d4c526",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 8192' ' }' ' }' '}' '{' ' "name": "Malloc2p2",' ' "aliases": [' ' "8ca2f68c-0df3-5fa8-a45e-ac906d79b842"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "8ca2f68c-0df3-5fa8-a45e-ac906d79b842",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 16384' ' }' ' }' '}' '{' ' "name": "Malloc2p3",' ' "aliases": [' ' "0a545c12-d22d-50d1-889c-02223b6ca173"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "0a545c12-d22d-50d1-889c-02223b6ca173",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 24576' ' }' ' }' '}' '{' ' "name": "Malloc2p4",' ' "aliases": [' ' "3da8c24e-47b1-5151-bcd9-9b6ab1293ff1"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "3da8c24e-47b1-5151-bcd9-9b6ab1293ff1",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' 
' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 32768' ' }' ' }' '}' '{' ' "name": "Malloc2p5",' ' "aliases": [' ' "50e42ff9-56cb-5c63-995e-4579a1cdc08d"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "50e42ff9-56cb-5c63-995e-4579a1cdc08d",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 40960' ' }' ' }' '}' '{' ' "name": "Malloc2p6",' ' "aliases": [' ' "bd0e04ee-67fc-588d-8765-2f15ae0f8360"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "bd0e04ee-67fc-588d-8765-2f15ae0f8360",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 49152' ' }' ' }' '}' '{' ' "name": "Malloc2p7",' ' "aliases": [' ' "0f393409-9e3b-5cc8-bf12-17e2e84563d6"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "0f393409-9e3b-5cc8-bf12-17e2e84563d6",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 57344' ' }' ' }' '}' '{' ' "name": "TestPT",' ' "aliases": [' ' "36050b0c-6140-598f-976d-ab4dc39fc87b"' ' ],' ' "product_name": "passthru",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "36050b0c-6140-598f-976d-ab4dc39fc87b",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' 
"passthru": {' ' "name": "TestPT",' ' "base_bdev_name": "Malloc3"' ' }' ' }' '}' '{' ' "name": "raid0",' ' "aliases": [' ' "c9ebc370-a4b7-4f5e-8d6d-d2d55635b680"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "c9ebc370-a4b7-4f5e-8d6d-d2d55635b680",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "c9ebc370-a4b7-4f5e-8d6d-d2d55635b680",' ' "strip_size_kb": 64,' ' "state": "online",' ' "raid_level": "raid0",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc4",' ' "uuid": "0035ee72-8d1f-4ebf-bc26-cb9fb0eb23c1",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc5",' ' "uuid": "0672dca1-8865-4234-b229-0cdfed22c1eb",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "concat0",' ' "aliases": [' ' "778bcae9-5468-4fc8-b7ce-f38f034ef686"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "778bcae9-5468-4fc8-b7ce-f38f034ef686",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "778bcae9-5468-4fc8-b7ce-f38f034ef686",' ' "strip_size_kb": 64,' ' "state": "online",' ' "raid_level": "concat",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc6",' ' "uuid": "b1ed11d1-ef58-40d8-ba63-1358bd2b16ae",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc7",' ' "uuid": "df5e07ab-09af-4062-b4fb-08ead138a5a1",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "raid1",' ' "aliases": [' ' "a1800cf4-baec-4f3b-9ac7-320c08782cc4"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "a1800cf4-baec-4f3b-9ac7-320c08782cc4",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' 
"reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "a1800cf4-baec-4f3b-9ac7-320c08782cc4",' ' "strip_size_kb": 0,' ' "state": "online",' ' "raid_level": "raid1",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc8",' ' "uuid": "7996097a-9eb9-4252-a6ae-ed3b929a5f2a",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc9",' ' "uuid": "77ecd56a-bdba-48e4-a269-cc7adc62017f",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "AIO0",' ' "aliases": [' ' "9efd786f-acbd-4708-95a9-cef962d2666c"' ' ],' ' "product_name": "AIO disk",' ' "block_size": 2048,' ' "num_blocks": 5000,' ' "uuid": "9efd786f-acbd-4708-95a9-cef962d2666c",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "aio": {' ' "filename": "/home/vagrant/spdk_repo/spdk/test/bdev/aiofile",' ' "block_size_override": true,' ' "readonly": false' ' }' ' }' '}' 00:12:22.849 21:34:43 -- bdev/blockdev.sh@354 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:12:22.849 21:34:43 -- bdev/blockdev.sh@355 -- # echo '[job_Malloc0]' 00:12:22.849 21:34:43 -- bdev/blockdev.sh@356 -- # echo filename=Malloc0 00:12:22.849 21:34:43 -- bdev/blockdev.sh@354 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:12:22.849 21:34:43 -- bdev/blockdev.sh@355 -- # echo '[job_Malloc1p0]' 00:12:22.849 21:34:43 -- bdev/blockdev.sh@356 -- # echo filename=Malloc1p0 00:12:22.849 21:34:43 -- bdev/blockdev.sh@354 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:12:22.849 21:34:43 -- bdev/blockdev.sh@355 -- # echo '[job_Malloc1p1]' 00:12:22.849 21:34:43 -- bdev/blockdev.sh@356 -- # echo filename=Malloc1p1 00:12:22.849 21:34:43 -- bdev/blockdev.sh@354 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:12:22.849 21:34:43 -- bdev/blockdev.sh@355 -- # echo '[job_Malloc2p0]' 00:12:22.849 21:34:43 -- bdev/blockdev.sh@356 -- # echo filename=Malloc2p0 00:12:22.849 21:34:43 -- bdev/blockdev.sh@354 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:12:22.849 21:34:43 -- bdev/blockdev.sh@355 -- # echo '[job_Malloc2p1]' 00:12:22.849 21:34:43 -- bdev/blockdev.sh@356 -- # echo filename=Malloc2p1 00:12:22.849 21:34:43 -- bdev/blockdev.sh@354 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:12:22.849 21:34:43 -- bdev/blockdev.sh@355 -- # echo '[job_Malloc2p2]' 00:12:22.849 21:34:43 -- bdev/blockdev.sh@356 -- # echo 
filename=Malloc2p2 00:12:22.849 21:34:43 -- bdev/blockdev.sh@354 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:12:22.849 21:34:43 -- bdev/blockdev.sh@355 -- # echo '[job_Malloc2p3]' 00:12:22.849 21:34:43 -- bdev/blockdev.sh@356 -- # echo filename=Malloc2p3 00:12:22.849 21:34:43 -- bdev/blockdev.sh@354 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:12:22.849 21:34:43 -- bdev/blockdev.sh@355 -- # echo '[job_Malloc2p4]' 00:12:22.849 21:34:43 -- bdev/blockdev.sh@356 -- # echo filename=Malloc2p4 00:12:22.849 21:34:43 -- bdev/blockdev.sh@354 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:12:22.849 21:34:43 -- bdev/blockdev.sh@355 -- # echo '[job_Malloc2p5]' 00:12:22.849 21:34:43 -- bdev/blockdev.sh@356 -- # echo filename=Malloc2p5 00:12:22.849 21:34:43 -- bdev/blockdev.sh@354 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:12:22.849 21:34:43 -- bdev/blockdev.sh@355 -- # echo '[job_Malloc2p6]' 00:12:22.849 21:34:43 -- bdev/blockdev.sh@356 -- # echo filename=Malloc2p6 00:12:22.849 21:34:43 -- bdev/blockdev.sh@354 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:12:22.849 21:34:43 -- bdev/blockdev.sh@355 -- # echo '[job_Malloc2p7]' 00:12:22.849 21:34:43 -- bdev/blockdev.sh@356 -- # echo filename=Malloc2p7 00:12:22.849 21:34:43 -- bdev/blockdev.sh@354 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:12:22.849 21:34:43 -- bdev/blockdev.sh@355 -- # echo '[job_TestPT]' 00:12:22.849 21:34:43 -- bdev/blockdev.sh@356 -- # echo filename=TestPT 00:12:22.849 21:34:43 -- bdev/blockdev.sh@354 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:12:22.849 21:34:43 -- bdev/blockdev.sh@355 -- # echo '[job_raid0]' 00:12:22.849 21:34:43 -- bdev/blockdev.sh@356 -- # echo filename=raid0 00:12:22.849 21:34:43 -- bdev/blockdev.sh@354 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:12:22.849 21:34:43 -- bdev/blockdev.sh@355 -- # echo '[job_concat0]' 00:12:22.849 21:34:43 -- bdev/blockdev.sh@356 -- # echo filename=concat0 00:12:22.849 21:34:43 -- bdev/blockdev.sh@365 -- # run_test bdev_fio_trim fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:12:22.849 21:34:43 -- common/autotest_common.sh@1087 -- # '[' 11 -le 1 ']' 00:12:22.849 21:34:43 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:22.849 21:34:43 -- common/autotest_common.sh@10 -- # set +x 00:12:22.849 ************************************ 00:12:22.849 START TEST bdev_fio_trim 00:12:22.849 ************************************ 00:12:22.849 21:34:43 -- common/autotest_common.sh@1114 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:12:22.849 21:34:43 -- common/autotest_common.sh@1345 -- # fio_plugin 
/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output
00:12:22.849 21:34:43 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio
00:12:22.849 21:34:43 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan')
00:12:22.849 21:34:43 -- common/autotest_common.sh@1328 -- # local sanitizers
00:12:22.849 21:34:43 -- common/autotest_common.sh@1329 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
00:12:22.849 21:34:43 -- common/autotest_common.sh@1330 -- # shift
00:12:22.849 21:34:43 -- common/autotest_common.sh@1332 -- # local asan_lib=
00:12:22.849 21:34:43 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}"
00:12:22.849 21:34:43 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
00:12:22.849 21:34:43 -- common/autotest_common.sh@1334 -- # grep libasan
00:12:22.849 21:34:43 -- common/autotest_common.sh@1334 -- # awk '{print $3}'
00:12:22.849 21:34:43 -- common/autotest_common.sh@1334 -- # asan_lib=/lib/x86_64-linux-gnu/libasan.so.8
00:12:22.849 21:34:43 -- common/autotest_common.sh@1335 -- # [[ -n /lib/x86_64-linux-gnu/libasan.so.8 ]]
00:12:22.849 21:34:43 -- common/autotest_common.sh@1336 -- # break
00:12:22.849 21:34:43 -- common/autotest_common.sh@1341 -- # LD_PRELOAD='/lib/x86_64-linux-gnu/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev'
00:12:22.849 21:34:43 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output
00:12:23.110 job_Malloc0: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8
00:12:23.110 job_Malloc1p0: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8
00:12:23.110 job_Malloc1p1: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8
00:12:23.110 job_Malloc2p0: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8
00:12:23.110 job_Malloc2p1: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8
00:12:23.110 job_Malloc2p2: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8
00:12:23.110 job_Malloc2p3: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8
00:12:23.110 job_Malloc2p4: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8
00:12:23.110 job_Malloc2p5: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8
00:12:23.110 job_Malloc2p6: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8
00:12:23.110 job_Malloc2p7: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8
00:12:23.110 job_TestPT: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8
00:12:23.110 job_raid0: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8
00:12:23.110 job_concat0: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8
00:12:23.110 fio-3.35
00:12:23.110 Starting 14 threads
00:12:35.324
00:12:35.325 job_Malloc0: (groupid=0, jobs=14): err= 0: pid=66978: Fri Dec 6 21:34:54 2024
00:12:35.325 write: IOPS=165k, BW=646MiB/s (678MB/s)(6465MiB/10002msec); 0 zone resets
00:12:35.325 slat (usec): min=2, max=12061, avg=30.48, stdev=193.86
00:12:35.325 clat (usec): min=27, max=13220, avg=215.34, stdev=506.47
00:12:35.325 lat (usec): min=41, max=13239, avg=245.82, stdev=540.91
00:12:35.325 clat percentiles (usec):
00:12:35.325 | 50.000th=[ 143], 99.000th=[ 4113], 99.900th=[ 6128], 99.990th=[ 7242],
00:12:35.325 | 99.999th=[10159]
00:12:35.325 bw ( KiB/s): min=482749, max=880301, per=100.00%, avg=662134.11, stdev=9602.96, samples=266
00:12:35.325 iops : min=120686, max=220074, avg=165532.74, stdev=2400.73, samples=266
00:12:35.325 trim: IOPS=165k, BW=646MiB/s (678MB/s)(6465MiB/10002msec); 0 zone resets
00:12:35.325 slat (usec): min=4, max=13045, avg=20.51, stdev=159.44
00:12:35.325 clat (usec): min=4, max=13239, avg=227.98, stdev=525.61
00:12:35.325 lat (usec): min=14, max=13281, avg=248.50, stdev=548.68
00:12:35.325 clat percentiles (usec):
00:12:35.325 | 50.000th=[ 159], 99.000th=[ 4146], 99.900th=[ 6194], 99.990th=[ 7242],
00:12:35.325 | 99.999th=[10290]
00:12:35.325 bw ( KiB/s): min=482781, max=880245, per=100.00%, avg=662134.11, stdev=9602.51, samples=266
00:12:35.325 iops : min=120694, max=220060, avg=165532.84, stdev=2400.62, samples=266
00:12:35.325 lat (usec) : 10=0.13%, 20=0.34%, 50=1.15%, 100=15.32%, 250=76.62%
00:12:35.325 lat (usec) : 500=4.58%, 750=0.17%, 1000=0.01%
00:12:35.325 lat (msec) : 2=0.04%, 4=0.49%, 10=1.15%, 20=0.01%
00:12:35.325 cpu : usr=68.46%, sys=0.97%, ctx=147315, majf=0, minf=15815
00:12:35.325 IO depths : 1=12.3%, 2=24.5%, 4=50.0%, 8=13.2%, 16=0.0%, 32=0.0%, >=64=0.0%
00:12:35.325 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:12:35.325 complete : 0=0.0%, 4=89.2%, 8=10.8%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:12:35.325 issued rwts: total=0,1655133,1655135,0 short=0,0,0,0 dropped=0,0,0,0
00:12:35.325 latency : target=0, window=0, percentile=100.00%, depth=8
00:12:35.325
00:12:35.325 Run status group 0 (all jobs):
00:12:35.325 WRITE: bw=646MiB/s (678MB/s), 646MiB/s-646MiB/s (678MB/s-678MB/s), io=6465MiB (6779MB), run=10002-10002msec
00:12:35.325 TRIM: bw=646MiB/s (678MB/s), 646MiB/s-646MiB/s (678MB/s-678MB/s), io=6465MiB (6779MB), run=10002-10002msec
00:12:36.702 -----------------------------------------------------
00:12:36.702 Suppressions used:
00:12:36.702 count bytes template
00:12:36.702 14 129 /usr/src/fio/parse.c
00:12:36.702 1 904 libcrypto.so
00:12:36.702 -----------------------------------------------------
00:12:36.702
00:12:36.702
00:12:36.702 real 0m13.806s
00:12:36.702 user 1m40.188s
00:12:36.702 sys 0m2.591s
00:12:36.702 21:34:56 -- common/autotest_common.sh@1115 -- # xtrace_disable
00:12:36.702 ************************************
00:12:36.702 END TEST bdev_fio_trim
00:12:36.702 ************************************
00:12:36.702 21:34:56 -- common/autotest_common.sh@10 -- # set +x
00:12:36.702 21:34:57 -- bdev/blockdev.sh@366 -- # rm -f
00:12:36.702 21:34:57 -- bdev/blockdev.sh@367 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio
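
Only 14 of the 16 bdevs get trim jobs, because job sections are emitted solely for bdevs whose supported_io_types report unmap (raid1 and AIO0 do not, per the JSON dumps above). Each job uses fio's trimwrite pattern, in which every block range is both trimmed and written, which is why the WRITE and TRIM totals match exactly (6465MiB each). The generation loop traced earlier, condensed into a sketch, assuming bdevs holds the JSON objects printed above; the redirect into the job file is an assumption, since the trace only shows the echoes:

for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name'); do
    echo "[job_$b]"     # one fio job section per unmap-capable bdev
    echo "filename=$b"  # the spdk_bdev engine resolves this name against the JSON config
done >> /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio  # assumed destination
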
00:12:36.702 /home/vagrant/spdk_repo/spdk
00:12:36.702 21:34:57 -- bdev/blockdev.sh@368 -- # popd
00:12:36.702 21:34:57 -- bdev/blockdev.sh@369 -- # trap - SIGINT SIGTERM EXIT
00:12:36.702
00:12:36.702 real 0m28.134s
00:12:36.702 user 3m17.551s
00:12:36.702 sys 0m7.650s
00:12:36.702 21:34:57 -- common/autotest_common.sh@1115 -- # xtrace_disable
00:12:36.702 ************************************
00:12:36.702 END TEST bdev_fio 21:34:57 -- common/autotest_common.sh@10 -- # set +x
00:12:36.702 ************************************
00:12:36.702 21:34:57 -- bdev/blockdev.sh@773 -- # trap cleanup SIGINT SIGTERM EXIT
00:12:36.702 21:34:57 -- bdev/blockdev.sh@775 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 ''
00:12:36.702 21:34:57 -- common/autotest_common.sh@1087 -- # '[' 16 -le 1 ']'
00:12:36.702 21:34:57 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:12:36.702 21:34:57 -- common/autotest_common.sh@10 -- # set +x
00:12:36.702 ************************************
00:12:36.702 START TEST bdev_verify
00:12:36.702 ************************************
00:12:36.702 21:34:57 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 ''
00:12:36.960 [2024-12-06 21:34:57.144980] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:12:36.960 [2024-12-06 21:34:57.145156] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67149 ]
00:12:37.219 [2024-12-06 21:34:57.319602] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2
00:12:37.219 [2024-12-06 21:34:57.617049] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:12:37.219 [2024-12-06 21:34:57.617058] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:12:37.477 [2024-12-06 21:34:57.954332] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1
00:12:37.477 [2024-12-06 21:34:57.954465] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1
00:12:37.477 [2024-12-06 21:34:57.962282] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2
00:12:37.477 [2024-12-06 21:34:57.962343] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2
00:12:37.477 [2024-12-06 21:34:57.970318] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3
00:12:37.477 [2024-12-06 21:34:57.970389] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3
00:12:37.477 [2024-12-06 21:34:57.970430] vbdev_passthru.c: 731:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival
00:12:37.736 [2024-12-06 21:34:58.136705] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3
00:12:37.736 [2024-12-06 21:34:58.136781] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:12:37.736 [2024-12-06 21:34:58.136813] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000009980
00:12:37.736 [2024-12-06 21:34:58.136827] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed
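
The verify stage switches from fio to the bdevperf example app, driving the same JSON bdev configuration. A sketch of the invocation recorded above, with the common flags glossed (-C is passed through exactly as recorded; see bdevperf usage for its meaning):

# -q 128: 128 outstanding I/Os per job; -o 4096: 4 KiB I/O size;
# -w verify: write, read back, and compare; -t 5: run for 5 seconds;
# -m 0x3: core mask for cores 0 and 1 (the two reactors started above).
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
    --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
    -q 128 -o 4096 -w verify -t 5 -C -m 0x3
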
21:34:58.139341] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:37.736 [2024-12-06 21:34:58.139381] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT 00:12:37.995 Running I/O for 5 seconds... 00:12:43.279 00:12:43.279 Latency(us) 00:12:43.279 [2024-12-06T21:35:03.776Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:43.279 [2024-12-06T21:35:03.776Z] Job: Malloc0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:12:43.279 Verification LBA range: start 0x0 length 0x1000 00:12:43.279 Malloc0 : 5.17 1632.22 6.38 0.00 0.00 78052.23 2025.66 136314.88 00:12:43.279 [2024-12-06T21:35:03.776Z] Job: Malloc0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:12:43.279 Verification LBA range: start 0x1000 length 0x1000 00:12:43.279 Malloc0 : 5.16 1624.59 6.35 0.00 0.00 77485.28 2189.50 112483.61 00:12:43.279 [2024-12-06T21:35:03.777Z] Job: Malloc1p0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:12:43.280 Verification LBA range: start 0x0 length 0x800 00:12:43.280 Malloc1p0 : 5.17 1113.70 4.35 0.00 0.00 114287.09 3649.16 129642.12 00:12:43.280 [2024-12-06T21:35:03.777Z] Job: Malloc1p0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:12:43.280 Verification LBA range: start 0x800 length 0x800 00:12:43.280 Malloc1p0 : 5.17 1115.58 4.36 0.00 0.00 112920.52 3961.95 107240.73 00:12:43.280 [2024-12-06T21:35:03.777Z] Job: Malloc1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:12:43.280 Verification LBA range: start 0x0 length 0x800 00:12:43.280 Malloc1p1 : 5.18 1113.38 4.35 0.00 0.00 114152.14 3321.48 126782.37 00:12:43.280 [2024-12-06T21:35:03.777Z] Job: Malloc1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:12:43.280 Verification LBA range: start 0x800 length 0x800 00:12:43.280 Malloc1p1 : 5.17 1115.28 4.36 0.00 0.00 112759.36 4110.89 102951.10 00:12:43.280 [2024-12-06T21:35:03.777Z] Job: Malloc2p0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:12:43.280 Verification LBA range: start 0x0 length 0x200 00:12:43.280 Malloc2p0 : 5.18 1113.07 4.35 0.00 0.00 114027.71 3619.37 122969.37 00:12:43.280 [2024-12-06T21:35:03.777Z] Job: Malloc2p0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:12:43.280 Verification LBA range: start 0x200 length 0x200 00:12:43.280 Malloc2p0 : 5.17 1114.97 4.36 0.00 0.00 112596.84 4200.26 99138.09 00:12:43.280 [2024-12-06T21:35:03.777Z] Job: Malloc2p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:12:43.280 Verification LBA range: start 0x0 length 0x200 00:12:43.280 Malloc2p1 : 5.18 1112.74 4.35 0.00 0.00 113890.39 3813.00 119632.99 00:12:43.280 [2024-12-06T21:35:03.777Z] Job: Malloc2p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:12:43.280 Verification LBA range: start 0x200 length 0x200 00:12:43.280 Malloc2p1 : 5.17 1114.66 4.35 0.00 0.00 112448.37 3902.37 95325.09 00:12:43.280 [2024-12-06T21:35:03.777Z] Job: Malloc2p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:12:43.280 Verification LBA range: start 0x0 length 0x200 00:12:43.280 Malloc2p2 : 5.18 1112.46 4.35 0.00 0.00 113756.90 3723.64 115819.99 00:12:43.280 [2024-12-06T21:35:03.777Z] Job: Malloc2p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:12:43.280 Verification LBA range: start 0x200 length 0x200 00:12:43.280 Malloc2p2 : 5.17 1114.36 4.35 0.00 0.00 112280.86 3902.37 91512.09 00:12:43.280 [2024-12-06T21:35:03.777Z] Job: Malloc2p3 
(Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:12:43.280 Verification LBA range: start 0x0 length 0x200 00:12:43.280 Malloc2p3 : 5.18 1111.89 4.34 0.00 0.00 113633.86 3425.75 112960.23 00:12:43.280 [2024-12-06T21:35:03.777Z] Job: Malloc2p3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:12:43.280 Verification LBA range: start 0x200 length 0x200 00:12:43.280 Malloc2p3 : 5.18 1128.23 4.41 0.00 0.00 111111.52 3991.74 88652.33 00:12:43.280 [2024-12-06T21:35:03.777Z] Job: Malloc2p4 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:12:43.280 Verification LBA range: start 0x0 length 0x200 00:12:43.280 Malloc2p4 : 5.19 1111.28 4.34 0.00 0.00 113504.39 3589.59 109623.85 00:12:43.280 [2024-12-06T21:35:03.777Z] Job: Malloc2p4 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:12:43.280 Verification LBA range: start 0x200 length 0x200 00:12:43.280 Malloc2p4 : 5.18 1127.68 4.40 0.00 0.00 110956.61 3902.37 85792.58 00:12:43.280 [2024-12-06T21:35:04.068Z] Job: Malloc2p5 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:12:43.571 Verification LBA range: start 0x0 length 0x200 00:12:43.571 Malloc2p5 : 5.19 1110.60 4.34 0.00 0.00 113371.29 4259.84 105810.85 00:12:43.571 [2024-12-06T21:35:04.068Z] Job: Malloc2p5 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:12:43.571 Verification LBA range: start 0x200 length 0x200 00:12:43.571 Malloc2p5 : 5.19 1127.08 4.40 0.00 0.00 110830.78 3991.74 87222.46 00:12:43.571 [2024-12-06T21:35:04.068Z] Job: Malloc2p6 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:12:43.571 Verification LBA range: start 0x0 length 0x200 00:12:43.571 Malloc2p6 : 5.19 1110.01 4.34 0.00 0.00 113209.69 4170.47 101521.22 00:12:43.571 [2024-12-06T21:35:04.068Z] Job: Malloc2p6 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:12:43.571 Verification LBA range: start 0x200 length 0x200 00:12:43.571 Malloc2p6 : 5.19 1126.43 4.40 0.00 0.00 110685.12 3559.80 88175.71 00:12:43.571 [2024-12-06T21:35:04.068Z] Job: Malloc2p7 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:12:43.571 Verification LBA range: start 0x0 length 0x200 00:12:43.571 Malloc2p7 : 5.19 1109.73 4.33 0.00 0.00 113036.66 4200.26 97708.22 00:12:43.571 [2024-12-06T21:35:04.068Z] Job: Malloc2p7 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:12:43.571 Verification LBA range: start 0x200 length 0x200 00:12:43.571 Malloc2p7 : 5.16 1117.63 4.37 0.00 0.00 113919.90 3813.00 128688.87 00:12:43.571 [2024-12-06T21:35:04.068Z] Job: TestPT (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:12:43.571 Verification LBA range: start 0x0 length 0x1000 00:12:43.571 TestPT : 5.19 1109.45 4.33 0.00 0.00 112873.68 4200.26 93895.21 00:12:43.571 [2024-12-06T21:35:04.068Z] Job: TestPT (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:12:43.571 Verification LBA range: start 0x1000 length 0x1000 00:12:43.571 TestPT : 5.16 1103.56 4.31 0.00 0.00 115125.50 5510.98 128688.87 00:12:43.571 [2024-12-06T21:35:04.068Z] Job: raid0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:12:43.571 Verification LBA range: start 0x0 length 0x2000 00:12:43.571 raid0 : 5.20 1109.18 4.33 0.00 0.00 112715.63 4200.26 90558.84 00:12:43.571 [2024-12-06T21:35:04.068Z] Job: raid0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:12:43.571 Verification LBA range: start 0x2000 length 0x2000 00:12:43.571 raid0 : 5.16 1117.01 4.36 0.00 0.00 113634.91 3813.00 122969.37 00:12:43.571 
[2024-12-06T21:35:04.068Z] Job: concat0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:12:43.571 Verification LBA range: start 0x0 length 0x2000 00:12:43.571 concat0 : 5.20 1108.92 4.33 0.00 0.00 112549.14 4259.84 87222.46 00:12:43.571 [2024-12-06T21:35:04.068Z] Job: concat0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:12:43.571 Verification LBA range: start 0x2000 length 0x2000 00:12:43.571 concat0 : 5.16 1116.71 4.36 0.00 0.00 113499.46 3693.85 119632.99 00:12:43.571 [2024-12-06T21:35:04.068Z] Job: raid1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:12:43.571 Verification LBA range: start 0x0 length 0x1000 00:12:43.571 raid1 : 5.20 1108.63 4.33 0.00 0.00 112373.39 4885.41 88175.71 00:12:43.571 [2024-12-06T21:35:04.068Z] Job: raid1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:12:43.571 Verification LBA range: start 0x1000 length 0x1000 00:12:43.571 raid1 : 5.16 1116.39 4.36 0.00 0.00 113340.69 4259.84 115819.99 00:12:43.571 [2024-12-06T21:35:04.068Z] Job: AIO0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:12:43.571 Verification LBA range: start 0x0 length 0x4e2 00:12:43.571 AIO0 : 5.20 1108.00 4.33 0.00 0.00 112260.49 4289.63 89128.96 00:12:43.571 [2024-12-06T21:35:04.068Z] Job: AIO0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:12:43.571 Verification LBA range: start 0x4e2 length 0x4e2 00:12:43.571 AIO0 : 5.16 1115.94 4.36 0.00 0.00 113171.60 4081.11 111530.36 00:12:43.571 [2024-12-06T21:35:04.068Z] =================================================================================================================== 00:12:43.571 [2024-12-06T21:35:04.068Z] Total : 36691.35 143.33 0.00 0.00 109841.69 2025.66 136314.88 00:12:45.501 00:12:45.501 real 0m8.714s 00:12:45.501 user 0m15.687s 00:12:45.501 sys 0m0.577s 00:12:45.501 21:35:05 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:12:45.501 21:35:05 -- common/autotest_common.sh@10 -- # set +x 00:12:45.501 ************************************ 00:12:45.501 END TEST bdev_verify 00:12:45.501 ************************************ 00:12:45.502 21:35:05 -- bdev/blockdev.sh@776 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:12:45.502 21:35:05 -- common/autotest_common.sh@1087 -- # '[' 16 -le 1 ']' 00:12:45.502 21:35:05 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:45.502 21:35:05 -- common/autotest_common.sh@10 -- # set +x 00:12:45.502 ************************************ 00:12:45.502 START TEST bdev_verify_big_io 00:12:45.502 ************************************ 00:12:45.502 21:35:05 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:12:45.502 [2024-12-06 21:35:05.894136] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:12:45.502 [2024-12-06 21:35:05.894306] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67266 ] 00:12:45.760 [2024-12-06 21:35:06.063414] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:12:45.760 [2024-12-06 21:35:06.237639] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:45.760 [2024-12-06 21:35:06.237658] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:46.326 [2024-12-06 21:35:06.572112] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:12:46.326 [2024-12-06 21:35:06.572205] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:12:46.326 [2024-12-06 21:35:06.580071] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:12:46.326 [2024-12-06 21:35:06.580125] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:12:46.326 [2024-12-06 21:35:06.588093] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:12:46.326 [2024-12-06 21:35:06.588160] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:12:46.326 [2024-12-06 21:35:06.588177] vbdev_passthru.c: 731:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:12:46.326 [2024-12-06 21:35:06.753162] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:12:46.326 [2024-12-06 21:35:06.753260] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:46.326 [2024-12-06 21:35:06.753287] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000009980 00:12:46.326 [2024-12-06 21:35:06.753300] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:46.326 [2024-12-06 21:35:06.755895] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:46.326 [2024-12-06 21:35:06.755957] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT 00:12:46.584 [2024-12-06 21:35:07.067683] bdevperf.c:1807:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p0 simultaneously (32). Queue depth is limited to 32 00:12:46.584 [2024-12-06 21:35:07.070682] bdevperf.c:1807:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p0 simultaneously (32). Queue depth is limited to 32 00:12:46.584 [2024-12-06 21:35:07.074043] bdevperf.c:1807:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p1 simultaneously (32). Queue depth is limited to 32 00:12:46.584 [2024-12-06 21:35:07.077455] bdevperf.c:1807:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p1 simultaneously (32). 
Queue depth is limited to 32 00:12:46.584 [2024-12-06 21:35:07.080679] bdevperf.c:1807:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p2 simultaneously (32). Queue depth is limited to 32 00:12:46.843 [2024-12-06 21:35:07.084417] bdevperf.c:1807:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p2 simultaneously (32). Queue depth is limited to 32 00:12:46.843 [2024-12-06 21:35:07.087359] bdevperf.c:1807:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p3 simultaneously (32). Queue depth is limited to 32 00:12:46.843 [2024-12-06 21:35:07.090689] bdevperf.c:1807:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p3 simultaneously (32). Queue depth is limited to 32 00:12:46.843 [2024-12-06 21:35:07.093654] bdevperf.c:1807:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p4 simultaneously (32). Queue depth is limited to 32 00:12:46.843 [2024-12-06 21:35:07.097005] bdevperf.c:1807:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p4 simultaneously (32). Queue depth is limited to 32 00:12:46.843 [2024-12-06 21:35:07.099927] bdevperf.c:1807:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p5 simultaneously (32). Queue depth is limited to 32 00:12:46.843 [2024-12-06 21:35:07.103169] bdevperf.c:1807:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p5 simultaneously (32). Queue depth is limited to 32 00:12:46.843 [2024-12-06 21:35:07.106139] bdevperf.c:1807:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p6 simultaneously (32). Queue depth is limited to 32 00:12:46.843 [2024-12-06 21:35:07.109548] bdevperf.c:1807:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p6 simultaneously (32). Queue depth is limited to 32 00:12:46.843 [2024-12-06 21:35:07.112861] bdevperf.c:1807:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p7 simultaneously (32). Queue depth is limited to 32 00:12:46.843 [2024-12-06 21:35:07.115851] bdevperf.c:1807:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p7 simultaneously (32). 
Queue depth is limited to 32 00:12:46.843 [2024-12-06 21:35:07.187984] bdevperf.c:1807:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev AIO0 simultaneously (78). Queue depth is limited to 78 00:12:46.843 [2024-12-06 21:35:07.193890] bdevperf.c:1807:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev AIO0 simultaneously (78). Queue depth is limited to 78 00:12:46.843 Running I/O for 5 seconds... 00:12:53.398 00:12:53.398 Latency(us) 00:12:53.398 [2024-12-06T21:35:13.895Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:53.398 [2024-12-06T21:35:13.895Z] Job: Malloc0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:12:53.398 Verification LBA range: start 0x0 length 0x100 00:12:53.398 Malloc0 : 5.78 279.70 17.48 0.00 0.00 445074.06 32648.84 1212535.16 00:12:53.398 [2024-12-06T21:35:13.895Z] Job: Malloc0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:12:53.398 Verification LBA range: start 0x100 length 0x100 00:12:53.398 Malloc0 : 5.73 305.38 19.09 0.00 0.00 412089.92 27405.96 1288795.23 00:12:53.398 [2024-12-06T21:35:13.896Z] Job: Malloc1p0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:12:53.399 Verification LBA range: start 0x0 length 0x80 00:12:53.399 Malloc1p0 : 6.09 98.34 6.15 0.00 0.00 1229363.66 59101.56 2531834.41 00:12:53.399 [2024-12-06T21:35:13.896Z] Job: Malloc1p0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:12:53.399 Verification LBA range: start 0x80 length 0x80 00:12:53.399 Malloc1p0 : 5.73 240.33 15.02 0.00 0.00 517591.10 48139.17 842673.80 00:12:53.399 [2024-12-06T21:35:13.896Z] Job: Malloc1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:12:53.399 Verification LBA range: start 0x0 length 0x80 00:12:53.399 Malloc1p1 : 6.09 98.32 6.14 0.00 0.00 1200386.78 56956.74 2516582.40 00:12:53.399 [2024-12-06T21:35:13.896Z] Job: Malloc1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:12:53.399 Verification LBA range: start 0x80 length 0x80 00:12:53.399 Malloc1p1 : 5.91 113.07 7.07 0.00 0.00 1067290.51 47900.86 2303054.20 00:12:53.399 [2024-12-06T21:35:13.896Z] Job: Malloc2p0 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536) 00:12:53.399 Verification LBA range: start 0x0 length 0x20 00:12:53.399 Malloc2p0 : 5.79 53.04 3.32 0.00 0.00 554965.33 10366.60 953250.91 00:12:53.399 [2024-12-06T21:35:13.896Z] Job: Malloc2p0 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536) 00:12:53.399 Verification LBA range: start 0x20 length 0x20 00:12:53.399 Malloc2p0 : 5.73 60.89 3.81 0.00 0.00 493147.90 7923.90 747348.71 00:12:53.399 [2024-12-06T21:35:13.896Z] Job: Malloc2p1 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536) 00:12:53.399 Verification LBA range: start 0x0 length 0x20 00:12:53.399 Malloc2p1 : 5.79 53.03 3.31 0.00 0.00 551529.59 10247.45 930372.89 00:12:53.399 [2024-12-06T21:35:13.896Z] Job: Malloc2p1 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536) 00:12:53.399 Verification LBA range: start 0x20 length 0x20 00:12:53.399 Malloc2p1 : 5.73 60.87 3.80 0.00 0.00 490946.41 7923.90 728283.69 00:12:53.399 [2024-12-06T21:35:13.896Z] Job: Malloc2p2 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536) 00:12:53.399 Verification LBA range: start 0x0 length 0x20 00:12:53.399 Malloc2p2 : 5.79 
53.02 3.31 0.00 0.00 548321.97 9234.62 911307.87 00:12:53.399 [2024-12-06T21:35:13.896Z] Job: Malloc2p2 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536) 00:12:53.399 Verification LBA range: start 0x20 length 0x20 00:12:53.399 Malloc2p2 : 5.73 60.86 3.80 0.00 0.00 488744.58 8519.68 713031.68 00:12:53.399 [2024-12-06T21:35:13.896Z] Job: Malloc2p3 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536) 00:12:53.399 Verification LBA range: start 0x0 length 0x20 00:12:53.399 Malloc2p3 : 5.79 53.00 3.31 0.00 0.00 545131.05 10843.23 888429.85 00:12:53.399 [2024-12-06T21:35:13.896Z] Job: Malloc2p3 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536) 00:12:53.399 Verification LBA range: start 0x20 length 0x20 00:12:53.399 Malloc2p3 : 5.74 60.85 3.80 0.00 0.00 486458.59 7864.32 697779.67 00:12:53.399 [2024-12-06T21:35:13.896Z] Job: Malloc2p4 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536) 00:12:53.399 Verification LBA range: start 0x0 length 0x20 00:12:53.399 Malloc2p4 : 5.89 55.66 3.48 0.00 0.00 519298.60 11736.90 869364.83 00:12:53.399 [2024-12-06T21:35:13.896Z] Job: Malloc2p4 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536) 00:12:53.399 Verification LBA range: start 0x20 length 0x20 00:12:53.399 Malloc2p4 : 5.74 60.83 3.80 0.00 0.00 484103.37 8281.37 678714.65 00:12:53.399 [2024-12-06T21:35:13.896Z] Job: Malloc2p5 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536) 00:12:53.399 Verification LBA range: start 0x0 length 0x20 00:12:53.399 Malloc2p5 : 5.89 55.65 3.48 0.00 0.00 515923.44 10187.87 850299.81 00:12:53.399 [2024-12-06T21:35:13.896Z] Job: Malloc2p5 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536) 00:12:53.399 Verification LBA range: start 0x20 length 0x20 00:12:53.399 Malloc2p5 : 5.74 60.82 3.80 0.00 0.00 481695.92 8460.10 663462.63 00:12:53.399 [2024-12-06T21:35:13.896Z] Job: Malloc2p6 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536) 00:12:53.399 Verification LBA range: start 0x0 length 0x20 00:12:53.399 Malloc2p6 : 5.90 55.64 3.48 0.00 0.00 513151.15 10902.81 831234.79 00:12:53.399 [2024-12-06T21:35:13.896Z] Job: Malloc2p6 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536) 00:12:53.399 Verification LBA range: start 0x20 length 0x20 00:12:53.399 Malloc2p6 : 5.74 60.81 3.80 0.00 0.00 479550.16 9115.46 644397.61 00:12:53.399 [2024-12-06T21:35:13.896Z] Job: Malloc2p7 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536) 00:12:53.399 Verification LBA range: start 0x0 length 0x20 00:12:53.399 Malloc2p7 : 5.90 55.62 3.48 0.00 0.00 510170.15 11081.54 808356.77 00:12:53.399 [2024-12-06T21:35:13.896Z] Job: Malloc2p7 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536) 00:12:53.399 Verification LBA range: start 0x20 length 0x20 00:12:53.399 Malloc2p7 : 5.74 60.79 3.80 0.00 0.00 477229.60 8817.57 629145.60 00:12:53.399 [2024-12-06T21:35:13.896Z] Job: TestPT (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:12:53.399 Verification LBA range: start 0x0 length 0x100 00:12:53.399 TestPT : 6.20 102.15 6.38 0.00 0.00 1080227.54 63867.81 2501330.39 00:12:53.399 [2024-12-06T21:35:13.896Z] Job: TestPT (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:12:53.399 Verification LBA range: start 0x100 length 0x100 00:12:53.399 TestPT : 5.89 109.03 6.81 0.00 0.00 1045656.11 61008.06 2257298.15 00:12:53.399 [2024-12-06T21:35:13.896Z] Job: raid0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:12:53.399 Verification LBA range: start 0x0 length 0x200 00:12:53.399 
raid0 : 6.02 111.31 6.96 0.00 0.00 984318.89 55765.18 2486078.37 00:12:53.399 [2024-12-06T21:35:13.896Z] Job: raid0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:12:53.399 Verification LBA range: start 0x200 length 0x200 00:12:53.399 raid0 : 5.95 116.87 7.30 0.00 0.00 963177.41 50522.30 2287802.18 00:12:53.399 [2024-12-06T21:35:13.896Z] Job: concat0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:12:53.399 Verification LBA range: start 0x0 length 0x200 00:12:53.399 concat0 : 6.06 114.72 7.17 0.00 0.00 933188.75 45517.73 2486078.37 00:12:53.399 [2024-12-06T21:35:13.896Z] Job: concat0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:12:53.399 Verification LBA range: start 0x200 length 0x200 00:12:53.399 concat0 : 5.95 116.85 7.30 0.00 0.00 942513.98 50045.67 2287802.18 00:12:53.399 [2024-12-06T21:35:13.896Z] Job: raid1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:12:53.399 Verification LBA range: start 0x0 length 0x100 00:12:53.399 raid1 : 6.12 197.79 12.36 0.00 0.00 532998.18 17873.45 2242046.14 00:12:53.399 [2024-12-06T21:35:13.896Z] Job: raid1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:12:53.399 Verification LBA range: start 0x100 length 0x100 00:12:53.399 raid1 : 5.95 133.70 8.36 0.00 0.00 815792.89 21328.99 2303054.20 00:12:53.399 [2024-12-06T21:35:13.896Z] Job: AIO0 (Core Mask 0x1, workload: verify, depth: 78, IO size: 65536) 00:12:53.399 Verification LBA range: start 0x0 length 0x4e 00:12:53.399 AIO0 : 6.16 159.20 9.95 0.00 0.00 396942.50 1541.59 1456567.39 00:12:53.399 [2024-12-06T21:35:13.896Z] Job: AIO0 (Core Mask 0x2, workload: verify, depth: 78, IO size: 65536) 00:12:53.399 Verification LBA range: start 0x4e length 0x4e 00:12:53.399 AIO0 : 5.95 143.65 8.98 0.00 0.00 457906.31 1362.85 1319299.26 00:12:53.399 [2024-12-06T21:35:13.896Z] =================================================================================================================== 00:12:53.399 [2024-12-06T21:35:13.896Z] Total : 3361.79 210.11 0.00 0.00 660651.55 1362.85 2531834.41 00:12:55.299 00:12:55.299 real 0m9.922s 00:12:55.299 user 0m18.311s 00:12:55.299 sys 0m0.549s 00:12:55.299 21:35:15 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:12:55.299 21:35:15 -- common/autotest_common.sh@10 -- # set +x 00:12:55.299 ************************************ 00:12:55.299 END TEST bdev_verify_big_io 00:12:55.299 ************************************ 00:12:55.299 21:35:15 -- bdev/blockdev.sh@777 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:12:55.299 21:35:15 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:12:55.299 21:35:15 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:55.299 21:35:15 -- common/autotest_common.sh@10 -- # set +x 00:12:55.557 ************************************ 00:12:55.557 START TEST bdev_write_zeroes 00:12:55.557 ************************************ 00:12:55.557 21:35:15 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:12:55.557 [2024-12-06 21:35:15.851171] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:12:55.557 [2024-12-06 21:35:15.851340] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67392 ] 00:12:55.557 [2024-12-06 21:35:16.005206] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:55.816 [2024-12-06 21:35:16.177888] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:56.075 [2024-12-06 21:35:16.501070] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:12:56.075 [2024-12-06 21:35:16.501165] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:12:56.075 [2024-12-06 21:35:16.509057] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:12:56.075 [2024-12-06 21:35:16.509123] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:12:56.075 [2024-12-06 21:35:16.517078] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:12:56.075 [2024-12-06 21:35:16.517137] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:12:56.075 [2024-12-06 21:35:16.517152] vbdev_passthru.c: 731:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:12:56.334 [2024-12-06 21:35:16.687348] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:12:56.334 [2024-12-06 21:35:16.687434] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:56.334 [2024-12-06 21:35:16.687478] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000009980 00:12:56.334 [2024-12-06 21:35:16.687492] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:56.334 [2024-12-06 21:35:16.689931] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:56.334 [2024-12-06 21:35:16.690006] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT 00:12:56.593 Running I/O for 1 seconds... 
00:12:57.969 00:12:57.969 Latency(us) 00:12:57.969 [2024-12-06T21:35:18.466Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:57.969 [2024-12-06T21:35:18.466Z] Job: Malloc0 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:12:57.969 Malloc0 : 1.04 5434.28 21.23 0.00 0.00 23535.37 659.08 37891.72 00:12:57.969 [2024-12-06T21:35:18.466Z] Job: Malloc1p0 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:12:57.969 Malloc1p0 : 1.04 5426.96 21.20 0.00 0.00 23534.28 737.28 37415.10 00:12:57.969 [2024-12-06T21:35:18.466Z] Job: Malloc1p1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:12:57.969 Malloc1p1 : 1.04 5419.98 21.17 0.00 0.00 23525.76 759.62 36700.16 00:12:57.969 [2024-12-06T21:35:18.466Z] Job: Malloc2p0 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:12:57.969 Malloc2p0 : 1.04 5412.84 21.14 0.00 0.00 23508.53 718.66 35985.22 00:12:57.969 [2024-12-06T21:35:18.466Z] Job: Malloc2p1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:12:57.969 Malloc2p1 : 1.04 5405.58 21.12 0.00 0.00 23496.22 733.56 35270.28 00:12:57.969 [2024-12-06T21:35:18.466Z] Job: Malloc2p2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:12:57.969 Malloc2p2 : 1.04 5398.36 21.09 0.00 0.00 23485.32 737.28 34555.35 00:12:57.969 [2024-12-06T21:35:18.466Z] Job: Malloc2p3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:12:57.969 Malloc2p3 : 1.04 5391.34 21.06 0.00 0.00 23469.23 718.66 33602.09 00:12:57.969 [2024-12-06T21:35:18.466Z] Job: Malloc2p4 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:12:57.969 Malloc2p4 : 1.05 5384.49 21.03 0.00 0.00 23457.00 711.21 32887.16 00:12:57.969 [2024-12-06T21:35:18.466Z] Job: Malloc2p5 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:12:57.969 Malloc2p5 : 1.05 5377.82 21.01 0.00 0.00 23446.58 722.39 32410.53 00:12:57.969 [2024-12-06T21:35:18.466Z] Job: Malloc2p6 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:12:57.969 Malloc2p6 : 1.05 5371.08 20.98 0.00 0.00 23433.16 767.07 31695.59 00:12:57.969 [2024-12-06T21:35:18.466Z] Job: Malloc2p7 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:12:57.969 Malloc2p7 : 1.05 5364.09 20.95 0.00 0.00 23418.68 729.83 30980.65 00:12:57.969 [2024-12-06T21:35:18.466Z] Job: TestPT (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:12:57.969 TestPT : 1.05 5357.21 20.93 0.00 0.00 23412.19 755.90 30027.40 00:12:57.969 [2024-12-06T21:35:18.466Z] Job: raid0 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:12:57.969 raid0 : 1.05 5348.92 20.89 0.00 0.00 23387.12 1459.67 28597.53 00:12:57.969 [2024-12-06T21:35:18.466Z] Job: concat0 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:12:57.969 concat0 : 1.05 5341.22 20.86 0.00 0.00 23345.90 1437.32 27167.65 00:12:57.969 [2024-12-06T21:35:18.466Z] Job: raid1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:12:57.969 raid1 : 1.06 5331.56 20.83 0.00 0.00 23299.24 2353.34 26452.71 00:12:57.969 [2024-12-06T21:35:18.466Z] Job: AIO0 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:12:57.969 AIO0 : 1.06 5418.36 21.17 0.00 0.00 22816.65 558.55 26571.87 00:12:57.969 [2024-12-06T21:35:18.466Z] =================================================================================================================== 00:12:57.969 [2024-12-06T21:35:18.466Z] Total : 86184.11 336.66 0.00 
0.00 23409.97 558.55 37891.72 00:12:59.871 00:12:59.872 real 0m4.205s 00:12:59.872 user 0m3.676s 00:12:59.872 sys 0m0.376s 00:12:59.872 21:35:20 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:12:59.872 21:35:20 -- common/autotest_common.sh@10 -- # set +x 00:12:59.872 ************************************ 00:12:59.872 END TEST bdev_write_zeroes 00:12:59.872 ************************************ 00:12:59.872 21:35:20 -- bdev/blockdev.sh@780 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:12:59.872 21:35:20 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:12:59.872 21:35:20 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:59.872 21:35:20 -- common/autotest_common.sh@10 -- # set +x 00:12:59.872 ************************************ 00:12:59.872 START TEST bdev_json_nonenclosed 00:12:59.872 ************************************ 00:12:59.872 21:35:20 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:12:59.872 [2024-12-06 21:35:20.120323] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:12:59.872 [2024-12-06 21:35:20.120516] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67455 ] 00:12:59.872 [2024-12-06 21:35:20.293533] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:00.129 [2024-12-06 21:35:20.513985] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:00.129 [2024-12-06 21:35:20.514200] json_config.c: 595:spdk_subsystem_init_from_json_config: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:13:00.129 [2024-12-06 21:35:20.514225] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:13:00.695 00:13:00.695 real 0m0.841s 00:13:00.695 user 0m0.613s 00:13:00.695 sys 0m0.128s 00:13:00.695 21:35:20 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:13:00.695 21:35:20 -- common/autotest_common.sh@10 -- # set +x 00:13:00.695 ************************************ 00:13:00.695 END TEST bdev_json_nonenclosed 00:13:00.695 ************************************ 00:13:00.695 21:35:20 -- bdev/blockdev.sh@783 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:13:00.695 21:35:20 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:13:00.695 21:35:20 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:00.695 21:35:20 -- common/autotest_common.sh@10 -- # set +x 00:13:00.695 ************************************ 00:13:00.695 START TEST bdev_json_nonarray 00:13:00.695 ************************************ 00:13:00.695 21:35:20 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:13:00.695 [2024-12-06 21:35:21.014687] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:13:00.695 [2024-12-06 21:35:21.014854] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67482 ] 00:13:00.695 [2024-12-06 21:35:21.184018] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:00.953 [2024-12-06 21:35:21.365732] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:00.953 [2024-12-06 21:35:21.365977] json_config.c: 601:spdk_subsystem_init_from_json_config: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 00:13:00.954 [2024-12-06 21:35:21.366021] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:13:01.522 00:13:01.522 real 0m0.799s 00:13:01.522 user 0m0.583s 00:13:01.522 sys 0m0.115s 00:13:01.522 21:35:21 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:13:01.522 21:35:21 -- common/autotest_common.sh@10 -- # set +x 00:13:01.522 ************************************ 00:13:01.522 END TEST bdev_json_nonarray 00:13:01.522 ************************************ 00:13:01.522 21:35:21 -- bdev/blockdev.sh@785 -- # [[ bdev == bdev ]] 00:13:01.522 21:35:21 -- bdev/blockdev.sh@786 -- # run_test bdev_qos qos_test_suite '' 00:13:01.522 21:35:21 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:13:01.522 21:35:21 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:01.522 21:35:21 -- common/autotest_common.sh@10 -- # set +x 00:13:01.522 ************************************ 00:13:01.522 START TEST bdev_qos 00:13:01.522 ************************************ 00:13:01.522 21:35:21 -- common/autotest_common.sh@1114 -- # qos_test_suite '' 00:13:01.522 21:35:21 -- bdev/blockdev.sh@444 -- # QOS_PID=67507 00:13:01.522 Process qos testing pid: 67507 00:13:01.522 21:35:21 -- bdev/blockdev.sh@445 -- # echo 'Process qos testing pid: 67507' 00:13:01.522 21:35:21 -- bdev/blockdev.sh@446 -- # trap 'cleanup; killprocess $QOS_PID; exit 1' SIGINT SIGTERM EXIT 00:13:01.522 21:35:21 -- bdev/blockdev.sh@447 -- # waitforlisten 67507 00:13:01.522 21:35:21 -- bdev/blockdev.sh@443 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -m 0x2 -q 256 -o 4096 -w randread -t 60 '' 00:13:01.522 21:35:21 -- common/autotest_common.sh@829 -- # '[' -z 67507 ']' 00:13:01.522 21:35:21 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:01.522 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:01.522 21:35:21 -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:01.522 21:35:21 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:01.522 21:35:21 -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:01.522 21:35:21 -- common/autotest_common.sh@10 -- # set +x 00:13:01.523 [2024-12-06 21:35:21.861196] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:13:01.523 [2024-12-06 21:35:21.861587] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67507 ] 00:13:01.782 [2024-12-06 21:35:22.025345] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:01.782 [2024-12-06 21:35:22.256648] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:02.348 21:35:22 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:02.348 21:35:22 -- common/autotest_common.sh@862 -- # return 0 00:13:02.348 21:35:22 -- bdev/blockdev.sh@449 -- # rpc_cmd bdev_malloc_create -b Malloc_0 128 512 00:13:02.348 21:35:22 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:02.348 21:35:22 -- common/autotest_common.sh@10 -- # set +x 00:13:02.606 Malloc_0 00:13:02.606 21:35:22 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:02.606 21:35:22 -- bdev/blockdev.sh@450 -- # waitforbdev Malloc_0 00:13:02.606 21:35:22 -- common/autotest_common.sh@897 -- # local bdev_name=Malloc_0 00:13:02.606 21:35:22 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:13:02.606 21:35:22 -- common/autotest_common.sh@899 -- # local i 00:13:02.606 21:35:22 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:13:02.606 21:35:22 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:13:02.606 21:35:22 -- common/autotest_common.sh@902 -- # rpc_cmd bdev_wait_for_examine 00:13:02.606 21:35:22 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:02.606 21:35:22 -- common/autotest_common.sh@10 -- # set +x 00:13:02.606 21:35:22 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:02.606 21:35:22 -- common/autotest_common.sh@904 -- # rpc_cmd bdev_get_bdevs -b Malloc_0 -t 2000 00:13:02.606 21:35:22 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:02.606 21:35:22 -- common/autotest_common.sh@10 -- # set +x 00:13:02.606 [ 00:13:02.606 { 00:13:02.606 "name": "Malloc_0", 00:13:02.606 "aliases": [ 00:13:02.606 "f1389765-e11e-4f4e-b23d-3f499526fd29" 00:13:02.606 ], 00:13:02.606 "product_name": "Malloc disk", 00:13:02.606 "block_size": 512, 00:13:02.606 "num_blocks": 262144, 00:13:02.606 "uuid": "f1389765-e11e-4f4e-b23d-3f499526fd29", 00:13:02.606 "assigned_rate_limits": { 00:13:02.606 "rw_ios_per_sec": 0, 00:13:02.606 "rw_mbytes_per_sec": 0, 00:13:02.606 "r_mbytes_per_sec": 0, 00:13:02.606 "w_mbytes_per_sec": 0 00:13:02.606 }, 00:13:02.606 "claimed": false, 00:13:02.606 "zoned": false, 00:13:02.606 "supported_io_types": { 00:13:02.606 "read": true, 00:13:02.606 "write": true, 00:13:02.606 "unmap": true, 00:13:02.606 "write_zeroes": true, 00:13:02.606 "flush": true, 00:13:02.606 "reset": true, 00:13:02.606 "compare": false, 00:13:02.606 "compare_and_write": false, 00:13:02.606 "abort": true, 00:13:02.606 "nvme_admin": false, 00:13:02.606 "nvme_io": false 00:13:02.606 }, 00:13:02.606 "memory_domains": [ 00:13:02.606 { 00:13:02.606 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:02.606 "dma_device_type": 2 00:13:02.606 } 00:13:02.606 ], 00:13:02.606 "driver_specific": {} 00:13:02.606 } 00:13:02.606 ] 00:13:02.606 21:35:22 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:02.606 21:35:22 -- common/autotest_common.sh@905 -- # return 0 00:13:02.606 21:35:22 -- bdev/blockdev.sh@451 -- # rpc_cmd bdev_null_create Null_1 128 512 00:13:02.606 21:35:22 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:02.606 21:35:22 -- common/autotest_common.sh@10 -- # 
set +x 00:13:02.606 Null_1 00:13:02.606 21:35:22 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:02.606 21:35:22 -- bdev/blockdev.sh@452 -- # waitforbdev Null_1 00:13:02.606 21:35:22 -- common/autotest_common.sh@897 -- # local bdev_name=Null_1 00:13:02.606 21:35:22 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:13:02.606 21:35:22 -- common/autotest_common.sh@899 -- # local i 00:13:02.606 21:35:22 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:13:02.606 21:35:22 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:13:02.606 21:35:22 -- common/autotest_common.sh@902 -- # rpc_cmd bdev_wait_for_examine 00:13:02.606 21:35:22 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:02.607 21:35:22 -- common/autotest_common.sh@10 -- # set +x 00:13:02.607 21:35:22 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:02.607 21:35:22 -- common/autotest_common.sh@904 -- # rpc_cmd bdev_get_bdevs -b Null_1 -t 2000 00:13:02.607 21:35:22 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:02.607 21:35:22 -- common/autotest_common.sh@10 -- # set +x 00:13:02.607 [ 00:13:02.607 { 00:13:02.607 "name": "Null_1", 00:13:02.607 "aliases": [ 00:13:02.607 "5fe78a79-48c5-4d8b-ab4b-64b96ec753a8" 00:13:02.607 ], 00:13:02.607 "product_name": "Null disk", 00:13:02.607 "block_size": 512, 00:13:02.607 "num_blocks": 262144, 00:13:02.607 "uuid": "5fe78a79-48c5-4d8b-ab4b-64b96ec753a8", 00:13:02.607 "assigned_rate_limits": { 00:13:02.607 "rw_ios_per_sec": 0, 00:13:02.607 "rw_mbytes_per_sec": 0, 00:13:02.607 "r_mbytes_per_sec": 0, 00:13:02.607 "w_mbytes_per_sec": 0 00:13:02.607 }, 00:13:02.607 "claimed": false, 00:13:02.607 "zoned": false, 00:13:02.607 "supported_io_types": { 00:13:02.607 "read": true, 00:13:02.607 "write": true, 00:13:02.607 "unmap": false, 00:13:02.607 "write_zeroes": true, 00:13:02.607 "flush": false, 00:13:02.607 "reset": true, 00:13:02.607 "compare": false, 00:13:02.607 "compare_and_write": false, 00:13:02.607 "abort": true, 00:13:02.607 "nvme_admin": false, 00:13:02.607 "nvme_io": false 00:13:02.607 }, 00:13:02.607 "driver_specific": {} 00:13:02.607 } 00:13:02.607 ] 00:13:02.607 21:35:22 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:02.607 21:35:22 -- common/autotest_common.sh@905 -- # return 0 00:13:02.607 21:35:22 -- bdev/blockdev.sh@455 -- # qos_function_test 00:13:02.607 21:35:22 -- bdev/blockdev.sh@408 -- # local qos_lower_iops_limit=1000 00:13:02.607 21:35:22 -- bdev/blockdev.sh@409 -- # local qos_lower_bw_limit=2 00:13:02.607 21:35:22 -- bdev/blockdev.sh@454 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:13:02.607 21:35:22 -- bdev/blockdev.sh@410 -- # local io_result=0 00:13:02.607 21:35:22 -- bdev/blockdev.sh@411 -- # local iops_limit=0 00:13:02.607 21:35:22 -- bdev/blockdev.sh@412 -- # local bw_limit=0 00:13:02.607 21:35:22 -- bdev/blockdev.sh@414 -- # get_io_result IOPS Malloc_0 00:13:02.607 21:35:22 -- bdev/blockdev.sh@373 -- # local limit_type=IOPS 00:13:02.607 21:35:22 -- bdev/blockdev.sh@374 -- # local qos_dev=Malloc_0 00:13:02.607 21:35:22 -- bdev/blockdev.sh@375 -- # local iostat_result 00:13:02.607 21:35:22 -- bdev/blockdev.sh@376 -- # /home/vagrant/spdk_repo/spdk/scripts/iostat.py -d -i 1 -t 5 00:13:02.607 21:35:22 -- bdev/blockdev.sh@376 -- # grep Malloc_0 00:13:02.607 21:35:22 -- bdev/blockdev.sh@376 -- # tail -1 00:13:02.607 Running I/O for 60 seconds... 
00:13:07.875 21:35:28 -- bdev/blockdev.sh@376 -- # iostat_result='Malloc_0 67688.07 270752.27 0.00 0.00 274432.00 0.00 0.00 ' 00:13:07.875 21:35:28 -- bdev/blockdev.sh@377 -- # '[' IOPS = IOPS ']' 00:13:07.875 21:35:28 -- bdev/blockdev.sh@378 -- # awk '{print $2}' 00:13:07.875 21:35:28 -- bdev/blockdev.sh@378 -- # iostat_result=67688.07 00:13:07.875 21:35:28 -- bdev/blockdev.sh@383 -- # echo 67688 00:13:07.875 21:35:28 -- bdev/blockdev.sh@414 -- # io_result=67688 00:13:07.875 21:35:28 -- bdev/blockdev.sh@416 -- # iops_limit=16000 00:13:07.875 21:35:28 -- bdev/blockdev.sh@417 -- # '[' 16000 -gt 1000 ']' 00:13:07.875 21:35:28 -- bdev/blockdev.sh@420 -- # rpc_cmd bdev_set_qos_limit --rw_ios_per_sec 16000 Malloc_0 00:13:07.875 21:35:28 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:07.875 21:35:28 -- common/autotest_common.sh@10 -- # set +x 00:13:07.875 21:35:28 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:07.875 21:35:28 -- bdev/blockdev.sh@421 -- # run_test bdev_qos_iops run_qos_test 16000 IOPS Malloc_0 00:13:07.875 21:35:28 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:13:07.875 21:35:28 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:07.875 21:35:28 -- common/autotest_common.sh@10 -- # set +x 00:13:07.875 ************************************ 00:13:07.875 START TEST bdev_qos_iops 00:13:07.875 ************************************ 00:13:07.875 21:35:28 -- common/autotest_common.sh@1114 -- # run_qos_test 16000 IOPS Malloc_0 00:13:07.875 21:35:28 -- bdev/blockdev.sh@387 -- # local qos_limit=16000 00:13:07.875 21:35:28 -- bdev/blockdev.sh@388 -- # local qos_result=0 00:13:07.875 21:35:28 -- bdev/blockdev.sh@390 -- # get_io_result IOPS Malloc_0 00:13:07.875 21:35:28 -- bdev/blockdev.sh@373 -- # local limit_type=IOPS 00:13:07.875 21:35:28 -- bdev/blockdev.sh@374 -- # local qos_dev=Malloc_0 00:13:07.875 21:35:28 -- bdev/blockdev.sh@375 -- # local iostat_result 00:13:07.875 21:35:28 -- bdev/blockdev.sh@376 -- # /home/vagrant/spdk_repo/spdk/scripts/iostat.py -d -i 1 -t 5 00:13:07.875 21:35:28 -- bdev/blockdev.sh@376 -- # tail -1 00:13:07.875 21:35:28 -- bdev/blockdev.sh@376 -- # grep Malloc_0 00:13:13.141 21:35:33 -- bdev/blockdev.sh@376 -- # iostat_result='Malloc_0 16025.19 64100.74 0.00 0.00 65152.00 0.00 0.00 ' 00:13:13.141 21:35:33 -- bdev/blockdev.sh@377 -- # '[' IOPS = IOPS ']' 00:13:13.141 21:35:33 -- bdev/blockdev.sh@378 -- # awk '{print $2}' 00:13:13.141 21:35:33 -- bdev/blockdev.sh@378 -- # iostat_result=16025.19 00:13:13.141 21:35:33 -- bdev/blockdev.sh@383 -- # echo 16025 00:13:13.141 21:35:33 -- bdev/blockdev.sh@390 -- # qos_result=16025 00:13:13.141 21:35:33 -- bdev/blockdev.sh@391 -- # '[' IOPS = BANDWIDTH ']' 00:13:13.141 21:35:33 -- bdev/blockdev.sh@394 -- # lower_limit=14400 00:13:13.141 21:35:33 -- bdev/blockdev.sh@395 -- # upper_limit=17600 00:13:13.141 21:35:33 -- bdev/blockdev.sh@398 -- # '[' 16025 -lt 14400 ']' 00:13:13.141 21:35:33 -- bdev/blockdev.sh@398 -- # '[' 16025 -gt 17600 ']' 00:13:13.141 00:13:13.141 real 0m5.230s 00:13:13.141 user 0m0.125s 00:13:13.141 sys 0m0.038s 00:13:13.141 21:35:33 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:13:13.141 21:35:33 -- common/autotest_common.sh@10 -- # set +x 00:13:13.141 ************************************ 00:13:13.141 END TEST bdev_qos_iops 00:13:13.141 ************************************ 00:13:13.141 21:35:33 -- bdev/blockdev.sh@425 -- # get_io_result BANDWIDTH Null_1 00:13:13.141 21:35:33 -- bdev/blockdev.sh@373 -- # local limit_type=BANDWIDTH 00:13:13.141 21:35:33 -- 
bdev/blockdev.sh@374 -- # local qos_dev=Null_1 00:13:13.141 21:35:33 -- bdev/blockdev.sh@375 -- # local iostat_result 00:13:13.141 21:35:33 -- bdev/blockdev.sh@376 -- # /home/vagrant/spdk_repo/spdk/scripts/iostat.py -d -i 1 -t 5 00:13:13.141 21:35:33 -- bdev/blockdev.sh@376 -- # grep Null_1 00:13:13.141 21:35:33 -- bdev/blockdev.sh@376 -- # tail -1 00:13:18.476 21:35:38 -- bdev/blockdev.sh@376 -- # iostat_result='Null_1 25786.09 103144.35 0.00 0.00 104448.00 0.00 0.00 ' 00:13:18.476 21:35:38 -- bdev/blockdev.sh@377 -- # '[' BANDWIDTH = IOPS ']' 00:13:18.476 21:35:38 -- bdev/blockdev.sh@379 -- # '[' BANDWIDTH = BANDWIDTH ']' 00:13:18.476 21:35:38 -- bdev/blockdev.sh@380 -- # awk '{print $6}' 00:13:18.476 21:35:38 -- bdev/blockdev.sh@380 -- # iostat_result=104448.00 00:13:18.476 21:35:38 -- bdev/blockdev.sh@383 -- # echo 104448 00:13:18.476 21:35:38 -- bdev/blockdev.sh@425 -- # bw_limit=104448 00:13:18.476 21:35:38 -- bdev/blockdev.sh@426 -- # bw_limit=10 00:13:18.476 21:35:38 -- bdev/blockdev.sh@427 -- # '[' 10 -lt 2 ']' 00:13:18.476 21:35:38 -- bdev/blockdev.sh@430 -- # rpc_cmd bdev_set_qos_limit --rw_mbytes_per_sec 10 Null_1 00:13:18.476 21:35:38 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:18.476 21:35:38 -- common/autotest_common.sh@10 -- # set +x 00:13:18.476 21:35:38 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:18.476 21:35:38 -- bdev/blockdev.sh@431 -- # run_test bdev_qos_bw run_qos_test 10 BANDWIDTH Null_1 00:13:18.476 21:35:38 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:13:18.476 21:35:38 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:18.476 21:35:38 -- common/autotest_common.sh@10 -- # set +x 00:13:18.476 ************************************ 00:13:18.476 START TEST bdev_qos_bw 00:13:18.476 ************************************ 00:13:18.476 21:35:38 -- common/autotest_common.sh@1114 -- # run_qos_test 10 BANDWIDTH Null_1 00:13:18.476 21:35:38 -- bdev/blockdev.sh@387 -- # local qos_limit=10 00:13:18.476 21:35:38 -- bdev/blockdev.sh@388 -- # local qos_result=0 00:13:18.476 21:35:38 -- bdev/blockdev.sh@390 -- # get_io_result BANDWIDTH Null_1 00:13:18.476 21:35:38 -- bdev/blockdev.sh@373 -- # local limit_type=BANDWIDTH 00:13:18.476 21:35:38 -- bdev/blockdev.sh@374 -- # local qos_dev=Null_1 00:13:18.476 21:35:38 -- bdev/blockdev.sh@375 -- # local iostat_result 00:13:18.476 21:35:38 -- bdev/blockdev.sh@376 -- # /home/vagrant/spdk_repo/spdk/scripts/iostat.py -d -i 1 -t 5 00:13:18.476 21:35:38 -- bdev/blockdev.sh@376 -- # tail -1 00:13:18.476 21:35:38 -- bdev/blockdev.sh@376 -- # grep Null_1 00:13:23.744 21:35:43 -- bdev/blockdev.sh@376 -- # iostat_result='Null_1 2561.88 10247.52 0.00 0.00 10528.00 0.00 0.00 ' 00:13:23.744 21:35:43 -- bdev/blockdev.sh@377 -- # '[' BANDWIDTH = IOPS ']' 00:13:23.744 21:35:43 -- bdev/blockdev.sh@379 -- # '[' BANDWIDTH = BANDWIDTH ']' 00:13:23.744 21:35:43 -- bdev/blockdev.sh@380 -- # awk '{print $6}' 00:13:23.744 21:35:43 -- bdev/blockdev.sh@380 -- # iostat_result=10528.00 00:13:23.744 21:35:43 -- bdev/blockdev.sh@383 -- # echo 10528 00:13:23.744 21:35:43 -- bdev/blockdev.sh@390 -- # qos_result=10528 00:13:23.744 21:35:43 -- bdev/blockdev.sh@391 -- # '[' BANDWIDTH = BANDWIDTH ']' 00:13:23.744 21:35:43 -- bdev/blockdev.sh@392 -- # qos_limit=10240 00:13:23.744 21:35:43 -- bdev/blockdev.sh@394 -- # lower_limit=9216 00:13:23.744 21:35:43 -- bdev/blockdev.sh@395 -- # upper_limit=11264 00:13:23.744 21:35:43 -- bdev/blockdev.sh@398 -- # '[' 10528 -lt 9216 ']' 00:13:23.744 21:35:43 -- bdev/blockdev.sh@398 -- # '[' 
10528 -gt 11264 ']' 00:13:23.744 00:13:23.744 real 0m5.272s 00:13:23.744 user 0m0.128s 00:13:23.744 sys 0m0.037s 00:13:23.744 21:35:43 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:13:23.744 21:35:43 -- common/autotest_common.sh@10 -- # set +x 00:13:23.744 ************************************ 00:13:23.744 END TEST bdev_qos_bw 00:13:23.744 ************************************ 00:13:23.744 21:35:43 -- bdev/blockdev.sh@434 -- # rpc_cmd bdev_set_qos_limit --r_mbytes_per_sec 2 Malloc_0 00:13:23.744 21:35:43 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:23.744 21:35:43 -- common/autotest_common.sh@10 -- # set +x 00:13:23.744 21:35:44 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:23.744 21:35:44 -- bdev/blockdev.sh@435 -- # run_test bdev_qos_ro_bw run_qos_test 2 BANDWIDTH Malloc_0 00:13:23.744 21:35:44 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:13:23.744 21:35:44 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:23.744 21:35:44 -- common/autotest_common.sh@10 -- # set +x 00:13:23.744 ************************************ 00:13:23.744 START TEST bdev_qos_ro_bw 00:13:23.744 ************************************ 00:13:23.744 21:35:44 -- common/autotest_common.sh@1114 -- # run_qos_test 2 BANDWIDTH Malloc_0 00:13:23.744 21:35:44 -- bdev/blockdev.sh@387 -- # local qos_limit=2 00:13:23.744 21:35:44 -- bdev/blockdev.sh@388 -- # local qos_result=0 00:13:23.744 21:35:44 -- bdev/blockdev.sh@390 -- # get_io_result BANDWIDTH Malloc_0 00:13:23.744 21:35:44 -- bdev/blockdev.sh@373 -- # local limit_type=BANDWIDTH 00:13:23.744 21:35:44 -- bdev/blockdev.sh@374 -- # local qos_dev=Malloc_0 00:13:23.744 21:35:44 -- bdev/blockdev.sh@375 -- # local iostat_result 00:13:23.744 21:35:44 -- bdev/blockdev.sh@376 -- # /home/vagrant/spdk_repo/spdk/scripts/iostat.py -d -i 1 -t 5 00:13:23.744 21:35:44 -- bdev/blockdev.sh@376 -- # grep Malloc_0 00:13:23.744 21:35:44 -- bdev/blockdev.sh@376 -- # tail -1 00:13:29.014 21:35:49 -- bdev/blockdev.sh@376 -- # iostat_result='Malloc_0 512.09 2048.36 0.00 0.00 2064.00 0.00 0.00 ' 00:13:29.014 21:35:49 -- bdev/blockdev.sh@377 -- # '[' BANDWIDTH = IOPS ']' 00:13:29.014 21:35:49 -- bdev/blockdev.sh@379 -- # '[' BANDWIDTH = BANDWIDTH ']' 00:13:29.014 21:35:49 -- bdev/blockdev.sh@380 -- # awk '{print $6}' 00:13:29.014 21:35:49 -- bdev/blockdev.sh@380 -- # iostat_result=2064.00 00:13:29.014 21:35:49 -- bdev/blockdev.sh@383 -- # echo 2064 00:13:29.014 21:35:49 -- bdev/blockdev.sh@390 -- # qos_result=2064 00:13:29.014 21:35:49 -- bdev/blockdev.sh@391 -- # '[' BANDWIDTH = BANDWIDTH ']' 00:13:29.014 21:35:49 -- bdev/blockdev.sh@392 -- # qos_limit=2048 00:13:29.014 21:35:49 -- bdev/blockdev.sh@394 -- # lower_limit=1843 00:13:29.014 21:35:49 -- bdev/blockdev.sh@395 -- # upper_limit=2252 00:13:29.014 21:35:49 -- bdev/blockdev.sh@398 -- # '[' 2064 -lt 1843 ']' 00:13:29.014 21:35:49 -- bdev/blockdev.sh@398 -- # '[' 2064 -gt 2252 ']' 00:13:29.014 00:13:29.014 real 0m5.188s 00:13:29.014 user 0m0.123s 00:13:29.014 sys 0m0.037s 00:13:29.014 21:35:49 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:13:29.015 21:35:49 -- common/autotest_common.sh@10 -- # set +x 00:13:29.015 ************************************ 00:13:29.015 END TEST bdev_qos_ro_bw 00:13:29.015 ************************************ 00:13:29.015 21:35:49 -- bdev/blockdev.sh@457 -- # rpc_cmd bdev_malloc_delete Malloc_0 00:13:29.015 21:35:49 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:29.015 21:35:49 -- common/autotest_common.sh@10 -- # set +x 00:13:29.582 21:35:49 -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:29.582 21:35:49 -- bdev/blockdev.sh@458 -- # rpc_cmd bdev_null_delete Null_1 00:13:29.582 21:35:49 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:29.582 21:35:49 -- common/autotest_common.sh@10 -- # set +x 00:13:29.582 00:13:29.582 Latency(us) 00:13:29.582 [2024-12-06T21:35:50.079Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:29.582 [2024-12-06T21:35:50.079Z] Job: Malloc_0 (Core Mask 0x2, workload: randread, depth: 256, IO size: 4096) 00:13:29.582 Malloc_0 : 26.72 22407.98 87.53 0.00 0.00 11318.70 2115.03 503316.48 00:13:29.582 [2024-12-06T21:35:50.079Z] Job: Null_1 (Core Mask 0x2, workload: randread, depth: 256, IO size: 4096) 00:13:29.582 Null_1 : 26.92 23713.39 92.63 0.00 0.00 10772.29 636.74 195416.44 00:13:29.582 [2024-12-06T21:35:50.079Z] =================================================================================================================== 00:13:29.582 [2024-12-06T21:35:50.079Z] Total : 46121.37 180.16 0.00 0.00 11036.76 636.74 503316.48 00:13:29.582 0 00:13:29.582 21:35:49 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:29.582 21:35:49 -- bdev/blockdev.sh@459 -- # killprocess 67507 00:13:29.582 21:35:49 -- common/autotest_common.sh@936 -- # '[' -z 67507 ']' 00:13:29.582 21:35:49 -- common/autotest_common.sh@940 -- # kill -0 67507 00:13:29.582 21:35:49 -- common/autotest_common.sh@941 -- # uname 00:13:29.582 21:35:49 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:13:29.582 21:35:49 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 67507 00:13:29.582 21:35:50 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:13:29.582 killing process with pid 67507 00:13:29.582 21:35:50 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:13:29.582 21:35:50 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 67507' 00:13:29.582 Received shutdown signal, test time was about 26.960372 seconds 00:13:29.582 00:13:29.582 Latency(us) 00:13:29.582 [2024-12-06T21:35:50.079Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:29.582 [2024-12-06T21:35:50.079Z] =================================================================================================================== 00:13:29.582 [2024-12-06T21:35:50.079Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:13:29.582 21:35:50 -- common/autotest_common.sh@955 -- # kill 67507 00:13:29.582 21:35:50 -- common/autotest_common.sh@960 -- # wait 67507 00:13:30.960 21:35:51 -- bdev/blockdev.sh@460 -- # trap - SIGINT SIGTERM EXIT 00:13:30.960 00:13:30.960 real 0m29.503s 00:13:30.960 user 0m30.311s 00:13:30.960 sys 0m0.642s 00:13:30.960 21:35:51 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:13:30.960 21:35:51 -- common/autotest_common.sh@10 -- # set +x 00:13:30.960 ************************************ 00:13:30.960 END TEST bdev_qos 00:13:30.960 ************************************ 00:13:30.960 21:35:51 -- bdev/blockdev.sh@787 -- # run_test bdev_qd_sampling qd_sampling_test_suite '' 00:13:30.960 21:35:51 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:13:30.960 21:35:51 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:30.960 21:35:51 -- common/autotest_common.sh@10 -- # set +x 00:13:30.960 ************************************ 00:13:30.960 START TEST bdev_qd_sampling 00:13:30.960 ************************************ 00:13:30.960 21:35:51 -- common/autotest_common.sh@1114 -- # qd_sampling_test_suite '' 00:13:30.960 21:35:51 -- 
bdev/blockdev.sh@536 -- # QD_DEV=Malloc_QD 00:13:30.960 21:35:51 -- bdev/blockdev.sh@539 -- # QD_PID=67926 00:13:30.960 21:35:51 -- bdev/blockdev.sh@538 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -m 0x3 -q 256 -o 4096 -w randread -t 5 -C '' 00:13:30.960 Process bdev QD sampling period testing pid: 67926 00:13:30.960 21:35:51 -- bdev/blockdev.sh@540 -- # echo 'Process bdev QD sampling period testing pid: 67926' 00:13:30.960 21:35:51 -- bdev/blockdev.sh@541 -- # trap 'cleanup; killprocess $QD_PID; exit 1' SIGINT SIGTERM EXIT 00:13:30.960 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:30.960 21:35:51 -- bdev/blockdev.sh@542 -- # waitforlisten 67926 00:13:30.960 21:35:51 -- common/autotest_common.sh@829 -- # '[' -z 67926 ']' 00:13:30.960 21:35:51 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:30.960 21:35:51 -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:30.960 21:35:51 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:30.960 21:35:51 -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:30.960 21:35:51 -- common/autotest_common.sh@10 -- # set +x 00:13:30.960 [2024-12-06 21:35:51.430224] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:13:30.960 [2024-12-06 21:35:51.430395] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67926 ] 00:13:31.220 [2024-12-06 21:35:51.598669] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:13:31.479 [2024-12-06 21:35:51.825538] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:31.479 [2024-12-06 21:35:51.825544] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:32.046 21:35:52 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:32.046 21:35:52 -- common/autotest_common.sh@862 -- # return 0 00:13:32.046 21:35:52 -- bdev/blockdev.sh@544 -- # rpc_cmd bdev_malloc_create -b Malloc_QD 128 512 00:13:32.046 21:35:52 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:32.046 21:35:52 -- common/autotest_common.sh@10 -- # set +x 00:13:32.046 Malloc_QD 00:13:32.046 21:35:52 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:32.046 21:35:52 -- bdev/blockdev.sh@545 -- # waitforbdev Malloc_QD 00:13:32.046 21:35:52 -- common/autotest_common.sh@897 -- # local bdev_name=Malloc_QD 00:13:32.046 21:35:52 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:13:32.046 21:35:52 -- common/autotest_common.sh@899 -- # local i 00:13:32.046 21:35:52 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:13:32.046 21:35:52 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:13:32.046 21:35:52 -- common/autotest_common.sh@902 -- # rpc_cmd bdev_wait_for_examine 00:13:32.046 21:35:52 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:32.046 21:35:52 -- common/autotest_common.sh@10 -- # set +x 00:13:32.046 21:35:52 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:32.046 21:35:52 -- common/autotest_common.sh@904 -- # rpc_cmd bdev_get_bdevs -b Malloc_QD -t 2000 00:13:32.046 21:35:52 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:32.046 21:35:52 -- common/autotest_common.sh@10 -- # set +x 00:13:32.046 [ 00:13:32.046 { 00:13:32.046 "name": "Malloc_QD", 00:13:32.046 
"aliases": [ 00:13:32.046 "1219ef78-909c-44e6-8867-90adb899fe00" 00:13:32.046 ], 00:13:32.046 "product_name": "Malloc disk", 00:13:32.046 "block_size": 512, 00:13:32.046 "num_blocks": 262144, 00:13:32.046 "uuid": "1219ef78-909c-44e6-8867-90adb899fe00", 00:13:32.046 "assigned_rate_limits": { 00:13:32.046 "rw_ios_per_sec": 0, 00:13:32.046 "rw_mbytes_per_sec": 0, 00:13:32.046 "r_mbytes_per_sec": 0, 00:13:32.046 "w_mbytes_per_sec": 0 00:13:32.046 }, 00:13:32.046 "claimed": false, 00:13:32.046 "zoned": false, 00:13:32.046 "supported_io_types": { 00:13:32.046 "read": true, 00:13:32.046 "write": true, 00:13:32.046 "unmap": true, 00:13:32.046 "write_zeroes": true, 00:13:32.046 "flush": true, 00:13:32.046 "reset": true, 00:13:32.046 "compare": false, 00:13:32.046 "compare_and_write": false, 00:13:32.046 "abort": true, 00:13:32.046 "nvme_admin": false, 00:13:32.046 "nvme_io": false 00:13:32.046 }, 00:13:32.046 "memory_domains": [ 00:13:32.046 { 00:13:32.046 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:32.046 "dma_device_type": 2 00:13:32.046 } 00:13:32.046 ], 00:13:32.046 "driver_specific": {} 00:13:32.046 } 00:13:32.046 ] 00:13:32.046 21:35:52 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:32.046 21:35:52 -- common/autotest_common.sh@905 -- # return 0 00:13:32.046 21:35:52 -- bdev/blockdev.sh@548 -- # sleep 2 00:13:32.046 21:35:52 -- bdev/blockdev.sh@547 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:13:32.305 Running I/O for 5 seconds... 00:13:34.203 21:35:54 -- bdev/blockdev.sh@549 -- # qd_sampling_function_test Malloc_QD 00:13:34.203 21:35:54 -- bdev/blockdev.sh@517 -- # local bdev_name=Malloc_QD 00:13:34.203 21:35:54 -- bdev/blockdev.sh@518 -- # local sampling_period=10 00:13:34.203 21:35:54 -- bdev/blockdev.sh@519 -- # local iostats 00:13:34.203 21:35:54 -- bdev/blockdev.sh@521 -- # rpc_cmd bdev_set_qd_sampling_period Malloc_QD 10 00:13:34.203 21:35:54 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:34.203 21:35:54 -- common/autotest_common.sh@10 -- # set +x 00:13:34.203 21:35:54 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:34.203 21:35:54 -- bdev/blockdev.sh@523 -- # rpc_cmd bdev_get_iostat -b Malloc_QD 00:13:34.203 21:35:54 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:34.203 21:35:54 -- common/autotest_common.sh@10 -- # set +x 00:13:34.203 21:35:54 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:34.203 21:35:54 -- bdev/blockdev.sh@523 -- # iostats='{ 00:13:34.203 "tick_rate": 2200000000, 00:13:34.203 "ticks": 1725759432647, 00:13:34.203 "bdevs": [ 00:13:34.203 { 00:13:34.203 "name": "Malloc_QD", 00:13:34.203 "bytes_read": 816878080, 00:13:34.203 "num_read_ops": 199427, 00:13:34.203 "bytes_written": 0, 00:13:34.203 "num_write_ops": 0, 00:13:34.203 "bytes_unmapped": 0, 00:13:34.203 "num_unmap_ops": 0, 00:13:34.203 "bytes_copied": 0, 00:13:34.203 "num_copy_ops": 0, 00:13:34.204 "read_latency_ticks": 2133299923635, 00:13:34.204 "max_read_latency_ticks": 12029499, 00:13:34.204 "min_read_latency_ticks": 334843, 00:13:34.204 "write_latency_ticks": 0, 00:13:34.204 "max_write_latency_ticks": 0, 00:13:34.204 "min_write_latency_ticks": 0, 00:13:34.204 "unmap_latency_ticks": 0, 00:13:34.204 "max_unmap_latency_ticks": 0, 00:13:34.204 "min_unmap_latency_ticks": 0, 00:13:34.204 "copy_latency_ticks": 0, 00:13:34.204 "max_copy_latency_ticks": 0, 00:13:34.204 "min_copy_latency_ticks": 0, 00:13:34.204 "io_error": {}, 00:13:34.204 "queue_depth_polling_period": 10, 00:13:34.204 "queue_depth": 512, 00:13:34.204 
"io_time": 20, 00:13:34.204 "weighted_io_time": 10240 00:13:34.204 } 00:13:34.204 ] 00:13:34.204 }' 00:13:34.204 21:35:54 -- bdev/blockdev.sh@525 -- # jq -r '.bdevs[0].queue_depth_polling_period' 00:13:34.204 21:35:54 -- bdev/blockdev.sh@525 -- # qd_sampling_period=10 00:13:34.204 21:35:54 -- bdev/blockdev.sh@527 -- # '[' 10 == null ']' 00:13:34.204 21:35:54 -- bdev/blockdev.sh@527 -- # '[' 10 -ne 10 ']' 00:13:34.204 21:35:54 -- bdev/blockdev.sh@551 -- # rpc_cmd bdev_malloc_delete Malloc_QD 00:13:34.204 21:35:54 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:34.204 21:35:54 -- common/autotest_common.sh@10 -- # set +x 00:13:34.204 00:13:34.204 Latency(us) 00:13:34.204 [2024-12-06T21:35:54.701Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:34.204 [2024-12-06T21:35:54.701Z] Job: Malloc_QD (Core Mask 0x1, workload: randread, depth: 256, IO size: 4096) 00:13:34.204 Malloc_QD : 1.94 52453.09 204.89 0.00 0.00 4868.00 1519.24 5510.98 00:13:34.204 [2024-12-06T21:35:54.701Z] Job: Malloc_QD (Core Mask 0x2, workload: randread, depth: 256, IO size: 4096) 00:13:34.204 Malloc_QD : 1.94 52573.65 205.37 0.00 0.00 4857.48 1236.25 5481.19 00:13:34.204 [2024-12-06T21:35:54.701Z] =================================================================================================================== 00:13:34.204 [2024-12-06T21:35:54.701Z] Total : 105026.74 410.26 0.00 0.00 4862.73 1236.25 5510.98 00:13:34.204 0 00:13:34.204 21:35:54 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:34.204 21:35:54 -- bdev/blockdev.sh@552 -- # killprocess 67926 00:13:34.204 21:35:54 -- common/autotest_common.sh@936 -- # '[' -z 67926 ']' 00:13:34.204 21:35:54 -- common/autotest_common.sh@940 -- # kill -0 67926 00:13:34.204 21:35:54 -- common/autotest_common.sh@941 -- # uname 00:13:34.204 21:35:54 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:13:34.204 21:35:54 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 67926 00:13:34.204 killing process with pid 67926 00:13:34.204 Received shutdown signal, test time was about 2.076237 seconds 00:13:34.204 00:13:34.204 Latency(us) 00:13:34.204 [2024-12-06T21:35:54.701Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:34.204 [2024-12-06T21:35:54.701Z] =================================================================================================================== 00:13:34.204 [2024-12-06T21:35:54.701Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:13:34.204 21:35:54 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:13:34.204 21:35:54 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:13:34.204 21:35:54 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 67926' 00:13:34.204 21:35:54 -- common/autotest_common.sh@955 -- # kill 67926 00:13:34.204 21:35:54 -- common/autotest_common.sh@960 -- # wait 67926 00:13:35.579 21:35:55 -- bdev/blockdev.sh@553 -- # trap - SIGINT SIGTERM EXIT 00:13:35.580 00:13:35.580 real 0m4.629s 00:13:35.580 user 0m8.547s 00:13:35.580 sys 0m0.386s 00:13:35.580 21:35:55 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:13:35.580 ************************************ 00:13:35.580 END TEST bdev_qd_sampling 00:13:35.580 ************************************ 00:13:35.580 21:35:55 -- common/autotest_common.sh@10 -- # set +x 00:13:35.580 21:35:56 -- bdev/blockdev.sh@788 -- # run_test bdev_error error_test_suite '' 00:13:35.580 21:35:56 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:13:35.580 21:35:56 -- 
common/autotest_common.sh@1093 -- # xtrace_disable 00:13:35.580 21:35:56 -- common/autotest_common.sh@10 -- # set +x 00:13:35.580 ************************************ 00:13:35.580 START TEST bdev_error 00:13:35.580 ************************************ 00:13:35.580 21:35:56 -- common/autotest_common.sh@1114 -- # error_test_suite '' 00:13:35.580 21:35:56 -- bdev/blockdev.sh@464 -- # DEV_1=Dev_1 00:13:35.580 21:35:56 -- bdev/blockdev.sh@465 -- # DEV_2=Dev_2 00:13:35.580 21:35:56 -- bdev/blockdev.sh@466 -- # ERR_DEV=EE_Dev_1 00:13:35.580 Process error testing pid: 68008 00:13:35.580 21:35:56 -- bdev/blockdev.sh@470 -- # ERR_PID=68008 00:13:35.580 21:35:56 -- bdev/blockdev.sh@471 -- # echo 'Process error testing pid: 68008' 00:13:35.580 21:35:56 -- bdev/blockdev.sh@469 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -m 0x2 -q 16 -o 4096 -w randread -t 5 -f '' 00:13:35.580 21:35:56 -- bdev/blockdev.sh@472 -- # waitforlisten 68008 00:13:35.580 21:35:56 -- common/autotest_common.sh@829 -- # '[' -z 68008 ']' 00:13:35.580 21:35:56 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:35.580 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:35.580 21:35:56 -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:35.580 21:35:56 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:35.580 21:35:56 -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:35.580 21:35:56 -- common/autotest_common.sh@10 -- # set +x 00:13:35.837 [2024-12-06 21:35:56.107485] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:13:35.837 [2024-12-06 21:35:56.107622] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68008 ] 00:13:35.837 [2024-12-06 21:35:56.270226] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:36.094 [2024-12-06 21:35:56.444136] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:36.656 21:35:57 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:36.656 21:35:57 -- common/autotest_common.sh@862 -- # return 0 00:13:36.656 21:35:57 -- bdev/blockdev.sh@474 -- # rpc_cmd bdev_malloc_create -b Dev_1 128 512 00:13:36.656 21:35:57 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:36.656 21:35:57 -- common/autotest_common.sh@10 -- # set +x 00:13:36.656 Dev_1 00:13:36.656 21:35:57 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:36.656 21:35:57 -- bdev/blockdev.sh@475 -- # waitforbdev Dev_1 00:13:36.656 21:35:57 -- common/autotest_common.sh@897 -- # local bdev_name=Dev_1 00:13:36.656 21:35:57 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:13:36.656 21:35:57 -- common/autotest_common.sh@899 -- # local i 00:13:36.656 21:35:57 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:13:36.656 21:35:57 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:13:36.656 21:35:57 -- common/autotest_common.sh@902 -- # rpc_cmd bdev_wait_for_examine 00:13:36.656 21:35:57 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:36.656 21:35:57 -- common/autotest_common.sh@10 -- # set +x 00:13:36.914 21:35:57 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:36.914 21:35:57 -- common/autotest_common.sh@904 -- # rpc_cmd bdev_get_bdevs -b Dev_1 -t 2000 00:13:36.914 
21:35:57 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:36.914 21:35:57 -- common/autotest_common.sh@10 -- # set +x 00:13:36.914 [ 00:13:36.914 { 00:13:36.914 "name": "Dev_1", 00:13:36.914 "aliases": [ 00:13:36.914 "6d0f697c-deac-4822-8c0b-c08c4b3287e3" 00:13:36.914 ], 00:13:36.914 "product_name": "Malloc disk", 00:13:36.914 "block_size": 512, 00:13:36.914 "num_blocks": 262144, 00:13:36.914 "uuid": "6d0f697c-deac-4822-8c0b-c08c4b3287e3", 00:13:36.914 "assigned_rate_limits": { 00:13:36.914 "rw_ios_per_sec": 0, 00:13:36.914 "rw_mbytes_per_sec": 0, 00:13:36.914 "r_mbytes_per_sec": 0, 00:13:36.914 "w_mbytes_per_sec": 0 00:13:36.914 }, 00:13:36.914 "claimed": false, 00:13:36.914 "zoned": false, 00:13:36.914 "supported_io_types": { 00:13:36.914 "read": true, 00:13:36.914 "write": true, 00:13:36.914 "unmap": true, 00:13:36.914 "write_zeroes": true, 00:13:36.914 "flush": true, 00:13:36.914 "reset": true, 00:13:36.914 "compare": false, 00:13:36.914 "compare_and_write": false, 00:13:36.914 "abort": true, 00:13:36.914 "nvme_admin": false, 00:13:36.914 "nvme_io": false 00:13:36.914 }, 00:13:36.914 "memory_domains": [ 00:13:36.914 { 00:13:36.914 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:36.914 "dma_device_type": 2 00:13:36.914 } 00:13:36.914 ], 00:13:36.914 "driver_specific": {} 00:13:36.914 } 00:13:36.914 ] 00:13:36.914 21:35:57 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:36.914 21:35:57 -- common/autotest_common.sh@905 -- # return 0 00:13:36.914 21:35:57 -- bdev/blockdev.sh@476 -- # rpc_cmd bdev_error_create Dev_1 00:13:36.914 21:35:57 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:36.914 21:35:57 -- common/autotest_common.sh@10 -- # set +x 00:13:36.914 true 00:13:36.914 21:35:57 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:36.914 21:35:57 -- bdev/blockdev.sh@477 -- # rpc_cmd bdev_malloc_create -b Dev_2 128 512 00:13:36.914 21:35:57 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:36.914 21:35:57 -- common/autotest_common.sh@10 -- # set +x 00:13:36.914 Dev_2 00:13:36.914 21:35:57 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:36.914 21:35:57 -- bdev/blockdev.sh@478 -- # waitforbdev Dev_2 00:13:36.914 21:35:57 -- common/autotest_common.sh@897 -- # local bdev_name=Dev_2 00:13:36.914 21:35:57 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:13:36.914 21:35:57 -- common/autotest_common.sh@899 -- # local i 00:13:36.914 21:35:57 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:13:36.914 21:35:57 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:13:36.914 21:35:57 -- common/autotest_common.sh@902 -- # rpc_cmd bdev_wait_for_examine 00:13:36.914 21:35:57 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:36.914 21:35:57 -- common/autotest_common.sh@10 -- # set +x 00:13:36.914 21:35:57 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:36.914 21:35:57 -- common/autotest_common.sh@904 -- # rpc_cmd bdev_get_bdevs -b Dev_2 -t 2000 00:13:36.914 21:35:57 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:36.914 21:35:57 -- common/autotest_common.sh@10 -- # set +x 00:13:36.914 [ 00:13:36.914 { 00:13:36.914 "name": "Dev_2", 00:13:36.914 "aliases": [ 00:13:36.914 "b1c140c8-753b-435b-8d7a-50047ff1728b" 00:13:36.914 ], 00:13:36.914 "product_name": "Malloc disk", 00:13:36.914 "block_size": 512, 00:13:36.914 "num_blocks": 262144, 00:13:36.914 "uuid": "b1c140c8-753b-435b-8d7a-50047ff1728b", 00:13:36.914 "assigned_rate_limits": { 00:13:36.914 "rw_ios_per_sec": 0, 00:13:36.914 "rw_mbytes_per_sec": 0, 
00:13:36.914 "r_mbytes_per_sec": 0, 00:13:36.914 "w_mbytes_per_sec": 0 00:13:36.914 }, 00:13:36.914 "claimed": false, 00:13:36.914 "zoned": false, 00:13:36.914 "supported_io_types": { 00:13:36.914 "read": true, 00:13:36.914 "write": true, 00:13:36.914 "unmap": true, 00:13:36.914 "write_zeroes": true, 00:13:36.914 "flush": true, 00:13:36.914 "reset": true, 00:13:36.914 "compare": false, 00:13:36.914 "compare_and_write": false, 00:13:36.914 "abort": true, 00:13:36.914 "nvme_admin": false, 00:13:36.914 "nvme_io": false 00:13:36.914 }, 00:13:36.914 "memory_domains": [ 00:13:36.914 { 00:13:36.914 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:36.914 "dma_device_type": 2 00:13:36.914 } 00:13:36.914 ], 00:13:36.914 "driver_specific": {} 00:13:36.914 } 00:13:36.914 ] 00:13:36.914 21:35:57 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:36.914 21:35:57 -- common/autotest_common.sh@905 -- # return 0 00:13:36.914 21:35:57 -- bdev/blockdev.sh@479 -- # rpc_cmd bdev_error_inject_error EE_Dev_1 all failure -n 5 00:13:36.914 21:35:57 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:36.914 21:35:57 -- common/autotest_common.sh@10 -- # set +x 00:13:36.914 21:35:57 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:36.914 21:35:57 -- bdev/blockdev.sh@482 -- # sleep 1 00:13:36.914 21:35:57 -- bdev/blockdev.sh@481 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 1 perform_tests 00:13:37.172 Running I/O for 5 seconds... 00:13:38.106 Process is existed as continue on error is set. Pid: 68008 00:13:38.106 21:35:58 -- bdev/blockdev.sh@485 -- # kill -0 68008 00:13:38.106 21:35:58 -- bdev/blockdev.sh@486 -- # echo 'Process is existed as continue on error is set. Pid: 68008' 00:13:38.106 21:35:58 -- bdev/blockdev.sh@493 -- # rpc_cmd bdev_error_delete EE_Dev_1 00:13:38.106 21:35:58 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:38.106 21:35:58 -- common/autotest_common.sh@10 -- # set +x 00:13:38.106 21:35:58 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:38.106 21:35:58 -- bdev/blockdev.sh@494 -- # rpc_cmd bdev_malloc_delete Dev_1 00:13:38.106 21:35:58 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:38.106 21:35:58 -- common/autotest_common.sh@10 -- # set +x 00:13:38.106 Timeout while waiting for response: 00:13:38.106 00:13:38.106 00:13:38.366 21:35:58 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:38.366 21:35:58 -- bdev/blockdev.sh@495 -- # sleep 5 00:13:42.553 00:13:42.553 Latency(us) 00:13:42.553 [2024-12-06T21:36:03.050Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:42.553 [2024-12-06T21:36:03.050Z] Job: EE_Dev_1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 4096) 00:13:42.553 EE_Dev_1 : 0.89 38411.90 150.05 5.64 0.00 413.41 126.60 796.86 00:13:42.553 [2024-12-06T21:36:03.050Z] Job: Dev_2 (Core Mask 0x2, workload: randread, depth: 16, IO size: 4096) 00:13:42.553 Dev_2 : 5.00 76258.14 297.88 0.00 0.00 206.72 57.48 295507.78 00:13:42.553 [2024-12-06T21:36:03.050Z] =================================================================================================================== 00:13:42.553 [2024-12-06T21:36:03.050Z] Total : 114670.03 447.93 5.64 0.00 223.68 57.48 295507.78 00:13:43.489 21:36:03 -- bdev/blockdev.sh@497 -- # killprocess 68008 00:13:43.489 21:36:03 -- common/autotest_common.sh@936 -- # '[' -z 68008 ']' 00:13:43.489 21:36:03 -- common/autotest_common.sh@940 -- # kill -0 68008 00:13:43.489 21:36:03 -- common/autotest_common.sh@941 -- # uname 00:13:43.489 21:36:03 -- 
common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:13:43.489 21:36:03 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 68008 00:13:43.489 killing process with pid 68008 00:13:43.489 Received shutdown signal, test time was about 5.000000 seconds 00:13:43.489 00:13:43.489 Latency(us) 00:13:43.489 [2024-12-06T21:36:03.986Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:43.489 [2024-12-06T21:36:03.986Z] =================================================================================================================== 00:13:43.489 [2024-12-06T21:36:03.986Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:13:43.489 21:36:03 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:13:43.489 21:36:03 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:13:43.489 21:36:03 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 68008' 00:13:43.489 21:36:03 -- common/autotest_common.sh@955 -- # kill 68008 00:13:43.489 21:36:03 -- common/autotest_common.sh@960 -- # wait 68008 00:13:44.865 Process error testing pid: 68115 00:13:44.865 21:36:05 -- bdev/blockdev.sh@501 -- # ERR_PID=68115 00:13:44.865 21:36:05 -- bdev/blockdev.sh@500 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -m 0x2 -q 16 -o 4096 -w randread -t 5 '' 00:13:44.865 21:36:05 -- bdev/blockdev.sh@502 -- # echo 'Process error testing pid: 68115' 00:13:44.865 21:36:05 -- bdev/blockdev.sh@503 -- # waitforlisten 68115 00:13:44.865 21:36:05 -- common/autotest_common.sh@829 -- # '[' -z 68115 ']' 00:13:44.865 21:36:05 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:44.865 21:36:05 -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:44.865 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:44.865 21:36:05 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:44.865 21:36:05 -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:44.865 21:36:05 -- common/autotest_common.sh@10 -- # set +x 00:13:44.865 [2024-12-06 21:36:05.154661] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:13:44.865 [2024-12-06 21:36:05.155052] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68115 ] 00:13:44.865 [2024-12-06 21:36:05.324564] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:45.123 [2024-12-06 21:36:05.497391] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:45.687 21:36:06 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:45.687 21:36:06 -- common/autotest_common.sh@862 -- # return 0 00:13:45.687 21:36:06 -- bdev/blockdev.sh@505 -- # rpc_cmd bdev_malloc_create -b Dev_1 128 512 00:13:45.687 21:36:06 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:45.687 21:36:06 -- common/autotest_common.sh@10 -- # set +x 00:13:45.687 Dev_1 00:13:45.687 21:36:06 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:45.687 21:36:06 -- bdev/blockdev.sh@506 -- # waitforbdev Dev_1 00:13:45.687 21:36:06 -- common/autotest_common.sh@897 -- # local bdev_name=Dev_1 00:13:45.687 21:36:06 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:13:45.687 21:36:06 -- common/autotest_common.sh@899 -- # local i 00:13:45.687 21:36:06 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:13:45.687 21:36:06 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:13:45.687 21:36:06 -- common/autotest_common.sh@902 -- # rpc_cmd bdev_wait_for_examine 00:13:45.687 21:36:06 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:45.687 21:36:06 -- common/autotest_common.sh@10 -- # set +x 00:13:45.945 21:36:06 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:45.945 21:36:06 -- common/autotest_common.sh@904 -- # rpc_cmd bdev_get_bdevs -b Dev_1 -t 2000 00:13:45.945 21:36:06 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:45.945 21:36:06 -- common/autotest_common.sh@10 -- # set +x 00:13:45.945 [ 00:13:45.945 { 00:13:45.945 "name": "Dev_1", 00:13:45.945 "aliases": [ 00:13:45.945 "f44a7e8a-e7b9-4383-9e13-459f9ad901c9" 00:13:45.945 ], 00:13:45.945 "product_name": "Malloc disk", 00:13:45.945 "block_size": 512, 00:13:45.945 "num_blocks": 262144, 00:13:45.945 "uuid": "f44a7e8a-e7b9-4383-9e13-459f9ad901c9", 00:13:45.945 "assigned_rate_limits": { 00:13:45.945 "rw_ios_per_sec": 0, 00:13:45.945 "rw_mbytes_per_sec": 0, 00:13:45.945 "r_mbytes_per_sec": 0, 00:13:45.945 "w_mbytes_per_sec": 0 00:13:45.945 }, 00:13:45.945 "claimed": false, 00:13:45.945 "zoned": false, 00:13:45.945 "supported_io_types": { 00:13:45.945 "read": true, 00:13:45.945 "write": true, 00:13:45.945 "unmap": true, 00:13:45.945 "write_zeroes": true, 00:13:45.945 "flush": true, 00:13:45.945 "reset": true, 00:13:45.945 "compare": false, 00:13:45.945 "compare_and_write": false, 00:13:45.945 "abort": true, 00:13:45.945 "nvme_admin": false, 00:13:45.945 "nvme_io": false 00:13:45.945 }, 00:13:45.945 "memory_domains": [ 00:13:45.945 { 00:13:45.945 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:45.945 "dma_device_type": 2 00:13:45.945 } 00:13:45.945 ], 00:13:45.945 "driver_specific": {} 00:13:45.945 } 00:13:45.945 ] 00:13:45.945 21:36:06 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:45.945 21:36:06 -- common/autotest_common.sh@905 -- # return 0 00:13:45.945 21:36:06 -- bdev/blockdev.sh@507 -- # rpc_cmd bdev_error_create Dev_1 00:13:45.945 21:36:06 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:45.945 21:36:06 -- common/autotest_common.sh@10 -- # set +x 00:13:45.945 true 
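[editor's note] The error_test_suite traced above (pid 68008) and repeated below for the negative-path run (pid 68115) follows a fixed RPC sequence. A minimal sketch of that flow, assuming only a running SPDK target on /var/tmp/spdk.sock and the stock scripts/rpc.py client; every RPC name and argument here appears verbatim in the trace:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc bdev_malloc_create -b Dev_1 128 512                # 128 MiB backing bdev, 512 B blocks
  $rpc bdev_error_create Dev_1                            # stacks error bdev EE_Dev_1 on Dev_1
  $rpc bdev_error_inject_error EE_Dev_1 all failure -n 5  # fail the next 5 I/Os of any type
  # ... drive I/O via bdevperf.py perform_tests, then tear down:
  $rpc bdev_error_delete EE_Dev_1
  $rpc bdev_malloc_delete Dev_1

The -n 5 budget is consistent with the first run's result table: EE_Dev_1 reports 5.64 Fail/s over a 0.89 s window, i.e. roughly 5 failed I/Os in total.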
00:13:45.945 21:36:06 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:45.945 21:36:06 -- bdev/blockdev.sh@508 -- # rpc_cmd bdev_malloc_create -b Dev_2 128 512 00:13:45.945 21:36:06 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:45.945 21:36:06 -- common/autotest_common.sh@10 -- # set +x 00:13:45.945 Dev_2 00:13:45.945 21:36:06 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:45.945 21:36:06 -- bdev/blockdev.sh@509 -- # waitforbdev Dev_2 00:13:45.945 21:36:06 -- common/autotest_common.sh@897 -- # local bdev_name=Dev_2 00:13:45.945 21:36:06 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:13:45.945 21:36:06 -- common/autotest_common.sh@899 -- # local i 00:13:45.945 21:36:06 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:13:45.945 21:36:06 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:13:45.945 21:36:06 -- common/autotest_common.sh@902 -- # rpc_cmd bdev_wait_for_examine 00:13:45.945 21:36:06 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:45.945 21:36:06 -- common/autotest_common.sh@10 -- # set +x 00:13:45.945 21:36:06 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:45.945 21:36:06 -- common/autotest_common.sh@904 -- # rpc_cmd bdev_get_bdevs -b Dev_2 -t 2000 00:13:45.945 21:36:06 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:45.945 21:36:06 -- common/autotest_common.sh@10 -- # set +x 00:13:45.945 [ 00:13:45.945 { 00:13:45.945 "name": "Dev_2", 00:13:45.945 "aliases": [ 00:13:45.945 "09a1f646-7b87-460b-8650-5d72d024aa89" 00:13:45.945 ], 00:13:45.945 "product_name": "Malloc disk", 00:13:45.945 "block_size": 512, 00:13:45.945 "num_blocks": 262144, 00:13:45.945 "uuid": "09a1f646-7b87-460b-8650-5d72d024aa89", 00:13:45.945 "assigned_rate_limits": { 00:13:45.945 "rw_ios_per_sec": 0, 00:13:45.945 "rw_mbytes_per_sec": 0, 00:13:45.945 "r_mbytes_per_sec": 0, 00:13:45.945 "w_mbytes_per_sec": 0 00:13:45.945 }, 00:13:45.945 "claimed": false, 00:13:45.945 "zoned": false, 00:13:45.945 "supported_io_types": { 00:13:45.945 "read": true, 00:13:45.945 "write": true, 00:13:45.945 "unmap": true, 00:13:45.945 "write_zeroes": true, 00:13:45.945 "flush": true, 00:13:45.945 "reset": true, 00:13:45.945 "compare": false, 00:13:45.945 "compare_and_write": false, 00:13:45.945 "abort": true, 00:13:45.945 "nvme_admin": false, 00:13:45.945 "nvme_io": false 00:13:45.945 }, 00:13:45.945 "memory_domains": [ 00:13:45.945 { 00:13:45.945 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:45.945 "dma_device_type": 2 00:13:45.945 } 00:13:45.945 ], 00:13:45.945 "driver_specific": {} 00:13:45.945 } 00:13:45.945 ] 00:13:45.945 21:36:06 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:45.945 21:36:06 -- common/autotest_common.sh@905 -- # return 0 00:13:45.945 21:36:06 -- bdev/blockdev.sh@510 -- # rpc_cmd bdev_error_inject_error EE_Dev_1 all failure -n 5 00:13:45.945 21:36:06 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:45.945 21:36:06 -- common/autotest_common.sh@10 -- # set +x 00:13:45.945 21:36:06 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:45.945 21:36:06 -- bdev/blockdev.sh@513 -- # NOT wait 68115 00:13:45.945 21:36:06 -- bdev/blockdev.sh@512 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 1 perform_tests 00:13:45.945 21:36:06 -- common/autotest_common.sh@650 -- # local es=0 00:13:45.945 21:36:06 -- common/autotest_common.sh@652 -- # valid_exec_arg wait 68115 00:13:45.945 21:36:06 -- common/autotest_common.sh@638 -- # local arg=wait 00:13:45.945 21:36:06 -- common/autotest_common.sh@642 
-- # case "$(type -t "$arg")" in 00:13:45.945 21:36:06 -- common/autotest_common.sh@642 -- # type -t wait 00:13:45.945 21:36:06 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:45.945 21:36:06 -- common/autotest_common.sh@653 -- # wait 68115 00:13:46.203 Running I/O for 5 seconds... 00:13:46.203 task offset: 165032 on job bdev=EE_Dev_1 fails 00:13:46.203 00:13:46.203 Latency(us) 00:13:46.203 [2024-12-06T21:36:06.700Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:46.203 [2024-12-06T21:36:06.700Z] Job: EE_Dev_1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 4096) 00:13:46.203 [2024-12-06T21:36:06.700Z] Job: EE_Dev_1 ended in about 0.00 seconds with error 00:13:46.203 EE_Dev_1 : 0.00 26410.56 103.17 6002.40 0.00 411.20 162.91 741.00 00:13:46.203 [2024-12-06T21:36:06.700Z] Job: Dev_2 (Core Mask 0x2, workload: randread, depth: 16, IO size: 4096) 00:13:46.203 Dev_2 : 0.00 17429.19 68.08 0.00 0.00 660.97 169.43 1199.01 00:13:46.203 [2024-12-06T21:36:06.700Z] =================================================================================================================== 00:13:46.203 [2024-12-06T21:36:06.700Z] Total : 43839.76 171.25 6002.40 0.00 546.67 162.91 1199.01 00:13:46.203 [2024-12-06 21:36:06.511502] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:13:46.203 request: 00:13:46.203 { 00:13:46.203 "method": "perform_tests", 00:13:46.203 "req_id": 1 00:13:46.203 } 00:13:46.203 Got JSON-RPC error response 00:13:46.203 response: 00:13:46.203 { 00:13:46.203 "code": -32603, 00:13:46.203 "message": "bdevperf failed with error Operation not permitted" 00:13:46.203 } 00:13:48.133 21:36:08 -- common/autotest_common.sh@653 -- # es=255 00:13:48.133 21:36:08 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:13:48.133 21:36:08 -- common/autotest_common.sh@662 -- # es=127 00:13:48.133 21:36:08 -- common/autotest_common.sh@663 -- # case "$es" in 00:13:48.133 21:36:08 -- common/autotest_common.sh@670 -- # es=1 00:13:48.133 21:36:08 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:13:48.133 00:13:48.133 real 0m12.121s 00:13:48.133 user 0m12.440s 00:13:48.133 sys 0m0.748s 00:13:48.133 21:36:08 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:13:48.133 ************************************ 00:13:48.133 21:36:08 -- common/autotest_common.sh@10 -- # set +x 00:13:48.133 END TEST bdev_error 00:13:48.133 ************************************ 00:13:48.133 21:36:08 -- bdev/blockdev.sh@789 -- # run_test bdev_stat stat_test_suite '' 00:13:48.133 21:36:08 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:13:48.133 21:36:08 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:48.133 21:36:08 -- common/autotest_common.sh@10 -- # set +x 00:13:48.133 ************************************ 00:13:48.133 START TEST bdev_stat 00:13:48.133 ************************************ 00:13:48.133 21:36:08 -- common/autotest_common.sh@1114 -- # stat_test_suite '' 00:13:48.133 21:36:08 -- bdev/blockdev.sh@590 -- # STAT_DEV=Malloc_STAT 00:13:48.133 21:36:08 -- bdev/blockdev.sh@594 -- # STAT_PID=68173 00:13:48.133 Process Bdev IO statistics testing pid: 68173 00:13:48.133 21:36:08 -- bdev/blockdev.sh@593 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -m 0x3 -q 256 -o 4096 -w randread -t 10 -C '' 00:13:48.133 21:36:08 -- bdev/blockdev.sh@595 -- # echo 'Process Bdev IO statistics testing pid: 68173' 00:13:48.133 21:36:08 -- bdev/blockdev.sh@596 -- # trap 'cleanup; killprocess $STAT_PID; exit 1' SIGINT SIGTERM EXIT 
00:13:48.133 21:36:08 -- bdev/blockdev.sh@597 -- # waitforlisten 68173 00:13:48.133 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:48.133 21:36:08 -- common/autotest_common.sh@829 -- # '[' -z 68173 ']' 00:13:48.133 21:36:08 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:48.133 21:36:08 -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:48.133 21:36:08 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:48.133 21:36:08 -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:48.133 21:36:08 -- common/autotest_common.sh@10 -- # set +x 00:13:48.133 [2024-12-06 21:36:08.286546] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:13:48.133 [2024-12-06 21:36:08.286739] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68173 ] 00:13:48.133 [2024-12-06 21:36:08.458720] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:13:48.392 [2024-12-06 21:36:08.731022] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:48.392 [2024-12-06 21:36:08.731038] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:48.960 21:36:09 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:48.960 21:36:09 -- common/autotest_common.sh@862 -- # return 0 00:13:48.960 21:36:09 -- bdev/blockdev.sh@599 -- # rpc_cmd bdev_malloc_create -b Malloc_STAT 128 512 00:13:48.960 21:36:09 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:48.960 21:36:09 -- common/autotest_common.sh@10 -- # set +x 00:13:48.960 Malloc_STAT 00:13:48.960 21:36:09 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:48.960 21:36:09 -- bdev/blockdev.sh@600 -- # waitforbdev Malloc_STAT 00:13:48.960 21:36:09 -- common/autotest_common.sh@897 -- # local bdev_name=Malloc_STAT 00:13:48.960 21:36:09 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:13:48.960 21:36:09 -- common/autotest_common.sh@899 -- # local i 00:13:48.960 21:36:09 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:13:48.960 21:36:09 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:13:48.960 21:36:09 -- common/autotest_common.sh@902 -- # rpc_cmd bdev_wait_for_examine 00:13:48.960 21:36:09 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:48.960 21:36:09 -- common/autotest_common.sh@10 -- # set +x 00:13:48.960 21:36:09 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:48.960 21:36:09 -- common/autotest_common.sh@904 -- # rpc_cmd bdev_get_bdevs -b Malloc_STAT -t 2000 00:13:48.960 21:36:09 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:48.960 21:36:09 -- common/autotest_common.sh@10 -- # set +x 00:13:48.960 [ 00:13:48.960 { 00:13:48.960 "name": "Malloc_STAT", 00:13:48.960 "aliases": [ 00:13:48.960 "964f6394-bfa3-4cff-a49d-092f8c0d65a4" 00:13:48.960 ], 00:13:48.960 "product_name": "Malloc disk", 00:13:48.960 "block_size": 512, 00:13:48.960 "num_blocks": 262144, 00:13:48.960 "uuid": "964f6394-bfa3-4cff-a49d-092f8c0d65a4", 00:13:48.960 "assigned_rate_limits": { 00:13:48.960 "rw_ios_per_sec": 0, 00:13:48.960 "rw_mbytes_per_sec": 0, 00:13:48.960 "r_mbytes_per_sec": 0, 00:13:48.960 "w_mbytes_per_sec": 0 00:13:48.960 }, 00:13:48.961 "claimed": false, 00:13:48.961 "zoned": false, 00:13:48.961 
"supported_io_types": { 00:13:48.961 "read": true, 00:13:48.961 "write": true, 00:13:48.961 "unmap": true, 00:13:48.961 "write_zeroes": true, 00:13:48.961 "flush": true, 00:13:48.961 "reset": true, 00:13:48.961 "compare": false, 00:13:48.961 "compare_and_write": false, 00:13:48.961 "abort": true, 00:13:48.961 "nvme_admin": false, 00:13:48.961 "nvme_io": false 00:13:48.961 }, 00:13:48.961 "memory_domains": [ 00:13:48.961 { 00:13:48.961 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:48.961 "dma_device_type": 2 00:13:48.961 } 00:13:48.961 ], 00:13:48.961 "driver_specific": {} 00:13:48.961 } 00:13:48.961 ] 00:13:48.961 21:36:09 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:48.961 21:36:09 -- common/autotest_common.sh@905 -- # return 0 00:13:48.961 21:36:09 -- bdev/blockdev.sh@603 -- # sleep 2 00:13:48.961 21:36:09 -- bdev/blockdev.sh@602 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:13:49.219 Running I/O for 10 seconds... 00:13:51.124 21:36:11 -- bdev/blockdev.sh@604 -- # stat_function_test Malloc_STAT 00:13:51.124 21:36:11 -- bdev/blockdev.sh@557 -- # local bdev_name=Malloc_STAT 00:13:51.124 21:36:11 -- bdev/blockdev.sh@558 -- # local iostats 00:13:51.124 21:36:11 -- bdev/blockdev.sh@559 -- # local io_count1 00:13:51.124 21:36:11 -- bdev/blockdev.sh@560 -- # local io_count2 00:13:51.124 21:36:11 -- bdev/blockdev.sh@561 -- # local iostats_per_channel 00:13:51.124 21:36:11 -- bdev/blockdev.sh@562 -- # local io_count_per_channel1 00:13:51.124 21:36:11 -- bdev/blockdev.sh@563 -- # local io_count_per_channel2 00:13:51.124 21:36:11 -- bdev/blockdev.sh@564 -- # local io_count_per_channel_all=0 00:13:51.124 21:36:11 -- bdev/blockdev.sh@566 -- # rpc_cmd bdev_get_iostat -b Malloc_STAT 00:13:51.124 21:36:11 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:51.124 21:36:11 -- common/autotest_common.sh@10 -- # set +x 00:13:51.124 21:36:11 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:51.124 21:36:11 -- bdev/blockdev.sh@566 -- # iostats='{ 00:13:51.124 "tick_rate": 2200000000, 00:13:51.124 "ticks": 1762937828596, 00:13:51.124 "bdevs": [ 00:13:51.124 { 00:13:51.124 "name": "Malloc_STAT", 00:13:51.124 "bytes_read": 823169536, 00:13:51.124 "num_read_ops": 200963, 00:13:51.124 "bytes_written": 0, 00:13:51.124 "num_write_ops": 0, 00:13:51.124 "bytes_unmapped": 0, 00:13:51.124 "num_unmap_ops": 0, 00:13:51.124 "bytes_copied": 0, 00:13:51.124 "num_copy_ops": 0, 00:13:51.124 "read_latency_ticks": 2116718675004, 00:13:51.124 "max_read_latency_ticks": 12771384, 00:13:51.124 "min_read_latency_ticks": 441360, 00:13:51.124 "write_latency_ticks": 0, 00:13:51.124 "max_write_latency_ticks": 0, 00:13:51.124 "min_write_latency_ticks": 0, 00:13:51.124 "unmap_latency_ticks": 0, 00:13:51.124 "max_unmap_latency_ticks": 0, 00:13:51.124 "min_unmap_latency_ticks": 0, 00:13:51.124 "copy_latency_ticks": 0, 00:13:51.124 "max_copy_latency_ticks": 0, 00:13:51.124 "min_copy_latency_ticks": 0, 00:13:51.124 "io_error": {} 00:13:51.124 } 00:13:51.124 ] 00:13:51.124 }' 00:13:51.124 21:36:11 -- bdev/blockdev.sh@567 -- # jq -r '.bdevs[0].num_read_ops' 00:13:51.124 21:36:11 -- bdev/blockdev.sh@567 -- # io_count1=200963 00:13:51.124 21:36:11 -- bdev/blockdev.sh@569 -- # rpc_cmd bdev_get_iostat -b Malloc_STAT -c 00:13:51.124 21:36:11 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:51.124 21:36:11 -- common/autotest_common.sh@10 -- # set +x 00:13:51.124 21:36:11 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:51.124 21:36:11 -- bdev/blockdev.sh@569 -- # 
iostats_per_channel='{ 00:13:51.124 "tick_rate": 2200000000, 00:13:51.124 "ticks": 1763008171540, 00:13:51.124 "name": "Malloc_STAT", 00:13:51.124 "channels": [ 00:13:51.124 { 00:13:51.124 "thread_id": 2, 00:13:51.124 "bytes_read": 416284672, 00:13:51.124 "num_read_ops": 101632, 00:13:51.124 "bytes_written": 0, 00:13:51.124 "num_write_ops": 0, 00:13:51.124 "bytes_unmapped": 0, 00:13:51.124 "num_unmap_ops": 0, 00:13:51.124 "bytes_copied": 0, 00:13:51.124 "num_copy_ops": 0, 00:13:51.124 "read_latency_ticks": 1076094312456, 00:13:51.124 "max_read_latency_ticks": 13567768, 00:13:51.124 "min_read_latency_ticks": 7791071, 00:13:51.124 "write_latency_ticks": 0, 00:13:51.124 "max_write_latency_ticks": 0, 00:13:51.124 "min_write_latency_ticks": 0, 00:13:51.124 "unmap_latency_ticks": 0, 00:13:51.124 "max_unmap_latency_ticks": 0, 00:13:51.124 "min_unmap_latency_ticks": 0, 00:13:51.124 "copy_latency_ticks": 0, 00:13:51.124 "max_copy_latency_ticks": 0, 00:13:51.124 "min_copy_latency_ticks": 0 00:13:51.124 }, 00:13:51.124 { 00:13:51.124 "thread_id": 3, 00:13:51.124 "bytes_read": 420478976, 00:13:51.124 "num_read_ops": 102656, 00:13:51.124 "bytes_written": 0, 00:13:51.124 "num_write_ops": 0, 00:13:51.124 "bytes_unmapped": 0, 00:13:51.124 "num_unmap_ops": 0, 00:13:51.124 "bytes_copied": 0, 00:13:51.124 "num_copy_ops": 0, 00:13:51.124 "read_latency_ticks": 1077717024910, 00:13:51.124 "max_read_latency_ticks": 12771384, 00:13:51.124 "min_read_latency_ticks": 7833748, 00:13:51.124 "write_latency_ticks": 0, 00:13:51.124 "max_write_latency_ticks": 0, 00:13:51.124 "min_write_latency_ticks": 0, 00:13:51.124 "unmap_latency_ticks": 0, 00:13:51.124 "max_unmap_latency_ticks": 0, 00:13:51.124 "min_unmap_latency_ticks": 0, 00:13:51.124 "copy_latency_ticks": 0, 00:13:51.124 "max_copy_latency_ticks": 0, 00:13:51.124 "min_copy_latency_ticks": 0 00:13:51.124 } 00:13:51.124 ] 00:13:51.124 }' 00:13:51.124 21:36:11 -- bdev/blockdev.sh@570 -- # jq -r '.channels[0].num_read_ops' 00:13:51.124 21:36:11 -- bdev/blockdev.sh@570 -- # io_count_per_channel1=101632 00:13:51.124 21:36:11 -- bdev/blockdev.sh@571 -- # io_count_per_channel_all=101632 00:13:51.124 21:36:11 -- bdev/blockdev.sh@572 -- # jq -r '.channels[1].num_read_ops' 00:13:51.124 21:36:11 -- bdev/blockdev.sh@572 -- # io_count_per_channel2=102656 00:13:51.124 21:36:11 -- bdev/blockdev.sh@573 -- # io_count_per_channel_all=204288 00:13:51.124 21:36:11 -- bdev/blockdev.sh@575 -- # rpc_cmd bdev_get_iostat -b Malloc_STAT 00:13:51.124 21:36:11 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:51.124 21:36:11 -- common/autotest_common.sh@10 -- # set +x 00:13:51.124 21:36:11 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:51.124 21:36:11 -- bdev/blockdev.sh@575 -- # iostats='{ 00:13:51.124 "tick_rate": 2200000000, 00:13:51.124 "ticks": 1763110012266, 00:13:51.124 "bdevs": [ 00:13:51.124 { 00:13:51.124 "name": "Malloc_STAT", 00:13:51.124 "bytes_read": 855675392, 00:13:51.124 "num_read_ops": 208899, 00:13:51.124 "bytes_written": 0, 00:13:51.124 "num_write_ops": 0, 00:13:51.124 "bytes_unmapped": 0, 00:13:51.124 "num_unmap_ops": 0, 00:13:51.124 "bytes_copied": 0, 00:13:51.124 "num_copy_ops": 0, 00:13:51.124 "read_latency_ticks": 2205652135136, 00:13:51.124 "max_read_latency_ticks": 14234464, 00:13:51.124 "min_read_latency_ticks": 441360, 00:13:51.124 "write_latency_ticks": 0, 00:13:51.124 "max_write_latency_ticks": 0, 00:13:51.124 "min_write_latency_ticks": 0, 00:13:51.124 "unmap_latency_ticks": 0, 00:13:51.124 "max_unmap_latency_ticks": 0, 00:13:51.124 
"min_unmap_latency_ticks": 0, 00:13:51.124 "copy_latency_ticks": 0, 00:13:51.124 "max_copy_latency_ticks": 0, 00:13:51.124 "min_copy_latency_ticks": 0, 00:13:51.124 "io_error": {} 00:13:51.124 } 00:13:51.124 ] 00:13:51.124 }' 00:13:51.124 21:36:11 -- bdev/blockdev.sh@576 -- # jq -r '.bdevs[0].num_read_ops' 00:13:51.124 21:36:11 -- bdev/blockdev.sh@576 -- # io_count2=208899 00:13:51.124 21:36:11 -- bdev/blockdev.sh@581 -- # '[' 204288 -lt 200963 ']' 00:13:51.124 21:36:11 -- bdev/blockdev.sh@581 -- # '[' 204288 -gt 208899 ']' 00:13:51.124 21:36:11 -- bdev/blockdev.sh@606 -- # rpc_cmd bdev_malloc_delete Malloc_STAT 00:13:51.124 21:36:11 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:51.124 21:36:11 -- common/autotest_common.sh@10 -- # set +x 00:13:51.124 00:13:51.124 Latency(us) 00:13:51.124 [2024-12-06T21:36:11.621Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:51.124 [2024-12-06T21:36:11.621Z] Job: Malloc_STAT (Core Mask 0x1, workload: randread, depth: 256, IO size: 4096) 00:13:51.124 Malloc_STAT : 2.00 52945.09 206.82 0.00 0.00 4823.36 1444.77 6494.02 00:13:51.124 [2024-12-06T21:36:11.621Z] Job: Malloc_STAT (Core Mask 0x2, workload: randread, depth: 256, IO size: 4096) 00:13:51.124 Malloc_STAT : 2.00 53443.79 208.76 0.00 0.00 4778.60 1273.48 5808.87 00:13:51.124 [2024-12-06T21:36:11.622Z] =================================================================================================================== 00:13:51.125 [2024-12-06T21:36:11.622Z] Total : 106388.88 415.58 0.00 0.00 4800.87 1273.48 6494.02 00:13:51.125 0 00:13:51.382 21:36:11 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:51.382 21:36:11 -- bdev/blockdev.sh@607 -- # killprocess 68173 00:13:51.382 21:36:11 -- common/autotest_common.sh@936 -- # '[' -z 68173 ']' 00:13:51.382 21:36:11 -- common/autotest_common.sh@940 -- # kill -0 68173 00:13:51.382 21:36:11 -- common/autotest_common.sh@941 -- # uname 00:13:51.382 21:36:11 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:13:51.382 21:36:11 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 68173 00:13:51.382 killing process with pid 68173 00:13:51.382 Received shutdown signal, test time was about 2.139344 seconds 00:13:51.382 00:13:51.382 Latency(us) 00:13:51.382 [2024-12-06T21:36:11.879Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:51.382 [2024-12-06T21:36:11.879Z] =================================================================================================================== 00:13:51.382 [2024-12-06T21:36:11.879Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:13:51.382 21:36:11 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:13:51.382 21:36:11 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:13:51.382 21:36:11 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 68173' 00:13:51.382 21:36:11 -- common/autotest_common.sh@955 -- # kill 68173 00:13:51.382 21:36:11 -- common/autotest_common.sh@960 -- # wait 68173 00:13:52.755 ************************************ 00:13:52.755 END TEST bdev_stat 00:13:52.755 ************************************ 00:13:52.755 21:36:12 -- bdev/blockdev.sh@608 -- # trap - SIGINT SIGTERM EXIT 00:13:52.755 00:13:52.755 real 0m4.742s 00:13:52.755 user 0m8.830s 00:13:52.755 sys 0m0.389s 00:13:52.755 21:36:12 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:13:52.755 21:36:12 -- common/autotest_common.sh@10 -- # set +x 00:13:52.755 21:36:13 -- bdev/blockdev.sh@792 -- # [[ bdev == gpt ]] 
00:13:52.755 21:36:13 -- bdev/blockdev.sh@796 -- # [[ bdev == crypto_sw ]] 00:13:52.755 21:36:13 -- bdev/blockdev.sh@808 -- # trap - SIGINT SIGTERM EXIT 00:13:52.755 21:36:13 -- bdev/blockdev.sh@809 -- # cleanup 00:13:52.755 21:36:13 -- bdev/blockdev.sh@21 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:13:52.755 21:36:13 -- bdev/blockdev.sh@22 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:13:52.755 21:36:13 -- bdev/blockdev.sh@24 -- # [[ bdev == rbd ]] 00:13:52.755 21:36:13 -- bdev/blockdev.sh@28 -- # [[ bdev == daos ]] 00:13:52.755 21:36:13 -- bdev/blockdev.sh@32 -- # [[ bdev = \g\p\t ]] 00:13:52.755 21:36:13 -- bdev/blockdev.sh@38 -- # [[ bdev == xnvme ]] 00:13:52.755 00:13:52.755 real 2m22.219s 00:13:52.755 user 5m50.805s 00:13:52.755 sys 0m22.725s 00:13:52.755 21:36:13 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:13:52.755 ************************************ 00:13:52.755 END TEST blockdev_general 00:13:52.755 ************************************ 00:13:52.755 21:36:13 -- common/autotest_common.sh@10 -- # set +x 00:13:52.755 21:36:13 -- spdk/autotest.sh@183 -- # run_test bdev_raid /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh 00:13:52.755 21:36:13 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:13:52.755 21:36:13 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:52.755 21:36:13 -- common/autotest_common.sh@10 -- # set +x 00:13:52.755 ************************************ 00:13:52.755 START TEST bdev_raid 00:13:52.755 ************************************ 00:13:52.755 21:36:13 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh 00:13:52.755 * Looking for test storage... 00:13:52.755 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:13:52.755 21:36:13 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:13:52.755 21:36:13 -- common/autotest_common.sh@1690 -- # lcov --version 00:13:52.755 21:36:13 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:13:52.755 21:36:13 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:13:52.755 21:36:13 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:13:52.755 21:36:13 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:13:52.755 21:36:13 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:13:52.755 21:36:13 -- scripts/common.sh@335 -- # IFS=.-: 00:13:52.755 21:36:13 -- scripts/common.sh@335 -- # read -ra ver1 00:13:52.755 21:36:13 -- scripts/common.sh@336 -- # IFS=.-: 00:13:52.755 21:36:13 -- scripts/common.sh@336 -- # read -ra ver2 00:13:52.755 21:36:13 -- scripts/common.sh@337 -- # local 'op=<' 00:13:52.755 21:36:13 -- scripts/common.sh@339 -- # ver1_l=2 00:13:52.755 21:36:13 -- scripts/common.sh@340 -- # ver2_l=1 00:13:52.755 21:36:13 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:13:52.755 21:36:13 -- scripts/common.sh@343 -- # case "$op" in 00:13:52.755 21:36:13 -- scripts/common.sh@344 -- # : 1 00:13:52.755 21:36:13 -- scripts/common.sh@363 -- # (( v = 0 )) 00:13:52.755 21:36:13 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:52.755 21:36:13 -- scripts/common.sh@364 -- # decimal 1 00:13:52.755 21:36:13 -- scripts/common.sh@352 -- # local d=1 00:13:52.755 21:36:13 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:52.755 21:36:13 -- scripts/common.sh@354 -- # echo 1 00:13:52.755 21:36:13 -- scripts/common.sh@364 -- # ver1[v]=1 00:13:52.755 21:36:13 -- scripts/common.sh@365 -- # decimal 2 00:13:52.755 21:36:13 -- scripts/common.sh@352 -- # local d=2 00:13:52.755 21:36:13 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:52.755 21:36:13 -- scripts/common.sh@354 -- # echo 2 00:13:52.755 21:36:13 -- scripts/common.sh@365 -- # ver2[v]=2 00:13:52.755 21:36:13 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:13:52.755 21:36:13 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:13:52.755 21:36:13 -- scripts/common.sh@367 -- # return 0 00:13:52.755 21:36:13 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:52.755 21:36:13 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:13:52.755 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:52.755 --rc genhtml_branch_coverage=1 00:13:52.755 --rc genhtml_function_coverage=1 00:13:52.755 --rc genhtml_legend=1 00:13:52.755 --rc geninfo_all_blocks=1 00:13:52.755 --rc geninfo_unexecuted_blocks=1 00:13:52.755 00:13:52.755 ' 00:13:52.755 21:36:13 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:13:52.755 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:52.755 --rc genhtml_branch_coverage=1 00:13:52.755 --rc genhtml_function_coverage=1 00:13:52.755 --rc genhtml_legend=1 00:13:52.755 --rc geninfo_all_blocks=1 00:13:52.755 --rc geninfo_unexecuted_blocks=1 00:13:52.755 00:13:52.755 ' 00:13:52.755 21:36:13 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:13:52.755 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:52.755 --rc genhtml_branch_coverage=1 00:13:52.755 --rc genhtml_function_coverage=1 00:13:52.755 --rc genhtml_legend=1 00:13:52.755 --rc geninfo_all_blocks=1 00:13:52.755 --rc geninfo_unexecuted_blocks=1 00:13:52.755 00:13:52.755 ' 00:13:52.755 21:36:13 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:13:52.755 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:52.755 --rc genhtml_branch_coverage=1 00:13:52.755 --rc genhtml_function_coverage=1 00:13:52.755 --rc genhtml_legend=1 00:13:52.755 --rc geninfo_all_blocks=1 00:13:52.755 --rc geninfo_unexecuted_blocks=1 00:13:52.755 00:13:52.755 ' 00:13:52.755 21:36:13 -- bdev/bdev_raid.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:13:52.755 21:36:13 -- bdev/nbd_common.sh@6 -- # set -e 00:13:52.755 21:36:13 -- bdev/bdev_raid.sh@14 -- # rpc_py='/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock' 00:13:52.755 21:36:13 -- bdev/bdev_raid.sh@714 -- # trap 'on_error_exit;' ERR 00:13:52.755 21:36:13 -- bdev/bdev_raid.sh@716 -- # uname -s 00:13:52.755 21:36:13 -- bdev/bdev_raid.sh@716 -- # '[' Linux = Linux ']' 00:13:52.755 21:36:13 -- bdev/bdev_raid.sh@716 -- # modprobe -n nbd 00:13:52.755 21:36:13 -- bdev/bdev_raid.sh@717 -- # has_nbd=true 00:13:52.755 21:36:13 -- bdev/bdev_raid.sh@718 -- # modprobe nbd 00:13:53.014 21:36:13 -- bdev/bdev_raid.sh@719 -- # run_test raid_function_test_raid0 raid_function_test raid0 00:13:53.014 21:36:13 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:13:53.014 21:36:13 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:53.014 21:36:13 -- 
common/autotest_common.sh@10 -- # set +x 00:13:53.014 ************************************ 00:13:53.014 START TEST raid_function_test_raid0 00:13:53.014 ************************************ 00:13:53.014 21:36:13 -- common/autotest_common.sh@1114 -- # raid_function_test raid0 00:13:53.014 21:36:13 -- bdev/bdev_raid.sh@81 -- # local raid_level=raid0 00:13:53.014 21:36:13 -- bdev/bdev_raid.sh@82 -- # local nbd=/dev/nbd0 00:13:53.014 21:36:13 -- bdev/bdev_raid.sh@83 -- # local raid_bdev 00:13:53.014 21:36:13 -- bdev/bdev_raid.sh@86 -- # raid_pid=68322 00:13:53.014 Process raid pid: 68322 00:13:53.014 21:36:13 -- bdev/bdev_raid.sh@87 -- # echo 'Process raid pid: 68322' 00:13:53.014 21:36:13 -- bdev/bdev_raid.sh@85 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:13:53.014 21:36:13 -- bdev/bdev_raid.sh@88 -- # waitforlisten 68322 /var/tmp/spdk-raid.sock 00:13:53.014 21:36:13 -- common/autotest_common.sh@829 -- # '[' -z 68322 ']' 00:13:53.014 21:36:13 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:13:53.014 21:36:13 -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:53.014 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:13:53.014 21:36:13 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:13:53.014 21:36:13 -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:53.014 21:36:13 -- common/autotest_common.sh@10 -- # set +x 00:13:53.014 [2024-12-06 21:36:13.323483] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:13:53.014 [2024-12-06 21:36:13.323652] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:53.014 [2024-12-06 21:36:13.497902] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:53.273 [2024-12-06 21:36:13.675961] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:53.531 [2024-12-06 21:36:13.852171] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:53.789 21:36:14 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:53.789 21:36:14 -- common/autotest_common.sh@862 -- # return 0 00:13:53.789 21:36:14 -- bdev/bdev_raid.sh@90 -- # configure_raid_bdev raid0 00:13:53.789 21:36:14 -- bdev/bdev_raid.sh@67 -- # local raid_level=raid0 00:13:53.789 21:36:14 -- bdev/bdev_raid.sh@68 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/bdev/rpcs.txt 00:13:53.789 21:36:14 -- bdev/bdev_raid.sh@70 -- # cat 00:13:53.789 21:36:14 -- bdev/bdev_raid.sh@75 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 00:13:54.357 [2024-12-06 21:36:14.606503] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:13:54.357 [2024-12-06 21:36:14.608717] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:13:54.357 [2024-12-06 21:36:14.608825] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x516000006f80 00:13:54.357 [2024-12-06 21:36:14.608869] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:13:54.357 [2024-12-06 21:36:14.609055] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d0000055f0 00:13:54.357 [2024-12-06 21:36:14.609434] 
bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x516000006f80 00:13:54.357 [2024-12-06 21:36:14.609473] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid, raid_bdev 0x516000006f80 00:13:54.357 [2024-12-06 21:36:14.609647] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:54.357 Base_1 00:13:54.357 Base_2 00:13:54.357 21:36:14 -- bdev/bdev_raid.sh@77 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/bdev/rpcs.txt 00:13:54.357 21:36:14 -- bdev/bdev_raid.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs online 00:13:54.357 21:36:14 -- bdev/bdev_raid.sh@91 -- # jq -r '.[0]["name"] | select(.)' 00:13:54.616 21:36:14 -- bdev/bdev_raid.sh@91 -- # raid_bdev=raid 00:13:54.616 21:36:14 -- bdev/bdev_raid.sh@92 -- # '[' raid = '' ']' 00:13:54.616 21:36:14 -- bdev/bdev_raid.sh@97 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid /dev/nbd0 00:13:54.616 21:36:14 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:13:54.616 21:36:14 -- bdev/nbd_common.sh@10 -- # bdev_list=('raid') 00:13:54.616 21:36:14 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:54.616 21:36:14 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:13:54.616 21:36:14 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:54.616 21:36:14 -- bdev/nbd_common.sh@12 -- # local i 00:13:54.616 21:36:14 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:54.617 21:36:14 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:54.617 21:36:14 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid /dev/nbd0 00:13:54.880 [2024-12-06 21:36:15.134626] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000005790 00:13:54.880 /dev/nbd0 00:13:54.880 21:36:15 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:13:54.880 21:36:15 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:13:54.880 21:36:15 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:13:54.880 21:36:15 -- common/autotest_common.sh@867 -- # local i 00:13:54.880 21:36:15 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:13:54.880 21:36:15 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:13:54.880 21:36:15 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:13:54.880 21:36:15 -- common/autotest_common.sh@871 -- # break 00:13:54.880 21:36:15 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:13:54.880 21:36:15 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:13:54.880 21:36:15 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:54.880 1+0 records in 00:13:54.881 1+0 records out 00:13:54.881 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000243598 s, 16.8 MB/s 00:13:54.881 21:36:15 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:54.881 21:36:15 -- common/autotest_common.sh@884 -- # size=4096 00:13:54.881 21:36:15 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:54.881 21:36:15 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:13:54.881 21:36:15 -- common/autotest_common.sh@887 -- # return 0 00:13:54.881 21:36:15 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:54.881 21:36:15 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:54.881 21:36:15 -- bdev/bdev_raid.sh@98 -- # nbd_get_count /var/tmp/spdk-raid.sock 00:13:54.881 21:36:15 -- 
bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:13:54.881 21:36:15 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_get_disks 00:13:55.139 21:36:15 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:13:55.139 { 00:13:55.139 "nbd_device": "/dev/nbd0", 00:13:55.139 "bdev_name": "raid" 00:13:55.139 } 00:13:55.139 ]' 00:13:55.139 21:36:15 -- bdev/nbd_common.sh@64 -- # echo '[ 00:13:55.139 { 00:13:55.139 "nbd_device": "/dev/nbd0", 00:13:55.139 "bdev_name": "raid" 00:13:55.139 } 00:13:55.139 ]' 00:13:55.139 21:36:15 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:13:55.139 21:36:15 -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:13:55.139 21:36:15 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:13:55.139 21:36:15 -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:13:55.139 21:36:15 -- bdev/nbd_common.sh@65 -- # count=1 00:13:55.139 21:36:15 -- bdev/nbd_common.sh@66 -- # echo 1 00:13:55.139 21:36:15 -- bdev/bdev_raid.sh@98 -- # count=1 00:13:55.139 21:36:15 -- bdev/bdev_raid.sh@99 -- # '[' 1 -ne 1 ']' 00:13:55.139 21:36:15 -- bdev/bdev_raid.sh@103 -- # raid_unmap_data_verify /dev/nbd0 /var/tmp/spdk-raid.sock 00:13:55.139 21:36:15 -- bdev/bdev_raid.sh@17 -- # hash blkdiscard 00:13:55.139 21:36:15 -- bdev/bdev_raid.sh@18 -- # local nbd=/dev/nbd0 00:13:55.139 21:36:15 -- bdev/bdev_raid.sh@19 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:13:55.139 21:36:15 -- bdev/bdev_raid.sh@20 -- # local blksize 00:13:55.139 21:36:15 -- bdev/bdev_raid.sh@21 -- # lsblk -o LOG-SEC /dev/nbd0 00:13:55.139 21:36:15 -- bdev/bdev_raid.sh@21 -- # grep -v LOG-SEC 00:13:55.139 21:36:15 -- bdev/bdev_raid.sh@21 -- # cut -d ' ' -f 5 00:13:55.139 21:36:15 -- bdev/bdev_raid.sh@21 -- # blksize=512 00:13:55.139 21:36:15 -- bdev/bdev_raid.sh@22 -- # local rw_blk_num=4096 00:13:55.139 21:36:15 -- bdev/bdev_raid.sh@23 -- # local rw_len=2097152 00:13:55.139 21:36:15 -- bdev/bdev_raid.sh@24 -- # unmap_blk_offs=('0' '1028' '321') 00:13:55.139 21:36:15 -- bdev/bdev_raid.sh@24 -- # local unmap_blk_offs 00:13:55.139 21:36:15 -- bdev/bdev_raid.sh@25 -- # unmap_blk_nums=('128' '2035' '456') 00:13:55.139 21:36:15 -- bdev/bdev_raid.sh@25 -- # local unmap_blk_nums 00:13:55.139 21:36:15 -- bdev/bdev_raid.sh@26 -- # local unmap_off 00:13:55.139 21:36:15 -- bdev/bdev_raid.sh@27 -- # local unmap_len 00:13:55.139 21:36:15 -- bdev/bdev_raid.sh@30 -- # dd if=/dev/urandom of=/raidrandtest bs=512 count=4096 00:13:55.139 4096+0 records in 00:13:55.139 4096+0 records out 00:13:55.139 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.0205822 s, 102 MB/s 00:13:55.139 21:36:15 -- bdev/bdev_raid.sh@31 -- # dd if=/raidrandtest of=/dev/nbd0 bs=512 count=4096 oflag=direct 00:13:55.397 4096+0 records in 00:13:55.397 4096+0 records out 00:13:55.397 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.309574 s, 6.8 MB/s 00:13:55.397 21:36:15 -- bdev/bdev_raid.sh@32 -- # blockdev --flushbufs /dev/nbd0 00:13:55.397 21:36:15 -- bdev/bdev_raid.sh@35 -- # cmp -b -n 2097152 /raidrandtest /dev/nbd0 00:13:55.397 21:36:15 -- bdev/bdev_raid.sh@37 -- # (( i = 0 )) 00:13:55.397 21:36:15 -- bdev/bdev_raid.sh@37 -- # (( i < 3 )) 00:13:55.397 21:36:15 -- bdev/bdev_raid.sh@38 -- # unmap_off=0 00:13:55.397 21:36:15 -- bdev/bdev_raid.sh@39 -- # unmap_len=65536 00:13:55.397 21:36:15 -- bdev/bdev_raid.sh@42 -- # dd if=/dev/zero of=/raidrandtest bs=512 seek=0 count=128 conv=notrunc 00:13:55.397 128+0 records in 00:13:55.397 128+0 records out 00:13:55.397 65536 bytes (66 kB, 64 KiB) copied, 0.00065506 s, 
100 MB/s 00:13:55.397 21:36:15 -- bdev/bdev_raid.sh@45 -- # blkdiscard -o 0 -l 65536 /dev/nbd0 00:13:55.397 21:36:15 -- bdev/bdev_raid.sh@46 -- # blockdev --flushbufs /dev/nbd0 00:13:55.397 21:36:15 -- bdev/bdev_raid.sh@49 -- # cmp -b -n 2097152 /raidrandtest /dev/nbd0 00:13:55.397 21:36:15 -- bdev/bdev_raid.sh@37 -- # (( i++ )) 00:13:55.397 21:36:15 -- bdev/bdev_raid.sh@37 -- # (( i < 3 )) 00:13:55.397 21:36:15 -- bdev/bdev_raid.sh@38 -- # unmap_off=526336 00:13:55.397 21:36:15 -- bdev/bdev_raid.sh@39 -- # unmap_len=1041920 00:13:55.398 21:36:15 -- bdev/bdev_raid.sh@42 -- # dd if=/dev/zero of=/raidrandtest bs=512 seek=1028 count=2035 conv=notrunc 00:13:55.398 2035+0 records in 00:13:55.398 2035+0 records out 00:13:55.398 1041920 bytes (1.0 MB, 1018 KiB) copied, 0.00546465 s, 191 MB/s 00:13:55.398 21:36:15 -- bdev/bdev_raid.sh@45 -- # blkdiscard -o 526336 -l 1041920 /dev/nbd0 00:13:55.398 21:36:15 -- bdev/bdev_raid.sh@46 -- # blockdev --flushbufs /dev/nbd0 00:13:55.398 21:36:15 -- bdev/bdev_raid.sh@49 -- # cmp -b -n 2097152 /raidrandtest /dev/nbd0 00:13:55.398 21:36:15 -- bdev/bdev_raid.sh@37 -- # (( i++ )) 00:13:55.398 21:36:15 -- bdev/bdev_raid.sh@37 -- # (( i < 3 )) 00:13:55.398 21:36:15 -- bdev/bdev_raid.sh@38 -- # unmap_off=164352 00:13:55.398 21:36:15 -- bdev/bdev_raid.sh@39 -- # unmap_len=233472 00:13:55.398 21:36:15 -- bdev/bdev_raid.sh@42 -- # dd if=/dev/zero of=/raidrandtest bs=512 seek=321 count=456 conv=notrunc 00:13:55.398 456+0 records in 00:13:55.398 456+0 records out 00:13:55.398 233472 bytes (233 kB, 228 KiB) copied, 0.00129439 s, 180 MB/s 00:13:55.398 21:36:15 -- bdev/bdev_raid.sh@45 -- # blkdiscard -o 164352 -l 233472 /dev/nbd0 00:13:55.398 21:36:15 -- bdev/bdev_raid.sh@46 -- # blockdev --flushbufs /dev/nbd0 00:13:55.398 21:36:15 -- bdev/bdev_raid.sh@49 -- # cmp -b -n 2097152 /raidrandtest /dev/nbd0 00:13:55.398 21:36:15 -- bdev/bdev_raid.sh@37 -- # (( i++ )) 00:13:55.398 21:36:15 -- bdev/bdev_raid.sh@37 -- # (( i < 3 )) 00:13:55.398 21:36:15 -- bdev/bdev_raid.sh@53 -- # return 0 00:13:55.398 21:36:15 -- bdev/bdev_raid.sh@105 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:13:55.398 21:36:15 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:13:55.398 21:36:15 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:13:55.398 21:36:15 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:55.398 21:36:15 -- bdev/nbd_common.sh@51 -- # local i 00:13:55.398 21:36:15 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:55.398 21:36:15 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:13:55.656 [2024-12-06 21:36:16.109654] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:55.656 21:36:16 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:55.656 21:36:16 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:55.656 21:36:16 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:55.656 21:36:16 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:55.656 21:36:16 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:55.656 21:36:16 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:55.656 21:36:16 -- bdev/nbd_common.sh@41 -- # break 00:13:55.656 21:36:16 -- bdev/nbd_common.sh@45 -- # return 0 00:13:55.656 21:36:16 -- bdev/bdev_raid.sh@106 -- # nbd_get_count /var/tmp/spdk-raid.sock 00:13:55.656 21:36:16 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:13:55.656 21:36:16 -- bdev/nbd_common.sh@63 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_get_disks 00:13:55.915 21:36:16 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:13:55.915 21:36:16 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:13:55.915 21:36:16 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:13:55.915 21:36:16 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:13:55.915 21:36:16 -- bdev/nbd_common.sh@65 -- # echo '' 00:13:55.915 21:36:16 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:13:55.915 21:36:16 -- bdev/nbd_common.sh@65 -- # true 00:13:55.915 21:36:16 -- bdev/nbd_common.sh@65 -- # count=0 00:13:55.915 21:36:16 -- bdev/nbd_common.sh@66 -- # echo 0 00:13:55.915 21:36:16 -- bdev/bdev_raid.sh@106 -- # count=0 00:13:55.915 21:36:16 -- bdev/bdev_raid.sh@107 -- # '[' 0 -ne 0 ']' 00:13:55.915 21:36:16 -- bdev/bdev_raid.sh@111 -- # killprocess 68322 00:13:55.915 21:36:16 -- common/autotest_common.sh@936 -- # '[' -z 68322 ']' 00:13:55.915 21:36:16 -- common/autotest_common.sh@940 -- # kill -0 68322 00:13:55.915 21:36:16 -- common/autotest_common.sh@941 -- # uname 00:13:55.915 21:36:16 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:13:55.915 21:36:16 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 68322 00:13:56.175 21:36:16 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:13:56.175 killing process with pid 68322 00:13:56.175 21:36:16 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:13:56.175 21:36:16 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 68322' 00:13:56.175 21:36:16 -- common/autotest_common.sh@955 -- # kill 68322 00:13:56.175 [2024-12-06 21:36:16.425896] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:56.175 21:36:16 -- common/autotest_common.sh@960 -- # wait 68322 00:13:56.175 [2024-12-06 21:36:16.426016] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:56.175 [2024-12-06 21:36:16.426082] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:56.175 [2024-12-06 21:36:16.426101] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000006f80 name raid, state offline 00:13:56.175 [2024-12-06 21:36:16.596628] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:57.550 21:36:17 -- bdev/bdev_raid.sh@113 -- # return 0 00:13:57.550 00:13:57.550 real 0m4.489s 00:13:57.550 user 0m5.721s 00:13:57.550 sys 0m0.940s 00:13:57.550 21:36:17 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:13:57.550 21:36:17 -- common/autotest_common.sh@10 -- # set +x 00:13:57.550 ************************************ 00:13:57.550 END TEST raid_function_test_raid0 00:13:57.550 ************************************ 00:13:57.550 21:36:17 -- bdev/bdev_raid.sh@720 -- # run_test raid_function_test_concat raid_function_test concat 00:13:57.550 21:36:17 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:13:57.550 21:36:17 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:57.550 21:36:17 -- common/autotest_common.sh@10 -- # set +x 00:13:57.550 ************************************ 00:13:57.550 START TEST raid_function_test_concat 00:13:57.550 ************************************ 00:13:57.550 21:36:17 -- common/autotest_common.sh@1114 -- # raid_function_test concat 00:13:57.550 21:36:17 -- bdev/bdev_raid.sh@81 -- # local raid_level=concat 00:13:57.550 21:36:17 -- bdev/bdev_raid.sh@82 -- # local nbd=/dev/nbd0 00:13:57.550 21:36:17 -- bdev/bdev_raid.sh@83 -- # local 
raid_bdev 00:13:57.550 21:36:17 -- bdev/bdev_raid.sh@86 -- # raid_pid=68466 00:13:57.550 Process raid pid: 68466 00:13:57.550 21:36:17 -- bdev/bdev_raid.sh@87 -- # echo 'Process raid pid: 68466' 00:13:57.550 21:36:17 -- bdev/bdev_raid.sh@88 -- # waitforlisten 68466 /var/tmp/spdk-raid.sock 00:13:57.550 21:36:17 -- bdev/bdev_raid.sh@85 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:13:57.550 21:36:17 -- common/autotest_common.sh@829 -- # '[' -z 68466 ']' 00:13:57.550 21:36:17 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:13:57.550 21:36:17 -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:57.550 21:36:17 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:13:57.550 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:13:57.550 21:36:17 -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:57.550 21:36:17 -- common/autotest_common.sh@10 -- # set +x 00:13:57.550 [2024-12-06 21:36:17.857328] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:13:57.550 [2024-12-06 21:36:17.857498] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:57.550 [2024-12-06 21:36:18.013039] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:57.807 [2024-12-06 21:36:18.190244] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:58.065 [2024-12-06 21:36:18.364898] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:58.322 21:36:18 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:58.322 21:36:18 -- common/autotest_common.sh@862 -- # return 0 00:13:58.322 21:36:18 -- bdev/bdev_raid.sh@90 -- # configure_raid_bdev concat 00:13:58.322 21:36:18 -- bdev/bdev_raid.sh@67 -- # local raid_level=concat 00:13:58.322 21:36:18 -- bdev/bdev_raid.sh@68 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/bdev/rpcs.txt 00:13:58.322 21:36:18 -- bdev/bdev_raid.sh@70 -- # cat 00:13:58.322 21:36:18 -- bdev/bdev_raid.sh@75 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 00:13:58.590 [2024-12-06 21:36:19.060405] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:13:58.590 [2024-12-06 21:36:19.062395] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:13:58.590 [2024-12-06 21:36:19.062504] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x516000006f80 00:13:58.590 [2024-12-06 21:36:19.062525] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:13:58.590 [2024-12-06 21:36:19.062649] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d0000055f0 00:13:58.590 [2024-12-06 21:36:19.063040] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x516000006f80 00:13:58.590 [2024-12-06 21:36:19.063067] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid, raid_bdev 0x516000006f80 00:13:58.590 [2024-12-06 21:36:19.063232] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:58.590 Base_1 00:13:58.590 Base_2 00:13:58.590 21:36:19 -- bdev/bdev_raid.sh@77 -- # rm -rf 
/home/vagrant/spdk_repo/spdk/test/bdev/rpcs.txt 00:13:58.849 21:36:19 -- bdev/bdev_raid.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs online 00:13:58.849 21:36:19 -- bdev/bdev_raid.sh@91 -- # jq -r '.[0]["name"] | select(.)' 00:13:58.849 21:36:19 -- bdev/bdev_raid.sh@91 -- # raid_bdev=raid 00:13:58.849 21:36:19 -- bdev/bdev_raid.sh@92 -- # '[' raid = '' ']' 00:13:58.849 21:36:19 -- bdev/bdev_raid.sh@97 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid /dev/nbd0 00:13:58.849 21:36:19 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:13:58.849 21:36:19 -- bdev/nbd_common.sh@10 -- # bdev_list=('raid') 00:13:58.849 21:36:19 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:58.849 21:36:19 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:13:58.849 21:36:19 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:58.849 21:36:19 -- bdev/nbd_common.sh@12 -- # local i 00:13:58.849 21:36:19 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:58.849 21:36:19 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:58.849 21:36:19 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid /dev/nbd0 00:13:59.107 [2024-12-06 21:36:19.516641] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000005790 00:13:59.107 /dev/nbd0 00:13:59.107 21:36:19 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:13:59.107 21:36:19 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:13:59.107 21:36:19 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:13:59.107 21:36:19 -- common/autotest_common.sh@867 -- # local i 00:13:59.107 21:36:19 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:13:59.107 21:36:19 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:13:59.107 21:36:19 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:13:59.107 21:36:19 -- common/autotest_common.sh@871 -- # break 00:13:59.107 21:36:19 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:13:59.107 21:36:19 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:13:59.107 21:36:19 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:59.107 1+0 records in 00:13:59.107 1+0 records out 00:13:59.107 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000252231 s, 16.2 MB/s 00:13:59.107 21:36:19 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:59.107 21:36:19 -- common/autotest_common.sh@884 -- # size=4096 00:13:59.107 21:36:19 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:59.107 21:36:19 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:13:59.107 21:36:19 -- common/autotest_common.sh@887 -- # return 0 00:13:59.108 21:36:19 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:59.108 21:36:19 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:59.108 21:36:19 -- bdev/bdev_raid.sh@98 -- # nbd_get_count /var/tmp/spdk-raid.sock 00:13:59.108 21:36:19 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:13:59.108 21:36:19 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_get_disks 00:13:59.366 21:36:19 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:13:59.366 { 00:13:59.366 "nbd_device": "/dev/nbd0", 00:13:59.366 "bdev_name": "raid" 00:13:59.366 } 00:13:59.366 ]' 00:13:59.366 21:36:19 -- bdev/nbd_common.sh@64 -- # echo '[ 
00:13:59.366 { 00:13:59.366 "nbd_device": "/dev/nbd0", 00:13:59.366 "bdev_name": "raid" 00:13:59.366 } 00:13:59.366 ]' 00:13:59.366 21:36:19 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:13:59.366 21:36:19 -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:13:59.366 21:36:19 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:13:59.366 21:36:19 -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:13:59.366 21:36:19 -- bdev/nbd_common.sh@65 -- # count=1 00:13:59.366 21:36:19 -- bdev/nbd_common.sh@66 -- # echo 1 00:13:59.366 21:36:19 -- bdev/bdev_raid.sh@98 -- # count=1 00:13:59.366 21:36:19 -- bdev/bdev_raid.sh@99 -- # '[' 1 -ne 1 ']' 00:13:59.366 21:36:19 -- bdev/bdev_raid.sh@103 -- # raid_unmap_data_verify /dev/nbd0 /var/tmp/spdk-raid.sock 00:13:59.366 21:36:19 -- bdev/bdev_raid.sh@17 -- # hash blkdiscard 00:13:59.366 21:36:19 -- bdev/bdev_raid.sh@18 -- # local nbd=/dev/nbd0 00:13:59.366 21:36:19 -- bdev/bdev_raid.sh@19 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:13:59.366 21:36:19 -- bdev/bdev_raid.sh@20 -- # local blksize 00:13:59.366 21:36:19 -- bdev/bdev_raid.sh@21 -- # lsblk -o LOG-SEC /dev/nbd0 00:13:59.366 21:36:19 -- bdev/bdev_raid.sh@21 -- # grep -v LOG-SEC 00:13:59.366 21:36:19 -- bdev/bdev_raid.sh@21 -- # cut -d ' ' -f 5 00:13:59.366 21:36:19 -- bdev/bdev_raid.sh@21 -- # blksize=512 00:13:59.367 21:36:19 -- bdev/bdev_raid.sh@22 -- # local rw_blk_num=4096 00:13:59.367 21:36:19 -- bdev/bdev_raid.sh@23 -- # local rw_len=2097152 00:13:59.367 21:36:19 -- bdev/bdev_raid.sh@24 -- # unmap_blk_offs=('0' '1028' '321') 00:13:59.367 21:36:19 -- bdev/bdev_raid.sh@24 -- # local unmap_blk_offs 00:13:59.367 21:36:19 -- bdev/bdev_raid.sh@25 -- # unmap_blk_nums=('128' '2035' '456') 00:13:59.367 21:36:19 -- bdev/bdev_raid.sh@25 -- # local unmap_blk_nums 00:13:59.367 21:36:19 -- bdev/bdev_raid.sh@26 -- # local unmap_off 00:13:59.367 21:36:19 -- bdev/bdev_raid.sh@27 -- # local unmap_len 00:13:59.367 21:36:19 -- bdev/bdev_raid.sh@30 -- # dd if=/dev/urandom of=/raidrandtest bs=512 count=4096 00:13:59.367 4096+0 records in 00:13:59.367 4096+0 records out 00:13:59.367 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.0198767 s, 106 MB/s 00:13:59.367 21:36:19 -- bdev/bdev_raid.sh@31 -- # dd if=/raidrandtest of=/dev/nbd0 bs=512 count=4096 oflag=direct 00:13:59.934 4096+0 records in 00:13:59.934 4096+0 records out 00:13:59.934 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.318674 s, 6.6 MB/s 00:13:59.934 21:36:20 -- bdev/bdev_raid.sh@32 -- # blockdev --flushbufs /dev/nbd0 00:13:59.934 21:36:20 -- bdev/bdev_raid.sh@35 -- # cmp -b -n 2097152 /raidrandtest /dev/nbd0 00:13:59.934 21:36:20 -- bdev/bdev_raid.sh@37 -- # (( i = 0 )) 00:13:59.934 21:36:20 -- bdev/bdev_raid.sh@37 -- # (( i < 3 )) 00:13:59.934 21:36:20 -- bdev/bdev_raid.sh@38 -- # unmap_off=0 00:13:59.934 21:36:20 -- bdev/bdev_raid.sh@39 -- # unmap_len=65536 00:13:59.934 21:36:20 -- bdev/bdev_raid.sh@42 -- # dd if=/dev/zero of=/raidrandtest bs=512 seek=0 count=128 conv=notrunc 00:13:59.934 128+0 records in 00:13:59.934 128+0 records out 00:13:59.934 65536 bytes (66 kB, 64 KiB) copied, 0.000580837 s, 113 MB/s 00:13:59.934 21:36:20 -- bdev/bdev_raid.sh@45 -- # blkdiscard -o 0 -l 65536 /dev/nbd0 00:13:59.934 21:36:20 -- bdev/bdev_raid.sh@46 -- # blockdev --flushbufs /dev/nbd0 00:13:59.934 21:36:20 -- bdev/bdev_raid.sh@49 -- # cmp -b -n 2097152 /raidrandtest /dev/nbd0 00:13:59.934 21:36:20 -- bdev/bdev_raid.sh@37 -- # (( i++ )) 00:13:59.934 21:36:20 -- bdev/bdev_raid.sh@37 -- # (( i < 3 )) 00:13:59.934 21:36:20 -- bdev/bdev_raid.sh@38 -- # 
unmap_off=526336 00:13:59.934 21:36:20 -- bdev/bdev_raid.sh@39 -- # unmap_len=1041920 00:13:59.934 21:36:20 -- bdev/bdev_raid.sh@42 -- # dd if=/dev/zero of=/raidrandtest bs=512 seek=1028 count=2035 conv=notrunc 00:13:59.934 2035+0 records in 00:13:59.934 2035+0 records out 00:13:59.934 1041920 bytes (1.0 MB, 1018 KiB) copied, 0.00576516 s, 181 MB/s 00:13:59.934 21:36:20 -- bdev/bdev_raid.sh@45 -- # blkdiscard -o 526336 -l 1041920 /dev/nbd0 00:13:59.934 21:36:20 -- bdev/bdev_raid.sh@46 -- # blockdev --flushbufs /dev/nbd0 00:13:59.934 21:36:20 -- bdev/bdev_raid.sh@49 -- # cmp -b -n 2097152 /raidrandtest /dev/nbd0 00:13:59.934 21:36:20 -- bdev/bdev_raid.sh@37 -- # (( i++ )) 00:13:59.934 21:36:20 -- bdev/bdev_raid.sh@37 -- # (( i < 3 )) 00:13:59.934 21:36:20 -- bdev/bdev_raid.sh@38 -- # unmap_off=164352 00:13:59.934 21:36:20 -- bdev/bdev_raid.sh@39 -- # unmap_len=233472 00:13:59.934 21:36:20 -- bdev/bdev_raid.sh@42 -- # dd if=/dev/zero of=/raidrandtest bs=512 seek=321 count=456 conv=notrunc 00:13:59.935 456+0 records in 00:13:59.935 456+0 records out 00:13:59.935 233472 bytes (233 kB, 228 KiB) copied, 0.00101779 s, 229 MB/s 00:13:59.935 21:36:20 -- bdev/bdev_raid.sh@45 -- # blkdiscard -o 164352 -l 233472 /dev/nbd0 00:13:59.935 21:36:20 -- bdev/bdev_raid.sh@46 -- # blockdev --flushbufs /dev/nbd0 00:13:59.935 21:36:20 -- bdev/bdev_raid.sh@49 -- # cmp -b -n 2097152 /raidrandtest /dev/nbd0 00:13:59.935 21:36:20 -- bdev/bdev_raid.sh@37 -- # (( i++ )) 00:13:59.935 21:36:20 -- bdev/bdev_raid.sh@37 -- # (( i < 3 )) 00:13:59.935 21:36:20 -- bdev/bdev_raid.sh@53 -- # return 0 00:13:59.935 21:36:20 -- bdev/bdev_raid.sh@105 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:13:59.935 21:36:20 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:13:59.935 21:36:20 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:13:59.935 21:36:20 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:59.935 21:36:20 -- bdev/nbd_common.sh@51 -- # local i 00:13:59.935 21:36:20 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:59.935 21:36:20 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:14:00.194 [2024-12-06 21:36:20.496387] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:00.194 21:36:20 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:14:00.194 21:36:20 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:14:00.194 21:36:20 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:14:00.194 21:36:20 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:00.194 21:36:20 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:00.194 21:36:20 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:14:00.194 21:36:20 -- bdev/nbd_common.sh@41 -- # break 00:14:00.194 21:36:20 -- bdev/nbd_common.sh@45 -- # return 0 00:14:00.194 21:36:20 -- bdev/bdev_raid.sh@106 -- # nbd_get_count /var/tmp/spdk-raid.sock 00:14:00.194 21:36:20 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:14:00.194 21:36:20 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_get_disks 00:14:00.453 21:36:20 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:14:00.453 21:36:20 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:14:00.453 21:36:20 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:14:00.453 21:36:20 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:14:00.453 21:36:20 -- bdev/nbd_common.sh@65 -- # echo '' 00:14:00.453 21:36:20 
-- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:14:00.453 21:36:20 -- bdev/nbd_common.sh@65 -- # true 00:14:00.453 21:36:20 -- bdev/nbd_common.sh@65 -- # count=0 00:14:00.453 21:36:20 -- bdev/nbd_common.sh@66 -- # echo 0 00:14:00.453 21:36:20 -- bdev/bdev_raid.sh@106 -- # count=0 00:14:00.453 21:36:20 -- bdev/bdev_raid.sh@107 -- # '[' 0 -ne 0 ']' 00:14:00.453 21:36:20 -- bdev/bdev_raid.sh@111 -- # killprocess 68466 00:14:00.453 21:36:20 -- common/autotest_common.sh@936 -- # '[' -z 68466 ']' 00:14:00.453 21:36:20 -- common/autotest_common.sh@940 -- # kill -0 68466 00:14:00.453 21:36:20 -- common/autotest_common.sh@941 -- # uname 00:14:00.453 21:36:20 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:14:00.453 21:36:20 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 68466 00:14:00.453 21:36:20 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:14:00.453 21:36:20 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:14:00.453 killing process with pid 68466 00:14:00.453 21:36:20 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 68466' 00:14:00.453 21:36:20 -- common/autotest_common.sh@955 -- # kill 68466 00:14:00.453 [2024-12-06 21:36:20.773910] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:00.453 21:36:20 -- common/autotest_common.sh@960 -- # wait 68466 00:14:00.453 [2024-12-06 21:36:20.774016] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:00.453 [2024-12-06 21:36:20.774078] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:00.453 [2024-12-06 21:36:20.774096] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000006f80 name raid, state offline 00:14:00.454 [2024-12-06 21:36:20.920801] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:01.852 21:36:22 -- bdev/bdev_raid.sh@113 -- # return 0 00:14:01.852 00:14:01.852 real 0m4.230s 00:14:01.852 user 0m5.318s 00:14:01.852 sys 0m0.876s 00:14:01.852 21:36:22 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:14:01.852 21:36:22 -- common/autotest_common.sh@10 -- # set +x 00:14:01.852 ************************************ 00:14:01.852 END TEST raid_function_test_concat 00:14:01.852 ************************************ 00:14:01.852 21:36:22 -- bdev/bdev_raid.sh@723 -- # run_test raid0_resize_test raid0_resize_test 00:14:01.852 21:36:22 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:14:01.852 21:36:22 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:01.852 21:36:22 -- common/autotest_common.sh@10 -- # set +x 00:14:01.852 ************************************ 00:14:01.852 START TEST raid0_resize_test 00:14:01.852 ************************************ 00:14:01.852 21:36:22 -- common/autotest_common.sh@1114 -- # raid0_resize_test 00:14:01.852 21:36:22 -- bdev/bdev_raid.sh@293 -- # local blksize=512 00:14:01.852 21:36:22 -- bdev/bdev_raid.sh@294 -- # local bdev_size_mb=32 00:14:01.852 21:36:22 -- bdev/bdev_raid.sh@295 -- # local new_bdev_size_mb=64 00:14:01.852 21:36:22 -- bdev/bdev_raid.sh@296 -- # local blkcnt 00:14:01.852 21:36:22 -- bdev/bdev_raid.sh@297 -- # local raid_size_mb 00:14:01.852 21:36:22 -- bdev/bdev_raid.sh@298 -- # local new_raid_size_mb 00:14:01.852 21:36:22 -- bdev/bdev_raid.sh@301 -- # raid_pid=68612 00:14:01.852 Process raid pid: 68612 00:14:01.852 21:36:22 -- bdev/bdev_raid.sh@302 -- # echo 'Process raid pid: 68612' 00:14:01.852 21:36:22 -- bdev/bdev_raid.sh@300 -- # 
/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:14:01.852 21:36:22 -- bdev/bdev_raid.sh@303 -- # waitforlisten 68612 /var/tmp/spdk-raid.sock 00:14:01.852 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:14:01.852 21:36:22 -- common/autotest_common.sh@829 -- # '[' -z 68612 ']' 00:14:01.852 21:36:22 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:14:01.852 21:36:22 -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:01.852 21:36:22 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:14:01.852 21:36:22 -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:01.852 21:36:22 -- common/autotest_common.sh@10 -- # set +x 00:14:01.852 [2024-12-06 21:36:22.143776] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:14:01.852 [2024-12-06 21:36:22.143975] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:01.852 [2024-12-06 21:36:22.303439] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:02.147 [2024-12-06 21:36:22.491842] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:02.406 [2024-12-06 21:36:22.673338] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:02.664 21:36:23 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:02.664 21:36:23 -- common/autotest_common.sh@862 -- # return 0 00:14:02.664 21:36:23 -- bdev/bdev_raid.sh@305 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_null_create Base_1 32 512 00:14:02.922 Base_1 00:14:02.922 21:36:23 -- bdev/bdev_raid.sh@306 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_null_create Base_2 32 512 00:14:03.181 Base_2 00:14:03.181 21:36:23 -- bdev/bdev_raid.sh@308 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r 0 -b 'Base_1 Base_2' -n Raid 00:14:03.440 [2024-12-06 21:36:23.755408] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:14:03.440 [2024-12-06 21:36:23.757709] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:14:03.440 [2024-12-06 21:36:23.757808] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x516000006f80 00:14:03.440 [2024-12-06 21:36:23.757825] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:14:03.440 [2024-12-06 21:36:23.757969] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000005450 00:14:03.440 [2024-12-06 21:36:23.758289] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x516000006f80 00:14:03.440 [2024-12-06 21:36:23.758305] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x516000006f80 00:14:03.440 [2024-12-06 21:36:23.758506] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:03.440 21:36:23 -- bdev/bdev_raid.sh@311 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_null_resize Base_1 64 00:14:03.699 [2024-12-06 21:36:23.971479] bdev_raid.c:2069:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:14:03.699 
[2024-12-06 21:36:23.971522] bdev_raid.c:2082:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_1' was resized: old size 65536, new size 131072 00:14:03.699 true 00:14:03.699 21:36:23 -- bdev/bdev_raid.sh@314 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Raid 00:14:03.699 21:36:23 -- bdev/bdev_raid.sh@314 -- # jq '.[].num_blocks' 00:14:03.959 [2024-12-06 21:36:24.227674] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:03.959 21:36:24 -- bdev/bdev_raid.sh@314 -- # blkcnt=131072 00:14:03.959 21:36:24 -- bdev/bdev_raid.sh@315 -- # raid_size_mb=64 00:14:03.959 21:36:24 -- bdev/bdev_raid.sh@316 -- # '[' 64 '!=' 64 ']' 00:14:03.959 21:36:24 -- bdev/bdev_raid.sh@322 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_null_resize Base_2 64 00:14:04.218 [2024-12-06 21:36:24.479701] bdev_raid.c:2069:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:14:04.218 [2024-12-06 21:36:24.479743] bdev_raid.c:2082:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_2' was resized: old size 65536, new size 131072 00:14:04.218 [2024-12-06 21:36:24.479785] raid0.c: 402:raid0_resize: *NOTICE*: raid0 'Raid': min blockcount was changed from 262144 to 262144 00:14:04.218 [2024-12-06 21:36:24.479814] bdev_raid.c:1572:raid_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:14:04.218 true 00:14:04.218 21:36:24 -- bdev/bdev_raid.sh@325 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Raid 00:14:04.218 21:36:24 -- bdev/bdev_raid.sh@325 -- # jq '.[].num_blocks' 00:14:04.478 [2024-12-06 21:36:24.739951] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:04.478 21:36:24 -- bdev/bdev_raid.sh@325 -- # blkcnt=262144 00:14:04.478 21:36:24 -- bdev/bdev_raid.sh@326 -- # raid_size_mb=128 00:14:04.478 21:36:24 -- bdev/bdev_raid.sh@327 -- # '[' 128 '!=' 128 ']' 00:14:04.478 21:36:24 -- bdev/bdev_raid.sh@332 -- # killprocess 68612 00:14:04.478 21:36:24 -- common/autotest_common.sh@936 -- # '[' -z 68612 ']' 00:14:04.478 21:36:24 -- common/autotest_common.sh@940 -- # kill -0 68612 00:14:04.478 21:36:24 -- common/autotest_common.sh@941 -- # uname 00:14:04.478 21:36:24 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:14:04.478 21:36:24 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 68612 00:14:04.478 killing process with pid 68612 00:14:04.478 21:36:24 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:14:04.478 21:36:24 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:14:04.478 21:36:24 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 68612' 00:14:04.478 21:36:24 -- common/autotest_common.sh@955 -- # kill 68612 00:14:04.478 [2024-12-06 21:36:24.788118] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:04.478 21:36:24 -- common/autotest_common.sh@960 -- # wait 68612 00:14:04.478 [2024-12-06 21:36:24.788203] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:04.478 [2024-12-06 21:36:24.788273] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:04.478 [2024-12-06 21:36:24.788307] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000006f80 name Raid, state offline 00:14:04.478 [2024-12-06 21:36:24.788988] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:05.854 21:36:25 -- bdev/bdev_raid.sh@334 
-- # return 0 00:14:05.854 00:14:05.854 real 0m3.829s 00:14:05.854 user 0m5.427s 00:14:05.854 sys 0m0.467s 00:14:05.854 21:36:25 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:14:05.854 21:36:25 -- common/autotest_common.sh@10 -- # set +x 00:14:05.854 ************************************ 00:14:05.854 END TEST raid0_resize_test 00:14:05.854 ************************************ 00:14:05.854 21:36:25 -- bdev/bdev_raid.sh@725 -- # for n in {2..4} 00:14:05.854 21:36:25 -- bdev/bdev_raid.sh@726 -- # for level in raid0 concat raid1 00:14:05.854 21:36:25 -- bdev/bdev_raid.sh@727 -- # run_test raid_state_function_test raid_state_function_test raid0 2 false 00:14:05.854 21:36:25 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:14:05.854 21:36:25 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:05.854 21:36:25 -- common/autotest_common.sh@10 -- # set +x 00:14:05.854 ************************************ 00:14:05.854 START TEST raid_state_function_test 00:14:05.854 ************************************ 00:14:05.854 21:36:25 -- common/autotest_common.sh@1114 -- # raid_state_function_test raid0 2 false 00:14:05.854 21:36:25 -- bdev/bdev_raid.sh@202 -- # local raid_level=raid0 00:14:05.854 21:36:25 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=2 00:14:05.854 21:36:25 -- bdev/bdev_raid.sh@204 -- # local superblock=false 00:14:05.854 21:36:25 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:14:05.854 21:36:25 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:14:05.854 21:36:25 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:14:05.854 21:36:25 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev1 00:14:05.854 21:36:25 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:14:05.854 21:36:25 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:14:05.854 21:36:25 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev2 00:14:05.854 21:36:25 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:14:05.854 21:36:25 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:14:05.854 21:36:25 -- bdev/bdev_raid.sh@206 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:14:05.854 21:36:25 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:14:05.854 21:36:25 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:14:05.854 21:36:25 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:14:05.854 21:36:25 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:14:05.854 21:36:25 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:14:05.854 21:36:25 -- bdev/bdev_raid.sh@212 -- # '[' raid0 '!=' raid1 ']' 00:14:05.854 21:36:25 -- bdev/bdev_raid.sh@213 -- # strip_size=64 00:14:05.854 21:36:25 -- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64' 00:14:05.854 21:36:25 -- bdev/bdev_raid.sh@219 -- # '[' false = true ']' 00:14:05.854 21:36:25 -- bdev/bdev_raid.sh@222 -- # superblock_create_arg= 00:14:05.854 21:36:25 -- bdev/bdev_raid.sh@226 -- # raid_pid=68690 00:14:05.854 Process raid pid: 68690 00:14:05.854 21:36:25 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 68690' 00:14:05.854 21:36:25 -- bdev/bdev_raid.sh@228 -- # waitforlisten 68690 /var/tmp/spdk-raid.sock 00:14:05.854 21:36:25 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:14:05.854 21:36:25 -- common/autotest_common.sh@829 -- # '[' -z 68690 ']' 00:14:05.854 21:36:25 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:14:05.854 21:36:25 -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:05.854 Waiting for 
process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:14:05.854 21:36:25 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:14:05.854 21:36:25 -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:05.854 21:36:25 -- common/autotest_common.sh@10 -- # set +x 00:14:05.854 [2024-12-06 21:36:26.035615] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:14:05.854 [2024-12-06 21:36:26.035807] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:05.854 [2024-12-06 21:36:26.202112] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:06.112 [2024-12-06 21:36:26.385818] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:06.112 [2024-12-06 21:36:26.561897] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:06.680 21:36:26 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:06.680 21:36:26 -- common/autotest_common.sh@862 -- # return 0 00:14:06.680 21:36:26 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:14:06.939 [2024-12-06 21:36:27.180157] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:06.939 [2024-12-06 21:36:27.180220] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:06.939 [2024-12-06 21:36:27.180237] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:06.939 [2024-12-06 21:36:27.180252] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:06.939 21:36:27 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:14:06.939 21:36:27 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:14:06.939 21:36:27 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:14:06.939 21:36:27 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:14:06.939 21:36:27 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:14:06.939 21:36:27 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:14:06.939 21:36:27 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:06.939 21:36:27 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:14:06.939 21:36:27 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:06.939 21:36:27 -- bdev/bdev_raid.sh@125 -- # local tmp 00:14:06.939 21:36:27 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:06.939 21:36:27 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:06.939 21:36:27 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:14:06.939 "name": "Existed_Raid", 00:14:06.939 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:06.939 "strip_size_kb": 64, 00:14:06.939 "state": "configuring", 00:14:06.939 "raid_level": "raid0", 00:14:06.939 "superblock": false, 00:14:06.939 "num_base_bdevs": 2, 00:14:06.939 "num_base_bdevs_discovered": 0, 00:14:06.939 "num_base_bdevs_operational": 2, 00:14:06.939 "base_bdevs_list": [ 00:14:06.939 { 00:14:06.939 "name": "BaseBdev1", 00:14:06.939 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:14:06.939 "is_configured": false, 00:14:06.939 "data_offset": 0, 00:14:06.939 "data_size": 0 00:14:06.939 }, 00:14:06.939 { 00:14:06.939 "name": "BaseBdev2", 00:14:06.939 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:06.939 "is_configured": false, 00:14:06.939 "data_offset": 0, 00:14:06.939 "data_size": 0 00:14:06.939 } 00:14:06.939 ] 00:14:06.939 }' 00:14:06.939 21:36:27 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:06.939 21:36:27 -- common/autotest_common.sh@10 -- # set +x 00:14:07.508 21:36:27 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:14:07.508 [2024-12-06 21:36:27.960284] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:07.508 [2024-12-06 21:36:27.960400] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000006380 name Existed_Raid, state configuring 00:14:07.508 21:36:27 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:14:07.767 [2024-12-06 21:36:28.188437] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:07.767 [2024-12-06 21:36:28.188522] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:07.767 [2024-12-06 21:36:28.188544] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:07.767 [2024-12-06 21:36:28.188561] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:07.767 21:36:28 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:14:08.027 [2024-12-06 21:36:28.471052] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:08.027 BaseBdev1 00:14:08.027 21:36:28 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:14:08.027 21:36:28 -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:14:08.027 21:36:28 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:14:08.027 21:36:28 -- common/autotest_common.sh@899 -- # local i 00:14:08.027 21:36:28 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:14:08.027 21:36:28 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:14:08.027 21:36:28 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:14:08.286 21:36:28 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:14:08.546 [ 00:14:08.546 { 00:14:08.546 "name": "BaseBdev1", 00:14:08.546 "aliases": [ 00:14:08.546 "335a6bec-91b5-4a9c-93b7-c0796ec6aa44" 00:14:08.546 ], 00:14:08.546 "product_name": "Malloc disk", 00:14:08.546 "block_size": 512, 00:14:08.546 "num_blocks": 65536, 00:14:08.546 "uuid": "335a6bec-91b5-4a9c-93b7-c0796ec6aa44", 00:14:08.546 "assigned_rate_limits": { 00:14:08.546 "rw_ios_per_sec": 0, 00:14:08.546 "rw_mbytes_per_sec": 0, 00:14:08.546 "r_mbytes_per_sec": 0, 00:14:08.546 "w_mbytes_per_sec": 0 00:14:08.546 }, 00:14:08.546 "claimed": true, 00:14:08.546 "claim_type": "exclusive_write", 00:14:08.546 "zoned": false, 00:14:08.546 "supported_io_types": { 00:14:08.546 "read": true, 00:14:08.546 "write": true, 00:14:08.546 "unmap": true, 00:14:08.546 "write_zeroes": 
true, 00:14:08.546 "flush": true, 00:14:08.546 "reset": true, 00:14:08.546 "compare": false, 00:14:08.546 "compare_and_write": false, 00:14:08.546 "abort": true, 00:14:08.546 "nvme_admin": false, 00:14:08.546 "nvme_io": false 00:14:08.546 }, 00:14:08.546 "memory_domains": [ 00:14:08.546 { 00:14:08.546 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:08.546 "dma_device_type": 2 00:14:08.546 } 00:14:08.546 ], 00:14:08.546 "driver_specific": {} 00:14:08.546 } 00:14:08.546 ] 00:14:08.546 21:36:28 -- common/autotest_common.sh@905 -- # return 0 00:14:08.546 21:36:28 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:14:08.546 21:36:28 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:14:08.546 21:36:28 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:14:08.546 21:36:28 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:14:08.546 21:36:28 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:14:08.546 21:36:28 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:14:08.546 21:36:28 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:08.546 21:36:28 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:14:08.546 21:36:28 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:08.546 21:36:28 -- bdev/bdev_raid.sh@125 -- # local tmp 00:14:08.546 21:36:28 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:08.546 21:36:28 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:08.805 21:36:29 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:14:08.805 "name": "Existed_Raid", 00:14:08.805 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:08.805 "strip_size_kb": 64, 00:14:08.805 "state": "configuring", 00:14:08.805 "raid_level": "raid0", 00:14:08.805 "superblock": false, 00:14:08.805 "num_base_bdevs": 2, 00:14:08.806 "num_base_bdevs_discovered": 1, 00:14:08.806 "num_base_bdevs_operational": 2, 00:14:08.806 "base_bdevs_list": [ 00:14:08.806 { 00:14:08.806 "name": "BaseBdev1", 00:14:08.806 "uuid": "335a6bec-91b5-4a9c-93b7-c0796ec6aa44", 00:14:08.806 "is_configured": true, 00:14:08.806 "data_offset": 0, 00:14:08.806 "data_size": 65536 00:14:08.806 }, 00:14:08.806 { 00:14:08.806 "name": "BaseBdev2", 00:14:08.806 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:08.806 "is_configured": false, 00:14:08.806 "data_offset": 0, 00:14:08.806 "data_size": 0 00:14:08.806 } 00:14:08.806 ] 00:14:08.806 }' 00:14:08.806 21:36:29 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:08.806 21:36:29 -- common/autotest_common.sh@10 -- # set +x 00:14:09.065 21:36:29 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:14:09.325 [2024-12-06 21:36:29.663590] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:09.325 [2024-12-06 21:36:29.663653] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000006680 name Existed_Raid, state configuring 00:14:09.325 21:36:29 -- bdev/bdev_raid.sh@244 -- # '[' false = true ']' 00:14:09.325 21:36:29 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:14:09.585 [2024-12-06 21:36:29.883733] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:09.585 [2024-12-06 21:36:29.885770] 
bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:09.585 [2024-12-06 21:36:29.885839] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:09.585 21:36:29 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:14:09.585 21:36:29 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:14:09.585 21:36:29 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:14:09.585 21:36:29 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:14:09.585 21:36:29 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:14:09.585 21:36:29 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:14:09.585 21:36:29 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:14:09.585 21:36:29 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:14:09.585 21:36:29 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:09.585 21:36:29 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:14:09.585 21:36:29 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:09.585 21:36:29 -- bdev/bdev_raid.sh@125 -- # local tmp 00:14:09.585 21:36:29 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:09.585 21:36:29 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:09.844 21:36:30 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:14:09.844 "name": "Existed_Raid", 00:14:09.844 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:09.844 "strip_size_kb": 64, 00:14:09.844 "state": "configuring", 00:14:09.844 "raid_level": "raid0", 00:14:09.844 "superblock": false, 00:14:09.844 "num_base_bdevs": 2, 00:14:09.844 "num_base_bdevs_discovered": 1, 00:14:09.844 "num_base_bdevs_operational": 2, 00:14:09.844 "base_bdevs_list": [ 00:14:09.844 { 00:14:09.844 "name": "BaseBdev1", 00:14:09.844 "uuid": "335a6bec-91b5-4a9c-93b7-c0796ec6aa44", 00:14:09.844 "is_configured": true, 00:14:09.844 "data_offset": 0, 00:14:09.844 "data_size": 65536 00:14:09.844 }, 00:14:09.844 { 00:14:09.844 "name": "BaseBdev2", 00:14:09.844 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:09.844 "is_configured": false, 00:14:09.844 "data_offset": 0, 00:14:09.844 "data_size": 0 00:14:09.844 } 00:14:09.844 ] 00:14:09.844 }' 00:14:09.844 21:36:30 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:09.844 21:36:30 -- common/autotest_common.sh@10 -- # set +x 00:14:10.104 21:36:30 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:14:10.363 [2024-12-06 21:36:30.639235] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:10.363 [2024-12-06 21:36:30.639287] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x516000006f80 00:14:10.363 [2024-12-06 21:36:30.639301] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:14:10.364 [2024-12-06 21:36:30.639414] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d0000055f0 00:14:10.364 [2024-12-06 21:36:30.639855] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x516000006f80 00:14:10.364 [2024-12-06 21:36:30.639878] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x516000006f80 00:14:10.364 [2024-12-06 21:36:30.640166] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: 
raid_bdev_destroy_cb 00:14:10.364 BaseBdev2 00:14:10.364 21:36:30 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:14:10.364 21:36:30 -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:14:10.364 21:36:30 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:14:10.364 21:36:30 -- common/autotest_common.sh@899 -- # local i 00:14:10.364 21:36:30 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:14:10.364 21:36:30 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:14:10.364 21:36:30 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:14:10.623 21:36:30 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:14:10.623 [ 00:14:10.623 { 00:14:10.623 "name": "BaseBdev2", 00:14:10.623 "aliases": [ 00:14:10.623 "0f656b86-b24e-4915-ac24-2e15ea1802d1" 00:14:10.623 ], 00:14:10.623 "product_name": "Malloc disk", 00:14:10.623 "block_size": 512, 00:14:10.623 "num_blocks": 65536, 00:14:10.623 "uuid": "0f656b86-b24e-4915-ac24-2e15ea1802d1", 00:14:10.623 "assigned_rate_limits": { 00:14:10.623 "rw_ios_per_sec": 0, 00:14:10.623 "rw_mbytes_per_sec": 0, 00:14:10.623 "r_mbytes_per_sec": 0, 00:14:10.623 "w_mbytes_per_sec": 0 00:14:10.623 }, 00:14:10.623 "claimed": true, 00:14:10.623 "claim_type": "exclusive_write", 00:14:10.623 "zoned": false, 00:14:10.623 "supported_io_types": { 00:14:10.623 "read": true, 00:14:10.623 "write": true, 00:14:10.623 "unmap": true, 00:14:10.623 "write_zeroes": true, 00:14:10.623 "flush": true, 00:14:10.623 "reset": true, 00:14:10.623 "compare": false, 00:14:10.623 "compare_and_write": false, 00:14:10.623 "abort": true, 00:14:10.623 "nvme_admin": false, 00:14:10.623 "nvme_io": false 00:14:10.623 }, 00:14:10.623 "memory_domains": [ 00:14:10.623 { 00:14:10.623 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:10.623 "dma_device_type": 2 00:14:10.623 } 00:14:10.623 ], 00:14:10.623 "driver_specific": {} 00:14:10.623 } 00:14:10.623 ] 00:14:10.623 21:36:31 -- common/autotest_common.sh@905 -- # return 0 00:14:10.623 21:36:31 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:14:10.623 21:36:31 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:14:10.623 21:36:31 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online raid0 64 2 00:14:10.623 21:36:31 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:14:10.623 21:36:31 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:14:10.623 21:36:31 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:14:10.623 21:36:31 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:14:10.623 21:36:31 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:14:10.623 21:36:31 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:10.623 21:36:31 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:14:10.623 21:36:31 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:10.623 21:36:31 -- bdev/bdev_raid.sh@125 -- # local tmp 00:14:10.623 21:36:31 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:10.623 21:36:31 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:10.881 21:36:31 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:14:10.881 "name": "Existed_Raid", 00:14:10.881 "uuid": "f0362469-364c-4d2a-ac16-d678ffafe620", 00:14:10.881 "strip_size_kb": 64, 00:14:10.881 "state": 
"online", 00:14:10.881 "raid_level": "raid0", 00:14:10.881 "superblock": false, 00:14:10.881 "num_base_bdevs": 2, 00:14:10.881 "num_base_bdevs_discovered": 2, 00:14:10.881 "num_base_bdevs_operational": 2, 00:14:10.881 "base_bdevs_list": [ 00:14:10.881 { 00:14:10.881 "name": "BaseBdev1", 00:14:10.881 "uuid": "335a6bec-91b5-4a9c-93b7-c0796ec6aa44", 00:14:10.881 "is_configured": true, 00:14:10.881 "data_offset": 0, 00:14:10.881 "data_size": 65536 00:14:10.881 }, 00:14:10.881 { 00:14:10.881 "name": "BaseBdev2", 00:14:10.881 "uuid": "0f656b86-b24e-4915-ac24-2e15ea1802d1", 00:14:10.881 "is_configured": true, 00:14:10.881 "data_offset": 0, 00:14:10.881 "data_size": 65536 00:14:10.881 } 00:14:10.881 ] 00:14:10.881 }' 00:14:10.881 21:36:31 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:10.881 21:36:31 -- common/autotest_common.sh@10 -- # set +x 00:14:11.140 21:36:31 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:14:11.399 [2024-12-06 21:36:31.823783] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:11.399 [2024-12-06 21:36:31.823825] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:11.399 [2024-12-06 21:36:31.823961] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:11.658 21:36:31 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:14:11.658 21:36:31 -- bdev/bdev_raid.sh@264 -- # has_redundancy raid0 00:14:11.658 21:36:31 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:14:11.658 21:36:31 -- bdev/bdev_raid.sh@197 -- # return 1 00:14:11.658 21:36:31 -- bdev/bdev_raid.sh@265 -- # expected_state=offline 00:14:11.658 21:36:31 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 1 00:14:11.658 21:36:31 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:14:11.658 21:36:31 -- bdev/bdev_raid.sh@118 -- # local expected_state=offline 00:14:11.658 21:36:31 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:14:11.658 21:36:31 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:14:11.658 21:36:31 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:14:11.658 21:36:31 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:11.658 21:36:31 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:14:11.658 21:36:31 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:11.658 21:36:31 -- bdev/bdev_raid.sh@125 -- # local tmp 00:14:11.658 21:36:31 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:11.658 21:36:31 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:11.917 21:36:32 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:14:11.917 "name": "Existed_Raid", 00:14:11.917 "uuid": "f0362469-364c-4d2a-ac16-d678ffafe620", 00:14:11.917 "strip_size_kb": 64, 00:14:11.917 "state": "offline", 00:14:11.917 "raid_level": "raid0", 00:14:11.917 "superblock": false, 00:14:11.917 "num_base_bdevs": 2, 00:14:11.917 "num_base_bdevs_discovered": 1, 00:14:11.917 "num_base_bdevs_operational": 1, 00:14:11.917 "base_bdevs_list": [ 00:14:11.917 { 00:14:11.917 "name": null, 00:14:11.917 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:11.917 "is_configured": false, 00:14:11.917 "data_offset": 0, 00:14:11.917 "data_size": 65536 00:14:11.917 }, 00:14:11.917 { 00:14:11.917 "name": "BaseBdev2", 00:14:11.917 "uuid": "0f656b86-b24e-4915-ac24-2e15ea1802d1", 00:14:11.917 
"is_configured": true, 00:14:11.917 "data_offset": 0, 00:14:11.917 "data_size": 65536 00:14:11.917 } 00:14:11.917 ] 00:14:11.917 }' 00:14:11.917 21:36:32 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:11.917 21:36:32 -- common/autotest_common.sh@10 -- # set +x 00:14:12.176 21:36:32 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:14:12.176 21:36:32 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:14:12.176 21:36:32 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:12.176 21:36:32 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:14:12.435 21:36:32 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:14:12.435 21:36:32 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:12.435 21:36:32 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:14:12.694 [2024-12-06 21:36:32.971532] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:12.694 [2024-12-06 21:36:32.971604] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000006f80 name Existed_Raid, state offline 00:14:12.694 21:36:33 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:14:12.694 21:36:33 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:14:12.694 21:36:33 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:12.694 21:36:33 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:14:12.953 21:36:33 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:14:12.953 21:36:33 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:14:12.953 21:36:33 -- bdev/bdev_raid.sh@287 -- # killprocess 68690 00:14:12.953 21:36:33 -- common/autotest_common.sh@936 -- # '[' -z 68690 ']' 00:14:12.953 21:36:33 -- common/autotest_common.sh@940 -- # kill -0 68690 00:14:12.953 21:36:33 -- common/autotest_common.sh@941 -- # uname 00:14:12.953 21:36:33 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:14:12.953 21:36:33 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 68690 00:14:12.953 21:36:33 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:14:12.953 killing process with pid 68690 00:14:12.953 21:36:33 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:14:12.953 21:36:33 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 68690' 00:14:12.953 21:36:33 -- common/autotest_common.sh@955 -- # kill 68690 00:14:12.953 [2024-12-06 21:36:33.322525] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:12.953 21:36:33 -- common/autotest_common.sh@960 -- # wait 68690 00:14:12.953 [2024-12-06 21:36:33.322660] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:14.330 21:36:34 -- bdev/bdev_raid.sh@289 -- # return 0 00:14:14.330 00:14:14.330 real 0m8.462s 00:14:14.330 user 0m13.794s 00:14:14.330 sys 0m1.196s 00:14:14.330 21:36:34 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:14:14.330 21:36:34 -- common/autotest_common.sh@10 -- # set +x 00:14:14.330 ************************************ 00:14:14.330 END TEST raid_state_function_test 00:14:14.330 ************************************ 00:14:14.330 21:36:34 -- bdev/bdev_raid.sh@728 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 2 true 00:14:14.331 21:36:34 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:14:14.331 21:36:34 -- common/autotest_common.sh@1093 -- # xtrace_disable 
00:14:14.331 21:36:34 -- common/autotest_common.sh@10 -- # set +x 00:14:14.331 ************************************ 00:14:14.331 START TEST raid_state_function_test_sb 00:14:14.331 ************************************ 00:14:14.331 21:36:34 -- common/autotest_common.sh@1114 -- # raid_state_function_test raid0 2 true 00:14:14.331 21:36:34 -- bdev/bdev_raid.sh@202 -- # local raid_level=raid0 00:14:14.331 21:36:34 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=2 00:14:14.331 21:36:34 -- bdev/bdev_raid.sh@204 -- # local superblock=true 00:14:14.331 21:36:34 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:14:14.331 21:36:34 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:14:14.331 21:36:34 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:14:14.331 21:36:34 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev1 00:14:14.331 21:36:34 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:14:14.331 21:36:34 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:14:14.331 21:36:34 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev2 00:14:14.331 21:36:34 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:14:14.331 21:36:34 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:14:14.331 21:36:34 -- bdev/bdev_raid.sh@206 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:14:14.331 21:36:34 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:14:14.331 21:36:34 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:14:14.331 21:36:34 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:14:14.331 21:36:34 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:14:14.331 21:36:34 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:14:14.331 21:36:34 -- bdev/bdev_raid.sh@212 -- # '[' raid0 '!=' raid1 ']' 00:14:14.331 21:36:34 -- bdev/bdev_raid.sh@213 -- # strip_size=64 00:14:14.331 21:36:34 -- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64' 00:14:14.331 21:36:34 -- bdev/bdev_raid.sh@219 -- # '[' true = true ']' 00:14:14.331 21:36:34 -- bdev/bdev_raid.sh@220 -- # superblock_create_arg=-s 00:14:14.331 21:36:34 -- bdev/bdev_raid.sh@226 -- # raid_pid=68977 00:14:14.331 Process raid pid: 68977 00:14:14.331 21:36:34 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 68977' 00:14:14.331 21:36:34 -- bdev/bdev_raid.sh@228 -- # waitforlisten 68977 /var/tmp/spdk-raid.sock 00:14:14.331 21:36:34 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:14:14.331 21:36:34 -- common/autotest_common.sh@829 -- # '[' -z 68977 ']' 00:14:14.331 21:36:34 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:14:14.331 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:14:14.331 21:36:34 -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:14.331 21:36:34 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:14:14.331 21:36:34 -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:14.331 21:36:34 -- common/autotest_common.sh@10 -- # set +x 00:14:14.331 [2024-12-06 21:36:34.558659] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
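The startup just traced is the harness pattern shared by every test in this file: bdev_svc is launched on a private RPC socket with bdev_raid debug logging, waitforlisten blocks until the socket answers, and all later rpc.py calls target that socket. A minimal sketch of the pattern — the two commands are verbatim from the trace, while the surrounding shell glue is reconstructed rather than copied from bdev_raid.sh:

    # Start the stub app on a private RPC socket and wait for it to listen.
    rpc_sock=/var/tmp/spdk-raid.sock
    /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc \
        -r "$rpc_sock" -i 0 -L bdev_raid &
    raid_pid=$!
    waitforlisten "$raid_pid" "$rpc_sock"   # helper from autotest_common.sh

Pinning each test to its own socket keeps it isolated from any other SPDK instance that may be running on the host.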
00:14:14.331 [2024-12-06 21:36:34.558839] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:14.331 [2024-12-06 21:36:34.736503] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:14.591 [2024-12-06 21:36:34.927292] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:14.849 [2024-12-06 21:36:35.101456] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:15.108 21:36:35 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:15.108 21:36:35 -- common/autotest_common.sh@862 -- # return 0 00:14:15.108 21:36:35 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:14:15.367 [2024-12-06 21:36:35.753212] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:15.367 [2024-12-06 21:36:35.753295] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:15.367 [2024-12-06 21:36:35.753311] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:15.367 [2024-12-06 21:36:35.753326] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:15.367 21:36:35 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:14:15.367 21:36:35 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:14:15.367 21:36:35 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:14:15.367 21:36:35 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:14:15.367 21:36:35 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:14:15.367 21:36:35 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:14:15.367 21:36:35 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:15.367 21:36:35 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:14:15.367 21:36:35 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:15.367 21:36:35 -- bdev/bdev_raid.sh@125 -- # local tmp 00:14:15.367 21:36:35 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:15.367 21:36:35 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:15.626 21:36:36 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:14:15.626 "name": "Existed_Raid", 00:14:15.626 "uuid": "be94d093-dcd5-40bf-a803-1204c3ce26cf", 00:14:15.626 "strip_size_kb": 64, 00:14:15.626 "state": "configuring", 00:14:15.626 "raid_level": "raid0", 00:14:15.626 "superblock": true, 00:14:15.626 "num_base_bdevs": 2, 00:14:15.626 "num_base_bdevs_discovered": 0, 00:14:15.626 "num_base_bdevs_operational": 2, 00:14:15.626 "base_bdevs_list": [ 00:14:15.626 { 00:14:15.626 "name": "BaseBdev1", 00:14:15.626 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:15.626 "is_configured": false, 00:14:15.626 "data_offset": 0, 00:14:15.626 "data_size": 0 00:14:15.626 }, 00:14:15.626 { 00:14:15.626 "name": "BaseBdev2", 00:14:15.626 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:15.626 "is_configured": false, 00:14:15.626 "data_offset": 0, 00:14:15.626 "data_size": 0 00:14:15.626 } 00:14:15.626 ] 00:14:15.626 }' 00:14:15.626 21:36:36 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:15.626 21:36:36 -- 
common/autotest_common.sh@10 -- # set +x 00:14:15.889 21:36:36 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:14:16.149 [2024-12-06 21:36:36.557214] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:16.149 [2024-12-06 21:36:36.557265] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000006380 name Existed_Raid, state configuring 00:14:16.149 21:36:36 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:14:16.408 [2024-12-06 21:36:36.777334] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:16.408 [2024-12-06 21:36:36.777411] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:16.408 [2024-12-06 21:36:36.777450] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:16.408 [2024-12-06 21:36:36.777481] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:16.408 21:36:36 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:14:16.667 [2024-12-06 21:36:37.031242] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:16.667 BaseBdev1 00:14:16.667 21:36:37 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:14:16.667 21:36:37 -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:14:16.667 21:36:37 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:14:16.667 21:36:37 -- common/autotest_common.sh@899 -- # local i 00:14:16.667 21:36:37 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:14:16.667 21:36:37 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:14:16.667 21:36:37 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:14:16.925 21:36:37 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:14:17.184 [ 00:14:17.184 { 00:14:17.184 "name": "BaseBdev1", 00:14:17.184 "aliases": [ 00:14:17.184 "e3fc19d8-ed50-43cb-adb9-93ba5e2930c2" 00:14:17.184 ], 00:14:17.184 "product_name": "Malloc disk", 00:14:17.184 "block_size": 512, 00:14:17.184 "num_blocks": 65536, 00:14:17.184 "uuid": "e3fc19d8-ed50-43cb-adb9-93ba5e2930c2", 00:14:17.184 "assigned_rate_limits": { 00:14:17.184 "rw_ios_per_sec": 0, 00:14:17.184 "rw_mbytes_per_sec": 0, 00:14:17.184 "r_mbytes_per_sec": 0, 00:14:17.184 "w_mbytes_per_sec": 0 00:14:17.184 }, 00:14:17.184 "claimed": true, 00:14:17.184 "claim_type": "exclusive_write", 00:14:17.184 "zoned": false, 00:14:17.184 "supported_io_types": { 00:14:17.184 "read": true, 00:14:17.184 "write": true, 00:14:17.184 "unmap": true, 00:14:17.184 "write_zeroes": true, 00:14:17.184 "flush": true, 00:14:17.184 "reset": true, 00:14:17.184 "compare": false, 00:14:17.184 "compare_and_write": false, 00:14:17.184 "abort": true, 00:14:17.184 "nvme_admin": false, 00:14:17.184 "nvme_io": false 00:14:17.184 }, 00:14:17.184 "memory_domains": [ 00:14:17.184 { 00:14:17.184 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:17.184 "dma_device_type": 2 00:14:17.184 } 00:14:17.184 ], 00:14:17.184 "driver_specific": {} 00:14:17.184 } 00:14:17.184 ] 00:14:17.184 
21:36:37 -- common/autotest_common.sh@905 -- # return 0 00:14:17.184 21:36:37 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:14:17.184 21:36:37 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:14:17.184 21:36:37 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:14:17.184 21:36:37 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:14:17.184 21:36:37 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:14:17.184 21:36:37 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:14:17.184 21:36:37 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:17.184 21:36:37 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:14:17.185 21:36:37 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:17.185 21:36:37 -- bdev/bdev_raid.sh@125 -- # local tmp 00:14:17.185 21:36:37 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:17.185 21:36:37 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:17.445 21:36:37 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:14:17.445 "name": "Existed_Raid", 00:14:17.445 "uuid": "0b1f2cd5-30d5-486a-b4d2-00896cce3064", 00:14:17.445 "strip_size_kb": 64, 00:14:17.445 "state": "configuring", 00:14:17.445 "raid_level": "raid0", 00:14:17.445 "superblock": true, 00:14:17.445 "num_base_bdevs": 2, 00:14:17.445 "num_base_bdevs_discovered": 1, 00:14:17.445 "num_base_bdevs_operational": 2, 00:14:17.445 "base_bdevs_list": [ 00:14:17.445 { 00:14:17.445 "name": "BaseBdev1", 00:14:17.445 "uuid": "e3fc19d8-ed50-43cb-adb9-93ba5e2930c2", 00:14:17.445 "is_configured": true, 00:14:17.445 "data_offset": 2048, 00:14:17.445 "data_size": 63488 00:14:17.445 }, 00:14:17.445 { 00:14:17.445 "name": "BaseBdev2", 00:14:17.445 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:17.445 "is_configured": false, 00:14:17.445 "data_offset": 0, 00:14:17.445 "data_size": 0 00:14:17.445 } 00:14:17.445 ] 00:14:17.445 }' 00:14:17.445 21:36:37 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:17.445 21:36:37 -- common/autotest_common.sh@10 -- # set +x 00:14:17.704 21:36:38 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:14:17.963 [2024-12-06 21:36:38.299739] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:17.963 [2024-12-06 21:36:38.299795] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000006680 name Existed_Raid, state configuring 00:14:17.963 21:36:38 -- bdev/bdev_raid.sh@244 -- # '[' true = true ']' 00:14:17.963 21:36:38 -- bdev/bdev_raid.sh@246 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:14:18.222 21:36:38 -- bdev/bdev_raid.sh@247 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:14:18.482 BaseBdev1 00:14:18.482 21:36:38 -- bdev/bdev_raid.sh@248 -- # waitforbdev BaseBdev1 00:14:18.482 21:36:38 -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:14:18.482 21:36:38 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:14:18.482 21:36:38 -- common/autotest_common.sh@899 -- # local i 00:14:18.482 21:36:38 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:14:18.482 21:36:38 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:14:18.482 21:36:38 -- common/autotest_common.sh@902 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:14:18.741 21:36:39 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:14:19.000 [ 00:14:19.000 { 00:14:19.000 "name": "BaseBdev1", 00:14:19.000 "aliases": [ 00:14:19.000 "525b8164-15e2-4638-9f8d-42bdcf3b3614" 00:14:19.000 ], 00:14:19.000 "product_name": "Malloc disk", 00:14:19.000 "block_size": 512, 00:14:19.000 "num_blocks": 65536, 00:14:19.000 "uuid": "525b8164-15e2-4638-9f8d-42bdcf3b3614", 00:14:19.000 "assigned_rate_limits": { 00:14:19.000 "rw_ios_per_sec": 0, 00:14:19.000 "rw_mbytes_per_sec": 0, 00:14:19.000 "r_mbytes_per_sec": 0, 00:14:19.000 "w_mbytes_per_sec": 0 00:14:19.000 }, 00:14:19.000 "claimed": false, 00:14:19.000 "zoned": false, 00:14:19.000 "supported_io_types": { 00:14:19.000 "read": true, 00:14:19.000 "write": true, 00:14:19.000 "unmap": true, 00:14:19.000 "write_zeroes": true, 00:14:19.000 "flush": true, 00:14:19.000 "reset": true, 00:14:19.000 "compare": false, 00:14:19.000 "compare_and_write": false, 00:14:19.000 "abort": true, 00:14:19.000 "nvme_admin": false, 00:14:19.000 "nvme_io": false 00:14:19.000 }, 00:14:19.000 "memory_domains": [ 00:14:19.000 { 00:14:19.000 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:19.000 "dma_device_type": 2 00:14:19.000 } 00:14:19.000 ], 00:14:19.000 "driver_specific": {} 00:14:19.000 } 00:14:19.000 ] 00:14:19.000 21:36:39 -- common/autotest_common.sh@905 -- # return 0 00:14:19.000 21:36:39 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:14:19.259 [2024-12-06 21:36:39.546517] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:19.259 [2024-12-06 21:36:39.548618] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:19.260 [2024-12-06 21:36:39.548668] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:19.260 21:36:39 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:14:19.260 21:36:39 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:14:19.260 21:36:39 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:14:19.260 21:36:39 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:14:19.260 21:36:39 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:14:19.260 21:36:39 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:14:19.260 21:36:39 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:14:19.260 21:36:39 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:14:19.260 21:36:39 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:19.260 21:36:39 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:14:19.260 21:36:39 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:19.260 21:36:39 -- bdev/bdev_raid.sh@125 -- # local tmp 00:14:19.260 21:36:39 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:19.260 21:36:39 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:19.519 21:36:39 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:14:19.519 "name": "Existed_Raid", 00:14:19.519 "uuid": "7ead4b39-acb3-40c0-b597-719092f3d51a", 00:14:19.519 "strip_size_kb": 64, 00:14:19.519 "state": 
"configuring", 00:14:19.519 "raid_level": "raid0", 00:14:19.519 "superblock": true, 00:14:19.519 "num_base_bdevs": 2, 00:14:19.519 "num_base_bdevs_discovered": 1, 00:14:19.519 "num_base_bdevs_operational": 2, 00:14:19.519 "base_bdevs_list": [ 00:14:19.519 { 00:14:19.519 "name": "BaseBdev1", 00:14:19.519 "uuid": "525b8164-15e2-4638-9f8d-42bdcf3b3614", 00:14:19.519 "is_configured": true, 00:14:19.519 "data_offset": 2048, 00:14:19.519 "data_size": 63488 00:14:19.519 }, 00:14:19.519 { 00:14:19.519 "name": "BaseBdev2", 00:14:19.519 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:19.519 "is_configured": false, 00:14:19.519 "data_offset": 0, 00:14:19.519 "data_size": 0 00:14:19.519 } 00:14:19.519 ] 00:14:19.519 }' 00:14:19.519 21:36:39 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:19.519 21:36:39 -- common/autotest_common.sh@10 -- # set +x 00:14:19.779 21:36:40 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:14:20.038 [2024-12-06 21:36:40.341284] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:20.038 [2024-12-06 21:36:40.341568] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x516000007580 00:14:20.038 [2024-12-06 21:36:40.341587] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:14:20.038 [2024-12-06 21:36:40.341709] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d0000056c0 00:14:20.038 [2024-12-06 21:36:40.342068] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x516000007580 00:14:20.038 [2024-12-06 21:36:40.342091] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x516000007580 00:14:20.038 BaseBdev2 00:14:20.038 [2024-12-06 21:36:40.342248] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:20.038 21:36:40 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:14:20.038 21:36:40 -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:14:20.038 21:36:40 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:14:20.038 21:36:40 -- common/autotest_common.sh@899 -- # local i 00:14:20.038 21:36:40 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:14:20.038 21:36:40 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:14:20.038 21:36:40 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:14:20.297 21:36:40 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:14:20.556 [ 00:14:20.556 { 00:14:20.556 "name": "BaseBdev2", 00:14:20.556 "aliases": [ 00:14:20.556 "2b9b121b-f2f8-450b-b453-79fe17ababb1" 00:14:20.556 ], 00:14:20.556 "product_name": "Malloc disk", 00:14:20.556 "block_size": 512, 00:14:20.556 "num_blocks": 65536, 00:14:20.556 "uuid": "2b9b121b-f2f8-450b-b453-79fe17ababb1", 00:14:20.556 "assigned_rate_limits": { 00:14:20.556 "rw_ios_per_sec": 0, 00:14:20.556 "rw_mbytes_per_sec": 0, 00:14:20.556 "r_mbytes_per_sec": 0, 00:14:20.556 "w_mbytes_per_sec": 0 00:14:20.556 }, 00:14:20.556 "claimed": true, 00:14:20.556 "claim_type": "exclusive_write", 00:14:20.556 "zoned": false, 00:14:20.556 "supported_io_types": { 00:14:20.556 "read": true, 00:14:20.556 "write": true, 00:14:20.556 "unmap": true, 00:14:20.556 "write_zeroes": true, 00:14:20.556 "flush": true, 00:14:20.556 
"reset": true, 00:14:20.556 "compare": false, 00:14:20.556 "compare_and_write": false, 00:14:20.556 "abort": true, 00:14:20.556 "nvme_admin": false, 00:14:20.556 "nvme_io": false 00:14:20.556 }, 00:14:20.556 "memory_domains": [ 00:14:20.556 { 00:14:20.556 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:20.556 "dma_device_type": 2 00:14:20.556 } 00:14:20.556 ], 00:14:20.556 "driver_specific": {} 00:14:20.556 } 00:14:20.556 ] 00:14:20.556 21:36:40 -- common/autotest_common.sh@905 -- # return 0 00:14:20.556 21:36:40 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:14:20.556 21:36:40 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:14:20.556 21:36:40 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online raid0 64 2 00:14:20.556 21:36:40 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:14:20.556 21:36:40 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:14:20.556 21:36:40 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:14:20.556 21:36:40 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:14:20.556 21:36:40 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:14:20.556 21:36:40 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:20.556 21:36:40 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:14:20.556 21:36:40 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:20.556 21:36:40 -- bdev/bdev_raid.sh@125 -- # local tmp 00:14:20.556 21:36:40 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:20.556 21:36:40 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:20.815 21:36:41 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:14:20.815 "name": "Existed_Raid", 00:14:20.815 "uuid": "7ead4b39-acb3-40c0-b597-719092f3d51a", 00:14:20.815 "strip_size_kb": 64, 00:14:20.815 "state": "online", 00:14:20.815 "raid_level": "raid0", 00:14:20.816 "superblock": true, 00:14:20.816 "num_base_bdevs": 2, 00:14:20.816 "num_base_bdevs_discovered": 2, 00:14:20.816 "num_base_bdevs_operational": 2, 00:14:20.816 "base_bdevs_list": [ 00:14:20.816 { 00:14:20.816 "name": "BaseBdev1", 00:14:20.816 "uuid": "525b8164-15e2-4638-9f8d-42bdcf3b3614", 00:14:20.816 "is_configured": true, 00:14:20.816 "data_offset": 2048, 00:14:20.816 "data_size": 63488 00:14:20.816 }, 00:14:20.816 { 00:14:20.816 "name": "BaseBdev2", 00:14:20.816 "uuid": "2b9b121b-f2f8-450b-b453-79fe17ababb1", 00:14:20.816 "is_configured": true, 00:14:20.816 "data_offset": 2048, 00:14:20.816 "data_size": 63488 00:14:20.816 } 00:14:20.816 ] 00:14:20.816 }' 00:14:20.816 21:36:41 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:20.816 21:36:41 -- common/autotest_common.sh@10 -- # set +x 00:14:21.075 21:36:41 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:14:21.335 [2024-12-06 21:36:41.601870] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:21.335 [2024-12-06 21:36:41.602124] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:21.335 [2024-12-06 21:36:41.602293] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:21.335 21:36:41 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:14:21.335 21:36:41 -- bdev/bdev_raid.sh@264 -- # has_redundancy raid0 00:14:21.335 21:36:41 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:14:21.335 21:36:41 -- bdev/bdev_raid.sh@197 -- # return 1 00:14:21.335 
21:36:41 -- bdev/bdev_raid.sh@265 -- # expected_state=offline 00:14:21.335 21:36:41 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 1 00:14:21.335 21:36:41 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:14:21.335 21:36:41 -- bdev/bdev_raid.sh@118 -- # local expected_state=offline 00:14:21.335 21:36:41 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:14:21.335 21:36:41 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:14:21.335 21:36:41 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:14:21.335 21:36:41 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:21.335 21:36:41 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:14:21.335 21:36:41 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:21.335 21:36:41 -- bdev/bdev_raid.sh@125 -- # local tmp 00:14:21.335 21:36:41 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:21.335 21:36:41 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:21.594 21:36:41 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:14:21.594 "name": "Existed_Raid", 00:14:21.594 "uuid": "7ead4b39-acb3-40c0-b597-719092f3d51a", 00:14:21.594 "strip_size_kb": 64, 00:14:21.594 "state": "offline", 00:14:21.594 "raid_level": "raid0", 00:14:21.594 "superblock": true, 00:14:21.594 "num_base_bdevs": 2, 00:14:21.594 "num_base_bdevs_discovered": 1, 00:14:21.594 "num_base_bdevs_operational": 1, 00:14:21.594 "base_bdevs_list": [ 00:14:21.594 { 00:14:21.594 "name": null, 00:14:21.594 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:21.594 "is_configured": false, 00:14:21.594 "data_offset": 2048, 00:14:21.594 "data_size": 63488 00:14:21.594 }, 00:14:21.594 { 00:14:21.594 "name": "BaseBdev2", 00:14:21.594 "uuid": "2b9b121b-f2f8-450b-b453-79fe17ababb1", 00:14:21.594 "is_configured": true, 00:14:21.594 "data_offset": 2048, 00:14:21.594 "data_size": 63488 00:14:21.594 } 00:14:21.594 ] 00:14:21.594 }' 00:14:21.594 21:36:41 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:21.594 21:36:41 -- common/autotest_common.sh@10 -- # set +x 00:14:21.853 21:36:42 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:14:21.853 21:36:42 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:14:21.853 21:36:42 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:14:21.853 21:36:42 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:22.113 21:36:42 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:14:22.113 21:36:42 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:22.113 21:36:42 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:14:22.372 [2024-12-06 21:36:42.720741] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:22.372 [2024-12-06 21:36:42.720833] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000007580 name Existed_Raid, state offline 00:14:22.372 21:36:42 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:14:22.372 21:36:42 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:14:22.372 21:36:42 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:22.372 21:36:42 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:14:22.630 21:36:43 -- bdev/bdev_raid.sh@281 -- # 
raid_bdev= 00:14:22.630 21:36:43 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:14:22.630 21:36:43 -- bdev/bdev_raid.sh@287 -- # killprocess 68977 00:14:22.630 21:36:43 -- common/autotest_common.sh@936 -- # '[' -z 68977 ']' 00:14:22.630 21:36:43 -- common/autotest_common.sh@940 -- # kill -0 68977 00:14:22.630 21:36:43 -- common/autotest_common.sh@941 -- # uname 00:14:22.630 21:36:43 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:14:22.630 21:36:43 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 68977 00:14:22.630 killing process with pid 68977 00:14:22.630 21:36:43 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:14:22.630 21:36:43 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:14:22.630 21:36:43 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 68977' 00:14:22.630 21:36:43 -- common/autotest_common.sh@955 -- # kill 68977 00:14:22.630 [2024-12-06 21:36:43.126074] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:22.630 21:36:43 -- common/autotest_common.sh@960 -- # wait 68977 00:14:22.630 [2024-12-06 21:36:43.126202] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:24.005 ************************************ 00:14:24.005 END TEST raid_state_function_test_sb 00:14:24.005 ************************************ 00:14:24.005 21:36:44 -- bdev/bdev_raid.sh@289 -- # return 0 00:14:24.005 00:14:24.005 real 0m9.784s 00:14:24.005 user 0m16.028s 00:14:24.005 sys 0m1.419s 00:14:24.005 21:36:44 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:14:24.005 21:36:44 -- common/autotest_common.sh@10 -- # set +x 00:14:24.005 21:36:44 -- bdev/bdev_raid.sh@729 -- # run_test raid_superblock_test raid_superblock_test raid0 2 00:14:24.006 21:36:44 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:14:24.006 21:36:44 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:24.006 21:36:44 -- common/autotest_common.sh@10 -- # set +x 00:14:24.006 ************************************ 00:14:24.006 START TEST raid_superblock_test 00:14:24.006 ************************************ 00:14:24.006 21:36:44 -- common/autotest_common.sh@1114 -- # raid_superblock_test raid0 2 00:14:24.006 21:36:44 -- bdev/bdev_raid.sh@338 -- # local raid_level=raid0 00:14:24.006 21:36:44 -- bdev/bdev_raid.sh@339 -- # local num_base_bdevs=2 00:14:24.006 21:36:44 -- bdev/bdev_raid.sh@340 -- # base_bdevs_malloc=() 00:14:24.006 21:36:44 -- bdev/bdev_raid.sh@340 -- # local base_bdevs_malloc 00:14:24.006 21:36:44 -- bdev/bdev_raid.sh@341 -- # base_bdevs_pt=() 00:14:24.006 21:36:44 -- bdev/bdev_raid.sh@341 -- # local base_bdevs_pt 00:14:24.006 21:36:44 -- bdev/bdev_raid.sh@342 -- # base_bdevs_pt_uuid=() 00:14:24.006 21:36:44 -- bdev/bdev_raid.sh@342 -- # local base_bdevs_pt_uuid 00:14:24.006 21:36:44 -- bdev/bdev_raid.sh@343 -- # local raid_bdev_name=raid_bdev1 00:14:24.006 21:36:44 -- bdev/bdev_raid.sh@344 -- # local strip_size 00:14:24.006 21:36:44 -- bdev/bdev_raid.sh@345 -- # local strip_size_create_arg 00:14:24.006 21:36:44 -- bdev/bdev_raid.sh@346 -- # local raid_bdev_uuid 00:14:24.006 21:36:44 -- bdev/bdev_raid.sh@347 -- # local raid_bdev 00:14:24.006 21:36:44 -- bdev/bdev_raid.sh@349 -- # '[' raid0 '!=' raid1 ']' 00:14:24.006 21:36:44 -- bdev/bdev_raid.sh@350 -- # strip_size=64 00:14:24.006 21:36:44 -- bdev/bdev_raid.sh@351 -- # strip_size_create_arg='-z 64' 00:14:24.006 21:36:44 -- bdev/bdev_raid.sh@357 -- # raid_pid=69277 00:14:24.006 21:36:44 -- bdev/bdev_raid.sh@358 -- # waitforlisten 69277 
/var/tmp/spdk-raid.sock 00:14:24.006 21:36:44 -- common/autotest_common.sh@829 -- # '[' -z 69277 ']' 00:14:24.006 21:36:44 -- bdev/bdev_raid.sh@356 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:14:24.006 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:14:24.006 21:36:44 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:14:24.006 21:36:44 -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:24.006 21:36:44 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:14:24.006 21:36:44 -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:24.006 21:36:44 -- common/autotest_common.sh@10 -- # set +x 00:14:24.006 [2024-12-06 21:36:44.396787] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:14:24.006 [2024-12-06 21:36:44.396988] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69277 ] 00:14:24.300 [2024-12-06 21:36:44.573371] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:24.557 [2024-12-06 21:36:44.815334] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:24.557 [2024-12-06 21:36:45.004234] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:25.125 21:36:45 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:25.125 21:36:45 -- common/autotest_common.sh@862 -- # return 0 00:14:25.125 21:36:45 -- bdev/bdev_raid.sh@361 -- # (( i = 1 )) 00:14:25.125 21:36:45 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:14:25.125 21:36:45 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc1 00:14:25.125 21:36:45 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt1 00:14:25.125 21:36:45 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:14:25.125 21:36:45 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:14:25.125 21:36:45 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:14:25.125 21:36:45 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:14:25.125 21:36:45 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:14:25.392 malloc1 00:14:25.392 21:36:45 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:14:25.677 [2024-12-06 21:36:45.935206] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:14:25.677 [2024-12-06 21:36:45.935314] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:25.677 [2024-12-06 21:36:45.935358] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000006980 00:14:25.677 [2024-12-06 21:36:45.935373] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:25.677 [2024-12-06 21:36:45.937878] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:25.677 [2024-12-06 21:36:45.937921] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:14:25.677 pt1 00:14:25.677 21:36:45 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 
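For the superblock test the base devices are not bare malloc bdevs: each one is wrapped in a passthru bdev pinned to a fixed UUID, so the superblock written by the raid module can be matched back to a stable device identity when the array is reassembled. Both RPCs per base device appear verbatim in the trace above; a sketch of the pair (the second leg repeats it with malloc2, pt2 and ...0002):

    # Build one base device for the superblock test: malloc backing store
    # plus a passthru wrapper with a well-known UUID.
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1
    $rpc -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 \
        -u 00000000-0000-0000-0000-000000000001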
00:14:25.677 21:36:45 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:14:25.677 21:36:45 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc2 00:14:25.677 21:36:45 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt2 00:14:25.677 21:36:45 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:14:25.677 21:36:45 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:14:25.677 21:36:45 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:14:25.677 21:36:45 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:14:25.677 21:36:45 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:14:25.994 malloc2 00:14:25.994 21:36:46 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:14:25.994 [2024-12-06 21:36:46.485069] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:14:25.994 [2024-12-06 21:36:46.485161] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:25.994 [2024-12-06 21:36:46.485196] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000007580 00:14:25.994 [2024-12-06 21:36:46.485211] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:25.994 [2024-12-06 21:36:46.487917] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:25.994 [2024-12-06 21:36:46.487995] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:14:26.252 pt2 00:14:26.252 21:36:46 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:14:26.252 21:36:46 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:14:26.252 21:36:46 -- bdev/bdev_raid.sh@375 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'pt1 pt2' -n raid_bdev1 -s 00:14:26.252 [2024-12-06 21:36:46.745171] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:14:26.252 [2024-12-06 21:36:46.747683] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:14:26.252 [2024-12-06 21:36:46.748060] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x516000007b80 00:14:26.252 [2024-12-06 21:36:46.748209] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:14:26.252 [2024-12-06 21:36:46.748407] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d0000055f0 00:14:26.252 [2024-12-06 21:36:46.748932] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x516000007b80 00:14:26.252 [2024-12-06 21:36:46.749108] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x516000007b80 00:14:26.252 [2024-12-06 21:36:46.749515] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:26.511 21:36:46 -- bdev/bdev_raid.sh@376 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:14:26.511 21:36:46 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:14:26.511 21:36:46 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:14:26.511 21:36:46 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:14:26.511 21:36:46 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:14:26.511 21:36:46 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 
00:14:26.511 21:36:46 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:26.511 21:36:46 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:14:26.511 21:36:46 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:26.511 21:36:46 -- bdev/bdev_raid.sh@125 -- # local tmp 00:14:26.511 21:36:46 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:26.511 21:36:46 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:26.770 21:36:47 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:14:26.770 "name": "raid_bdev1", 00:14:26.770 "uuid": "71a777d9-38eb-4032-9f54-7bb38111a233", 00:14:26.770 "strip_size_kb": 64, 00:14:26.770 "state": "online", 00:14:26.770 "raid_level": "raid0", 00:14:26.770 "superblock": true, 00:14:26.770 "num_base_bdevs": 2, 00:14:26.770 "num_base_bdevs_discovered": 2, 00:14:26.770 "num_base_bdevs_operational": 2, 00:14:26.770 "base_bdevs_list": [ 00:14:26.770 { 00:14:26.770 "name": "pt1", 00:14:26.770 "uuid": "7da47cee-a679-582a-9db7-9ccfd7826a2f", 00:14:26.770 "is_configured": true, 00:14:26.770 "data_offset": 2048, 00:14:26.770 "data_size": 63488 00:14:26.770 }, 00:14:26.770 { 00:14:26.770 "name": "pt2", 00:14:26.770 "uuid": "eb3913c4-7795-5cb0-be49-98cef078ef65", 00:14:26.770 "is_configured": true, 00:14:26.770 "data_offset": 2048, 00:14:26.770 "data_size": 63488 00:14:26.770 } 00:14:26.770 ] 00:14:26.770 }' 00:14:26.770 21:36:47 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:26.770 21:36:47 -- common/autotest_common.sh@10 -- # set +x 00:14:27.029 21:36:47 -- bdev/bdev_raid.sh@379 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:14:27.029 21:36:47 -- bdev/bdev_raid.sh@379 -- # jq -r '.[] | .uuid' 00:14:27.288 [2024-12-06 21:36:47.554102] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:27.289 21:36:47 -- bdev/bdev_raid.sh@379 -- # raid_bdev_uuid=71a777d9-38eb-4032-9f54-7bb38111a233 00:14:27.289 21:36:47 -- bdev/bdev_raid.sh@380 -- # '[' -z 71a777d9-38eb-4032-9f54-7bb38111a233 ']' 00:14:27.289 21:36:47 -- bdev/bdev_raid.sh@385 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:14:27.548 [2024-12-06 21:36:47.825887] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:27.548 [2024-12-06 21:36:47.826206] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:27.548 [2024-12-06 21:36:47.826333] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:27.548 [2024-12-06 21:36:47.826400] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:27.548 [2024-12-06 21:36:47.826416] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000007b80 name raid_bdev1, state offline 00:14:27.548 21:36:47 -- bdev/bdev_raid.sh@386 -- # jq -r '.[]' 00:14:27.548 21:36:47 -- bdev/bdev_raid.sh@386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:27.807 21:36:48 -- bdev/bdev_raid.sh@386 -- # raid_bdev= 00:14:27.807 21:36:48 -- bdev/bdev_raid.sh@387 -- # '[' -n '' ']' 00:14:27.807 21:36:48 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:14:27.807 21:36:48 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 
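Every state assertion in this suite is the same two-step seen above: dump the raid bdevs over RPC, then pick out the one of interest with jq and compare fields such as "state", "raid_level" and "num_base_bdevs_discovered". A sketch of the online check as traced, assuming jq is on PATH:

    # verify_raid_bdev_state-style check: fetch all raid bdevs and select
    # raid_bdev1; the test then asserts on fields of this JSON object.
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock \
        bdev_raid_get_bdevs all \
      | jq -r '.[] | select(.name == "raid_bdev1")'

The companion check traced just before the delete — bdev_get_bdevs -b raid_bdev1 piped through jq -r '.[] | .uuid' — confirms the same device is visible through the generic bdev listing under the UUID recorded at create time.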
00:14:28.066 21:36:48 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:14:28.066 21:36:48 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:14:28.326 21:36:48 -- bdev/bdev_raid.sh@395 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:14:28.326 21:36:48 -- bdev/bdev_raid.sh@395 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:14:28.326 21:36:48 -- bdev/bdev_raid.sh@395 -- # '[' false == true ']' 00:14:28.326 21:36:48 -- bdev/bdev_raid.sh@401 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2' -n raid_bdev1 00:14:28.326 21:36:48 -- common/autotest_common.sh@650 -- # local es=0 00:14:28.326 21:36:48 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2' -n raid_bdev1 00:14:28.326 21:36:48 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:28.326 21:36:48 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:28.326 21:36:48 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:28.326 21:36:48 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:28.326 21:36:48 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:28.326 21:36:48 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:28.326 21:36:48 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:28.326 21:36:48 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:14:28.326 21:36:48 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2' -n raid_bdev1 00:14:28.586 [2024-12-06 21:36:49.046299] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:14:28.586 [2024-12-06 21:36:49.048625] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:14:28.586 [2024-12-06 21:36:49.048905] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc1 00:14:28.586 [2024-12-06 21:36:49.048987] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc2 00:14:28.586 [2024-12-06 21:36:49.049025] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:28.586 [2024-12-06 21:36:49.049051] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000008180 name raid_bdev1, state configuring 00:14:28.586 request: 00:14:28.586 { 00:14:28.586 "name": "raid_bdev1", 00:14:28.586 "raid_level": "raid0", 00:14:28.586 "base_bdevs": [ 00:14:28.586 "malloc1", 00:14:28.586 "malloc2" 00:14:28.586 ], 00:14:28.586 "superblock": false, 00:14:28.586 "strip_size_kb": 64, 00:14:28.586 "method": "bdev_raid_create", 00:14:28.586 "req_id": 1 00:14:28.586 } 00:14:28.586 Got JSON-RPC error response 00:14:28.586 response: 00:14:28.586 { 00:14:28.586 "code": -17, 00:14:28.586 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:14:28.586 } 00:14:28.586 21:36:49 -- common/autotest_common.sh@653 -- # es=1 00:14:28.586 21:36:49 -- common/autotest_common.sh@661 
-- # (( es > 128 )) 00:14:28.586 21:36:49 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:14:28.586 21:36:49 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:14:28.586 21:36:49 -- bdev/bdev_raid.sh@403 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:28.586 21:36:49 -- bdev/bdev_raid.sh@403 -- # jq -r '.[]' 00:14:28.845 21:36:49 -- bdev/bdev_raid.sh@403 -- # raid_bdev= 00:14:28.845 21:36:49 -- bdev/bdev_raid.sh@404 -- # '[' -n '' ']' 00:14:28.845 21:36:49 -- bdev/bdev_raid.sh@409 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:14:29.105 [2024-12-06 21:36:49.494472] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:14:29.105 [2024-12-06 21:36:49.494764] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:29.105 [2024-12-06 21:36:49.494847] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000008780 00:14:29.105 [2024-12-06 21:36:49.494981] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:29.105 [2024-12-06 21:36:49.497644] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:29.105 [2024-12-06 21:36:49.497692] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:14:29.105 [2024-12-06 21:36:49.497807] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1 00:14:29.105 [2024-12-06 21:36:49.497898] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:14:29.105 pt1 00:14:29.105 21:36:49 -- bdev/bdev_raid.sh@412 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 2 00:14:29.105 21:36:49 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:14:29.105 21:36:49 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:14:29.105 21:36:49 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:14:29.105 21:36:49 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:14:29.105 21:36:49 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:14:29.105 21:36:49 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:29.105 21:36:49 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:14:29.105 21:36:49 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:29.105 21:36:49 -- bdev/bdev_raid.sh@125 -- # local tmp 00:14:29.105 21:36:49 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:29.105 21:36:49 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:29.364 21:36:49 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:14:29.364 "name": "raid_bdev1", 00:14:29.364 "uuid": "71a777d9-38eb-4032-9f54-7bb38111a233", 00:14:29.364 "strip_size_kb": 64, 00:14:29.364 "state": "configuring", 00:14:29.364 "raid_level": "raid0", 00:14:29.364 "superblock": true, 00:14:29.364 "num_base_bdevs": 2, 00:14:29.364 "num_base_bdevs_discovered": 1, 00:14:29.364 "num_base_bdevs_operational": 2, 00:14:29.364 "base_bdevs_list": [ 00:14:29.364 { 00:14:29.364 "name": "pt1", 00:14:29.364 "uuid": "7da47cee-a679-582a-9db7-9ccfd7826a2f", 00:14:29.364 "is_configured": true, 00:14:29.364 "data_offset": 2048, 00:14:29.364 "data_size": 63488 00:14:29.364 }, 00:14:29.364 { 00:14:29.364 "name": null, 00:14:29.364 "uuid": "eb3913c4-7795-5cb0-be49-98cef078ef65", 00:14:29.364 
"is_configured": false, 00:14:29.364 "data_offset": 2048, 00:14:29.364 "data_size": 63488 00:14:29.364 } 00:14:29.364 ] 00:14:29.364 }' 00:14:29.364 21:36:49 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:29.364 21:36:49 -- common/autotest_common.sh@10 -- # set +x 00:14:29.623 21:36:50 -- bdev/bdev_raid.sh@414 -- # '[' 2 -gt 2 ']' 00:14:29.623 21:36:50 -- bdev/bdev_raid.sh@422 -- # (( i = 1 )) 00:14:29.623 21:36:50 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:14:29.623 21:36:50 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:14:29.883 [2024-12-06 21:36:50.330799] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:14:29.883 [2024-12-06 21:36:50.330908] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:29.883 [2024-12-06 21:36:50.330951] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000009080 00:14:29.883 [2024-12-06 21:36:50.330981] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:29.883 [2024-12-06 21:36:50.331522] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:29.883 [2024-12-06 21:36:50.331561] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:14:29.883 [2024-12-06 21:36:50.331673] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:14:29.883 [2024-12-06 21:36:50.331703] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:14:29.883 [2024-12-06 21:36:50.331843] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x516000008d80 00:14:29.883 [2024-12-06 21:36:50.331859] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:14:29.883 [2024-12-06 21:36:50.332037] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d0000056c0 00:14:29.883 [2024-12-06 21:36:50.332415] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x516000008d80 00:14:29.883 [2024-12-06 21:36:50.332461] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x516000008d80 00:14:29.883 [2024-12-06 21:36:50.332626] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:29.883 pt2 00:14:29.883 21:36:50 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:14:29.883 21:36:50 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:14:29.883 21:36:50 -- bdev/bdev_raid.sh@427 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:14:29.883 21:36:50 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:14:29.883 21:36:50 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:14:29.883 21:36:50 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:14:29.883 21:36:50 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:14:29.883 21:36:50 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:14:29.883 21:36:50 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:29.883 21:36:50 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:14:29.883 21:36:50 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:29.883 21:36:50 -- bdev/bdev_raid.sh@125 -- # local tmp 00:14:29.883 21:36:50 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:29.883 21:36:50 -- bdev/bdev_raid.sh@127 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:30.142 21:36:50 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:14:30.142 "name": "raid_bdev1", 00:14:30.142 "uuid": "71a777d9-38eb-4032-9f54-7bb38111a233", 00:14:30.142 "strip_size_kb": 64, 00:14:30.142 "state": "online", 00:14:30.142 "raid_level": "raid0", 00:14:30.142 "superblock": true, 00:14:30.142 "num_base_bdevs": 2, 00:14:30.142 "num_base_bdevs_discovered": 2, 00:14:30.142 "num_base_bdevs_operational": 2, 00:14:30.142 "base_bdevs_list": [ 00:14:30.142 { 00:14:30.142 "name": "pt1", 00:14:30.142 "uuid": "7da47cee-a679-582a-9db7-9ccfd7826a2f", 00:14:30.142 "is_configured": true, 00:14:30.142 "data_offset": 2048, 00:14:30.142 "data_size": 63488 00:14:30.142 }, 00:14:30.142 { 00:14:30.142 "name": "pt2", 00:14:30.142 "uuid": "eb3913c4-7795-5cb0-be49-98cef078ef65", 00:14:30.142 "is_configured": true, 00:14:30.142 "data_offset": 2048, 00:14:30.142 "data_size": 63488 00:14:30.142 } 00:14:30.142 ] 00:14:30.142 }' 00:14:30.142 21:36:50 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:30.142 21:36:50 -- common/autotest_common.sh@10 -- # set +x 00:14:30.709 21:36:50 -- bdev/bdev_raid.sh@430 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:14:30.709 21:36:50 -- bdev/bdev_raid.sh@430 -- # jq -r '.[] | .uuid' 00:14:30.709 [2024-12-06 21:36:51.167310] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:30.710 21:36:51 -- bdev/bdev_raid.sh@430 -- # '[' 71a777d9-38eb-4032-9f54-7bb38111a233 '!=' 71a777d9-38eb-4032-9f54-7bb38111a233 ']' 00:14:30.710 21:36:51 -- bdev/bdev_raid.sh@434 -- # has_redundancy raid0 00:14:30.710 21:36:51 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:14:30.710 21:36:51 -- bdev/bdev_raid.sh@197 -- # return 1 00:14:30.710 21:36:51 -- bdev/bdev_raid.sh@511 -- # killprocess 69277 00:14:30.710 21:36:51 -- common/autotest_common.sh@936 -- # '[' -z 69277 ']' 00:14:30.710 21:36:51 -- common/autotest_common.sh@940 -- # kill -0 69277 00:14:30.710 21:36:51 -- common/autotest_common.sh@941 -- # uname 00:14:30.710 21:36:51 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:14:30.710 21:36:51 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 69277 00:14:30.969 killing process with pid 69277 00:14:30.969 21:36:51 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:14:30.969 21:36:51 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:14:30.969 21:36:51 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 69277' 00:14:30.969 21:36:51 -- common/autotest_common.sh@955 -- # kill 69277 00:14:30.969 [2024-12-06 21:36:51.219256] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:30.969 21:36:51 -- common/autotest_common.sh@960 -- # wait 69277 00:14:30.969 [2024-12-06 21:36:51.219353] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:30.969 [2024-12-06 21:36:51.219410] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:30.969 [2024-12-06 21:36:51.219432] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000008d80 name raid_bdev1, state offline 00:14:30.969 [2024-12-06 21:36:51.384669] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:32.346 ************************************ 00:14:32.346 END TEST raid_superblock_test 00:14:32.346 21:36:52 -- bdev/bdev_raid.sh@513 -- # return 0 00:14:32.346 
00:14:32.346 real 0m8.206s 00:14:32.346 user 0m13.248s 00:14:32.346 sys 0m1.120s 00:14:32.346 21:36:52 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:14:32.346 21:36:52 -- common/autotest_common.sh@10 -- # set +x 00:14:32.346 ************************************ 00:14:32.346 21:36:52 -- bdev/bdev_raid.sh@726 -- # for level in raid0 concat raid1 00:14:32.346 21:36:52 -- bdev/bdev_raid.sh@727 -- # run_test raid_state_function_test raid_state_function_test concat 2 false 00:14:32.346 21:36:52 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:14:32.346 21:36:52 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:32.346 21:36:52 -- common/autotest_common.sh@10 -- # set +x 00:14:32.346 ************************************ 00:14:32.346 START TEST raid_state_function_test 00:14:32.346 ************************************ 00:14:32.346 21:36:52 -- common/autotest_common.sh@1114 -- # raid_state_function_test concat 2 false 00:14:32.346 21:36:52 -- bdev/bdev_raid.sh@202 -- # local raid_level=concat 00:14:32.346 21:36:52 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=2 00:14:32.346 21:36:52 -- bdev/bdev_raid.sh@204 -- # local superblock=false 00:14:32.346 21:36:52 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:14:32.346 21:36:52 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:14:32.346 21:36:52 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:14:32.346 21:36:52 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev1 00:14:32.346 21:36:52 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:14:32.346 21:36:52 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:14:32.346 21:36:52 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev2 00:14:32.346 21:36:52 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:14:32.346 21:36:52 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:14:32.346 21:36:52 -- bdev/bdev_raid.sh@206 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:14:32.346 21:36:52 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:14:32.346 21:36:52 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:14:32.346 21:36:52 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:14:32.346 21:36:52 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:14:32.346 21:36:52 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:14:32.346 21:36:52 -- bdev/bdev_raid.sh@212 -- # '[' concat '!=' raid1 ']' 00:14:32.346 21:36:52 -- bdev/bdev_raid.sh@213 -- # strip_size=64 00:14:32.346 21:36:52 -- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64' 00:14:32.346 21:36:52 -- bdev/bdev_raid.sh@219 -- # '[' false = true ']' 00:14:32.346 21:36:52 -- bdev/bdev_raid.sh@222 -- # superblock_create_arg= 00:14:32.346 21:36:52 -- bdev/bdev_raid.sh@226 -- # raid_pid=69512 00:14:32.346 21:36:52 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 69512' 00:14:32.346 Process raid pid: 69512 00:14:32.346 21:36:52 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:14:32.346 21:36:52 -- bdev/bdev_raid.sh@228 -- # waitforlisten 69512 /var/tmp/spdk-raid.sock 00:14:32.346 21:36:52 -- common/autotest_common.sh@829 -- # '[' -z 69512 ']' 00:14:32.346 21:36:52 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:14:32.346 21:36:52 -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:32.346 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 
00:14:32.346 21:36:52 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:14:32.346 21:36:52 -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:32.346 21:36:52 -- common/autotest_common.sh@10 -- # set +x 00:14:32.346 [2024-12-06 21:36:52.651202] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:14:32.346 [2024-12-06 21:36:52.651341] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:32.346 [2024-12-06 21:36:52.810045] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:32.605 [2024-12-06 21:36:53.035595] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:32.864 [2024-12-06 21:36:53.215101] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:33.123 21:36:53 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:33.123 21:36:53 -- common/autotest_common.sh@862 -- # return 0 00:14:33.123 21:36:53 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:14:33.381 [2024-12-06 21:36:53.809852] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:33.382 [2024-12-06 21:36:53.809936] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:33.382 [2024-12-06 21:36:53.809968] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:33.382 [2024-12-06 21:36:53.809998] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:33.382 21:36:53 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:14:33.382 21:36:53 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:14:33.382 21:36:53 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:14:33.382 21:36:53 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:14:33.382 21:36:53 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:14:33.382 21:36:53 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:14:33.382 21:36:53 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:33.382 21:36:53 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:14:33.382 21:36:53 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:33.382 21:36:53 -- bdev/bdev_raid.sh@125 -- # local tmp 00:14:33.382 21:36:53 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:33.382 21:36:53 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:33.640 21:36:54 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:14:33.640 "name": "Existed_Raid", 00:14:33.640 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:33.640 "strip_size_kb": 64, 00:14:33.640 "state": "configuring", 00:14:33.640 "raid_level": "concat", 00:14:33.640 "superblock": false, 00:14:33.640 "num_base_bdevs": 2, 00:14:33.640 "num_base_bdevs_discovered": 0, 00:14:33.640 "num_base_bdevs_operational": 2, 00:14:33.640 "base_bdevs_list": [ 00:14:33.640 { 00:14:33.640 "name": "BaseBdev1", 00:14:33.640 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:33.640 "is_configured": false, 
00:14:33.640 "data_offset": 0, 00:14:33.640 "data_size": 0 00:14:33.640 }, 00:14:33.640 { 00:14:33.640 "name": "BaseBdev2", 00:14:33.640 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:33.640 "is_configured": false, 00:14:33.640 "data_offset": 0, 00:14:33.640 "data_size": 0 00:14:33.640 } 00:14:33.640 ] 00:14:33.640 }' 00:14:33.640 21:36:54 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:33.640 21:36:54 -- common/autotest_common.sh@10 -- # set +x 00:14:33.899 21:36:54 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:14:34.158 [2024-12-06 21:36:54.641967] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:34.158 [2024-12-06 21:36:54.642030] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000006380 name Existed_Raid, state configuring 00:14:34.417 21:36:54 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:14:34.417 [2024-12-06 21:36:54.898082] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:34.417 [2024-12-06 21:36:54.898163] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:34.417 [2024-12-06 21:36:54.898187] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:34.417 [2024-12-06 21:36:54.898205] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:34.676 21:36:54 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:14:34.676 [2024-12-06 21:36:55.166950] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:34.676 BaseBdev1 00:14:34.934 21:36:55 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:14:34.934 21:36:55 -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:14:34.934 21:36:55 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:14:34.934 21:36:55 -- common/autotest_common.sh@899 -- # local i 00:14:34.934 21:36:55 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:14:34.934 21:36:55 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:14:34.934 21:36:55 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:14:34.934 21:36:55 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:14:35.192 [ 00:14:35.192 { 00:14:35.192 "name": "BaseBdev1", 00:14:35.192 "aliases": [ 00:14:35.192 "528b2957-9b78-4770-8539-58741b0a1452" 00:14:35.192 ], 00:14:35.192 "product_name": "Malloc disk", 00:14:35.192 "block_size": 512, 00:14:35.192 "num_blocks": 65536, 00:14:35.192 "uuid": "528b2957-9b78-4770-8539-58741b0a1452", 00:14:35.192 "assigned_rate_limits": { 00:14:35.192 "rw_ios_per_sec": 0, 00:14:35.192 "rw_mbytes_per_sec": 0, 00:14:35.192 "r_mbytes_per_sec": 0, 00:14:35.192 "w_mbytes_per_sec": 0 00:14:35.192 }, 00:14:35.192 "claimed": true, 00:14:35.192 "claim_type": "exclusive_write", 00:14:35.192 "zoned": false, 00:14:35.192 "supported_io_types": { 00:14:35.192 "read": true, 00:14:35.192 "write": true, 00:14:35.192 "unmap": true, 00:14:35.192 "write_zeroes": true, 00:14:35.192 "flush": true, 00:14:35.192 "reset": true, 00:14:35.192 
"compare": false, 00:14:35.192 "compare_and_write": false, 00:14:35.192 "abort": true, 00:14:35.192 "nvme_admin": false, 00:14:35.192 "nvme_io": false 00:14:35.192 }, 00:14:35.192 "memory_domains": [ 00:14:35.192 { 00:14:35.192 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:35.192 "dma_device_type": 2 00:14:35.192 } 00:14:35.192 ], 00:14:35.192 "driver_specific": {} 00:14:35.192 } 00:14:35.192 ] 00:14:35.192 21:36:55 -- common/autotest_common.sh@905 -- # return 0 00:14:35.192 21:36:55 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:14:35.192 21:36:55 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:14:35.192 21:36:55 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:14:35.192 21:36:55 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:14:35.192 21:36:55 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:14:35.192 21:36:55 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:14:35.192 21:36:55 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:35.192 21:36:55 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:14:35.193 21:36:55 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:35.193 21:36:55 -- bdev/bdev_raid.sh@125 -- # local tmp 00:14:35.193 21:36:55 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:35.193 21:36:55 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:35.451 21:36:55 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:14:35.451 "name": "Existed_Raid", 00:14:35.451 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:35.451 "strip_size_kb": 64, 00:14:35.451 "state": "configuring", 00:14:35.451 "raid_level": "concat", 00:14:35.451 "superblock": false, 00:14:35.451 "num_base_bdevs": 2, 00:14:35.451 "num_base_bdevs_discovered": 1, 00:14:35.451 "num_base_bdevs_operational": 2, 00:14:35.451 "base_bdevs_list": [ 00:14:35.451 { 00:14:35.451 "name": "BaseBdev1", 00:14:35.451 "uuid": "528b2957-9b78-4770-8539-58741b0a1452", 00:14:35.451 "is_configured": true, 00:14:35.451 "data_offset": 0, 00:14:35.451 "data_size": 65536 00:14:35.451 }, 00:14:35.451 { 00:14:35.451 "name": "BaseBdev2", 00:14:35.451 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:35.451 "is_configured": false, 00:14:35.451 "data_offset": 0, 00:14:35.451 "data_size": 0 00:14:35.451 } 00:14:35.451 ] 00:14:35.451 }' 00:14:35.451 21:36:55 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:35.451 21:36:55 -- common/autotest_common.sh@10 -- # set +x 00:14:35.710 21:36:56 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:14:35.969 [2024-12-06 21:36:56.387480] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:35.969 [2024-12-06 21:36:56.387584] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000006680 name Existed_Raid, state configuring 00:14:35.969 21:36:56 -- bdev/bdev_raid.sh@244 -- # '[' false = true ']' 00:14:35.969 21:36:56 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:14:36.227 [2024-12-06 21:36:56.587668] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:36.227 [2024-12-06 21:36:56.589719] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev2 00:14:36.227 [2024-12-06 21:36:56.589801] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:36.227 21:36:56 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:14:36.227 21:36:56 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:14:36.227 21:36:56 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:14:36.227 21:36:56 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:14:36.227 21:36:56 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:14:36.227 21:36:56 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:14:36.227 21:36:56 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:14:36.227 21:36:56 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:14:36.227 21:36:56 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:36.227 21:36:56 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:14:36.227 21:36:56 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:36.227 21:36:56 -- bdev/bdev_raid.sh@125 -- # local tmp 00:14:36.227 21:36:56 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:36.227 21:36:56 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:36.487 21:36:56 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:14:36.487 "name": "Existed_Raid", 00:14:36.487 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:36.487 "strip_size_kb": 64, 00:14:36.487 "state": "configuring", 00:14:36.487 "raid_level": "concat", 00:14:36.487 "superblock": false, 00:14:36.487 "num_base_bdevs": 2, 00:14:36.487 "num_base_bdevs_discovered": 1, 00:14:36.487 "num_base_bdevs_operational": 2, 00:14:36.487 "base_bdevs_list": [ 00:14:36.487 { 00:14:36.487 "name": "BaseBdev1", 00:14:36.487 "uuid": "528b2957-9b78-4770-8539-58741b0a1452", 00:14:36.487 "is_configured": true, 00:14:36.487 "data_offset": 0, 00:14:36.487 "data_size": 65536 00:14:36.487 }, 00:14:36.487 { 00:14:36.487 "name": "BaseBdev2", 00:14:36.487 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:36.487 "is_configured": false, 00:14:36.487 "data_offset": 0, 00:14:36.487 "data_size": 0 00:14:36.487 } 00:14:36.487 ] 00:14:36.487 }' 00:14:36.487 21:36:56 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:36.487 21:36:56 -- common/autotest_common.sh@10 -- # set +x 00:14:36.745 21:36:57 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:14:37.004 [2024-12-06 21:36:57.402844] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:37.004 [2024-12-06 21:36:57.402935] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x516000006f80 00:14:37.004 [2024-12-06 21:36:57.402949] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:14:37.004 [2024-12-06 21:36:57.403090] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d0000055f0 00:14:37.004 [2024-12-06 21:36:57.403457] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x516000006f80 00:14:37.004 [2024-12-06 21:36:57.403523] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x516000006f80 00:14:37.004 [2024-12-06 21:36:57.403806] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:37.004 BaseBdev2 00:14:37.004 21:36:57 -- bdev/bdev_raid.sh@257 
-- # waitforbdev BaseBdev2 00:14:37.004 21:36:57 -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:14:37.004 21:36:57 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:14:37.004 21:36:57 -- common/autotest_common.sh@899 -- # local i 00:14:37.004 21:36:57 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:14:37.004 21:36:57 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:14:37.004 21:36:57 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:14:37.264 21:36:57 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:14:37.523 [ 00:14:37.523 { 00:14:37.523 "name": "BaseBdev2", 00:14:37.523 "aliases": [ 00:14:37.523 "f9d606b8-8b27-4985-9162-1c92102feb3f" 00:14:37.523 ], 00:14:37.523 "product_name": "Malloc disk", 00:14:37.523 "block_size": 512, 00:14:37.523 "num_blocks": 65536, 00:14:37.523 "uuid": "f9d606b8-8b27-4985-9162-1c92102feb3f", 00:14:37.523 "assigned_rate_limits": { 00:14:37.523 "rw_ios_per_sec": 0, 00:14:37.523 "rw_mbytes_per_sec": 0, 00:14:37.523 "r_mbytes_per_sec": 0, 00:14:37.523 "w_mbytes_per_sec": 0 00:14:37.523 }, 00:14:37.523 "claimed": true, 00:14:37.523 "claim_type": "exclusive_write", 00:14:37.523 "zoned": false, 00:14:37.523 "supported_io_types": { 00:14:37.523 "read": true, 00:14:37.523 "write": true, 00:14:37.523 "unmap": true, 00:14:37.523 "write_zeroes": true, 00:14:37.523 "flush": true, 00:14:37.523 "reset": true, 00:14:37.523 "compare": false, 00:14:37.523 "compare_and_write": false, 00:14:37.523 "abort": true, 00:14:37.523 "nvme_admin": false, 00:14:37.523 "nvme_io": false 00:14:37.523 }, 00:14:37.523 "memory_domains": [ 00:14:37.523 { 00:14:37.523 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:37.523 "dma_device_type": 2 00:14:37.523 } 00:14:37.523 ], 00:14:37.523 "driver_specific": {} 00:14:37.523 } 00:14:37.523 ] 00:14:37.523 21:36:57 -- common/autotest_common.sh@905 -- # return 0 00:14:37.523 21:36:57 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:14:37.523 21:36:57 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:14:37.523 21:36:57 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online concat 64 2 00:14:37.524 21:36:57 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:14:37.524 21:36:57 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:14:37.524 21:36:57 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:14:37.524 21:36:57 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:14:37.524 21:36:57 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:14:37.524 21:36:57 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:37.524 21:36:57 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:14:37.524 21:36:57 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:37.524 21:36:57 -- bdev/bdev_raid.sh@125 -- # local tmp 00:14:37.524 21:36:57 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:37.524 21:36:57 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:37.782 21:36:58 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:14:37.782 "name": "Existed_Raid", 00:14:37.782 "uuid": "f223b624-c80e-4163-86f7-b4e1d65e53bf", 00:14:37.782 "strip_size_kb": 64, 00:14:37.782 "state": "online", 00:14:37.782 "raid_level": "concat", 00:14:37.782 "superblock": false, 
00:14:37.782 "num_base_bdevs": 2, 00:14:37.783 "num_base_bdevs_discovered": 2, 00:14:37.783 "num_base_bdevs_operational": 2, 00:14:37.783 "base_bdevs_list": [ 00:14:37.783 { 00:14:37.783 "name": "BaseBdev1", 00:14:37.783 "uuid": "528b2957-9b78-4770-8539-58741b0a1452", 00:14:37.783 "is_configured": true, 00:14:37.783 "data_offset": 0, 00:14:37.783 "data_size": 65536 00:14:37.783 }, 00:14:37.783 { 00:14:37.783 "name": "BaseBdev2", 00:14:37.783 "uuid": "f9d606b8-8b27-4985-9162-1c92102feb3f", 00:14:37.783 "is_configured": true, 00:14:37.783 "data_offset": 0, 00:14:37.783 "data_size": 65536 00:14:37.783 } 00:14:37.783 ] 00:14:37.783 }' 00:14:37.783 21:36:58 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:37.783 21:36:58 -- common/autotest_common.sh@10 -- # set +x 00:14:38.041 21:36:58 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:14:38.300 [2024-12-06 21:36:58.647449] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:38.300 [2024-12-06 21:36:58.647518] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:38.300 [2024-12-06 21:36:58.647592] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:38.300 21:36:58 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:14:38.300 21:36:58 -- bdev/bdev_raid.sh@264 -- # has_redundancy concat 00:14:38.300 21:36:58 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:14:38.300 21:36:58 -- bdev/bdev_raid.sh@197 -- # return 1 00:14:38.300 21:36:58 -- bdev/bdev_raid.sh@265 -- # expected_state=offline 00:14:38.300 21:36:58 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid offline concat 64 1 00:14:38.300 21:36:58 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:14:38.300 21:36:58 -- bdev/bdev_raid.sh@118 -- # local expected_state=offline 00:14:38.300 21:36:58 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:14:38.300 21:36:58 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:14:38.300 21:36:58 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:14:38.300 21:36:58 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:38.300 21:36:58 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:14:38.300 21:36:58 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:38.300 21:36:58 -- bdev/bdev_raid.sh@125 -- # local tmp 00:14:38.300 21:36:58 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:38.300 21:36:58 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:38.559 21:36:58 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:14:38.559 "name": "Existed_Raid", 00:14:38.559 "uuid": "f223b624-c80e-4163-86f7-b4e1d65e53bf", 00:14:38.559 "strip_size_kb": 64, 00:14:38.559 "state": "offline", 00:14:38.559 "raid_level": "concat", 00:14:38.559 "superblock": false, 00:14:38.559 "num_base_bdevs": 2, 00:14:38.559 "num_base_bdevs_discovered": 1, 00:14:38.559 "num_base_bdevs_operational": 1, 00:14:38.559 "base_bdevs_list": [ 00:14:38.559 { 00:14:38.559 "name": null, 00:14:38.559 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:38.559 "is_configured": false, 00:14:38.559 "data_offset": 0, 00:14:38.559 "data_size": 65536 00:14:38.559 }, 00:14:38.559 { 00:14:38.560 "name": "BaseBdev2", 00:14:38.560 "uuid": "f9d606b8-8b27-4985-9162-1c92102feb3f", 00:14:38.560 "is_configured": true, 00:14:38.560 "data_offset": 0, 00:14:38.560 
"data_size": 65536 00:14:38.560 } 00:14:38.560 ] 00:14:38.560 }' 00:14:38.560 21:36:58 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:38.560 21:36:58 -- common/autotest_common.sh@10 -- # set +x 00:14:39.128 21:36:59 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:14:39.128 21:36:59 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:14:39.128 21:36:59 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:39.128 21:36:59 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:14:39.128 21:36:59 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:14:39.128 21:36:59 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:39.128 21:36:59 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:14:39.387 [2024-12-06 21:36:59.771492] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:39.387 [2024-12-06 21:36:59.771590] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000006f80 name Existed_Raid, state offline 00:14:39.387 21:36:59 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:14:39.387 21:36:59 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:14:39.387 21:36:59 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:39.387 21:36:59 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:14:39.646 21:37:00 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:14:39.646 21:37:00 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:14:39.646 21:37:00 -- bdev/bdev_raid.sh@287 -- # killprocess 69512 00:14:39.646 21:37:00 -- common/autotest_common.sh@936 -- # '[' -z 69512 ']' 00:14:39.646 21:37:00 -- common/autotest_common.sh@940 -- # kill -0 69512 00:14:39.646 21:37:00 -- common/autotest_common.sh@941 -- # uname 00:14:39.646 21:37:00 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:14:39.646 21:37:00 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 69512 00:14:39.646 21:37:00 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:14:39.646 21:37:00 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:14:39.646 killing process with pid 69512 00:14:39.646 21:37:00 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 69512' 00:14:39.646 21:37:00 -- common/autotest_common.sh@955 -- # kill 69512 00:14:39.646 [2024-12-06 21:37:00.128841] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:39.646 21:37:00 -- common/autotest_common.sh@960 -- # wait 69512 00:14:39.646 [2024-12-06 21:37:00.128970] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:41.025 21:37:01 -- bdev/bdev_raid.sh@289 -- # return 0 00:14:41.025 00:14:41.025 real 0m8.661s 00:14:41.025 user 0m14.033s 00:14:41.025 sys 0m1.335s 00:14:41.025 21:37:01 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:14:41.025 ************************************ 00:14:41.025 END TEST raid_state_function_test 00:14:41.025 ************************************ 00:14:41.025 21:37:01 -- common/autotest_common.sh@10 -- # set +x 00:14:41.025 21:37:01 -- bdev/bdev_raid.sh@728 -- # run_test raid_state_function_test_sb raid_state_function_test concat 2 true 00:14:41.025 21:37:01 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:14:41.025 21:37:01 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:41.025 21:37:01 -- common/autotest_common.sh@10 -- # set +x 
00:14:41.025 ************************************ 00:14:41.025 START TEST raid_state_function_test_sb 00:14:41.025 ************************************ 00:14:41.025 21:37:01 -- common/autotest_common.sh@1114 -- # raid_state_function_test concat 2 true 00:14:41.025 21:37:01 -- bdev/bdev_raid.sh@202 -- # local raid_level=concat 00:14:41.025 21:37:01 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=2 00:14:41.025 21:37:01 -- bdev/bdev_raid.sh@204 -- # local superblock=true 00:14:41.025 21:37:01 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:14:41.025 21:37:01 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:14:41.025 21:37:01 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:14:41.025 21:37:01 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev1 00:14:41.025 21:37:01 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:14:41.025 21:37:01 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:14:41.025 21:37:01 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev2 00:14:41.025 21:37:01 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:14:41.025 21:37:01 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:14:41.025 21:37:01 -- bdev/bdev_raid.sh@206 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:14:41.025 21:37:01 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:14:41.025 21:37:01 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:14:41.025 21:37:01 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:14:41.025 21:37:01 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:14:41.025 21:37:01 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:14:41.025 21:37:01 -- bdev/bdev_raid.sh@212 -- # '[' concat '!=' raid1 ']' 00:14:41.025 21:37:01 -- bdev/bdev_raid.sh@213 -- # strip_size=64 00:14:41.025 21:37:01 -- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64' 00:14:41.025 21:37:01 -- bdev/bdev_raid.sh@219 -- # '[' true = true ']' 00:14:41.025 21:37:01 -- bdev/bdev_raid.sh@220 -- # superblock_create_arg=-s 00:14:41.025 21:37:01 -- bdev/bdev_raid.sh@226 -- # raid_pid=69798 00:14:41.025 Process raid pid: 69798 00:14:41.025 21:37:01 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 69798' 00:14:41.025 21:37:01 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:14:41.025 21:37:01 -- bdev/bdev_raid.sh@228 -- # waitforlisten 69798 /var/tmp/spdk-raid.sock 00:14:41.025 21:37:01 -- common/autotest_common.sh@829 -- # '[' -z 69798 ']' 00:14:41.025 21:37:01 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:14:41.025 21:37:01 -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:41.025 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:14:41.025 21:37:01 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:14:41.025 21:37:01 -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:41.025 21:37:01 -- common/autotest_common.sh@10 -- # set +x 00:14:41.025 [2024-12-06 21:37:01.373953] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:14:41.025 [2024-12-06 21:37:01.374136] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:41.284 [2024-12-06 21:37:01.546972] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:41.284 [2024-12-06 21:37:01.732289] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:41.583 [2024-12-06 21:37:01.913549] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:41.873 21:37:02 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:41.873 21:37:02 -- common/autotest_common.sh@862 -- # return 0 00:14:41.874 21:37:02 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:14:42.132 [2024-12-06 21:37:02.527965] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:42.132 [2024-12-06 21:37:02.528064] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:42.132 [2024-12-06 21:37:02.528087] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:42.133 [2024-12-06 21:37:02.528104] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:42.133 21:37:02 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:14:42.133 21:37:02 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:14:42.133 21:37:02 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:14:42.133 21:37:02 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:14:42.133 21:37:02 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:14:42.133 21:37:02 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:14:42.133 21:37:02 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:42.133 21:37:02 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:14:42.133 21:37:02 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:42.133 21:37:02 -- bdev/bdev_raid.sh@125 -- # local tmp 00:14:42.133 21:37:02 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:42.133 21:37:02 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:42.392 21:37:02 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:14:42.392 "name": "Existed_Raid", 00:14:42.392 "uuid": "4c8c469f-3ed2-4d87-a416-b8c8f96075fc", 00:14:42.392 "strip_size_kb": 64, 00:14:42.392 "state": "configuring", 00:14:42.392 "raid_level": "concat", 00:14:42.392 "superblock": true, 00:14:42.392 "num_base_bdevs": 2, 00:14:42.392 "num_base_bdevs_discovered": 0, 00:14:42.392 "num_base_bdevs_operational": 2, 00:14:42.392 "base_bdevs_list": [ 00:14:42.392 { 00:14:42.392 "name": "BaseBdev1", 00:14:42.392 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:42.392 "is_configured": false, 00:14:42.392 "data_offset": 0, 00:14:42.392 "data_size": 0 00:14:42.392 }, 00:14:42.392 { 00:14:42.392 "name": "BaseBdev2", 00:14:42.392 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:42.392 "is_configured": false, 00:14:42.392 "data_offset": 0, 00:14:42.392 "data_size": 0 00:14:42.392 } 00:14:42.392 ] 00:14:42.392 }' 00:14:42.392 21:37:02 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:42.393 21:37:02 -- 
common/autotest_common.sh@10 -- # set +x 00:14:42.652 21:37:03 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:14:42.911 [2024-12-06 21:37:03.323991] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:42.911 [2024-12-06 21:37:03.324049] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000006380 name Existed_Raid, state configuring 00:14:42.911 21:37:03 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:14:43.170 [2024-12-06 21:37:03.580143] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:43.170 [2024-12-06 21:37:03.580218] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:43.170 [2024-12-06 21:37:03.580242] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:43.170 [2024-12-06 21:37:03.580258] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:43.170 21:37:03 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:14:43.428 [2024-12-06 21:37:03.861115] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:43.428 BaseBdev1 00:14:43.428 21:37:03 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:14:43.428 21:37:03 -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:14:43.428 21:37:03 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:14:43.428 21:37:03 -- common/autotest_common.sh@899 -- # local i 00:14:43.428 21:37:03 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:14:43.428 21:37:03 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:14:43.428 21:37:03 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:14:43.687 21:37:04 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:14:43.946 [ 00:14:43.946 { 00:14:43.946 "name": "BaseBdev1", 00:14:43.946 "aliases": [ 00:14:43.946 "be1a3805-b079-4f00-b0a9-7f57c0dd8b5e" 00:14:43.946 ], 00:14:43.946 "product_name": "Malloc disk", 00:14:43.946 "block_size": 512, 00:14:43.946 "num_blocks": 65536, 00:14:43.946 "uuid": "be1a3805-b079-4f00-b0a9-7f57c0dd8b5e", 00:14:43.946 "assigned_rate_limits": { 00:14:43.946 "rw_ios_per_sec": 0, 00:14:43.946 "rw_mbytes_per_sec": 0, 00:14:43.946 "r_mbytes_per_sec": 0, 00:14:43.946 "w_mbytes_per_sec": 0 00:14:43.946 }, 00:14:43.946 "claimed": true, 00:14:43.946 "claim_type": "exclusive_write", 00:14:43.946 "zoned": false, 00:14:43.946 "supported_io_types": { 00:14:43.946 "read": true, 00:14:43.946 "write": true, 00:14:43.946 "unmap": true, 00:14:43.946 "write_zeroes": true, 00:14:43.946 "flush": true, 00:14:43.946 "reset": true, 00:14:43.946 "compare": false, 00:14:43.946 "compare_and_write": false, 00:14:43.946 "abort": true, 00:14:43.946 "nvme_admin": false, 00:14:43.946 "nvme_io": false 00:14:43.946 }, 00:14:43.946 "memory_domains": [ 00:14:43.946 { 00:14:43.946 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:43.946 "dma_device_type": 2 00:14:43.946 } 00:14:43.946 ], 00:14:43.946 "driver_specific": {} 00:14:43.946 } 00:14:43.946 ] 00:14:43.946 
21:37:04 -- common/autotest_common.sh@905 -- # return 0 00:14:43.946 21:37:04 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:14:43.946 21:37:04 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:14:43.946 21:37:04 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:14:43.946 21:37:04 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:14:43.946 21:37:04 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:14:43.946 21:37:04 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:14:43.946 21:37:04 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:43.946 21:37:04 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:14:43.946 21:37:04 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:43.946 21:37:04 -- bdev/bdev_raid.sh@125 -- # local tmp 00:14:43.946 21:37:04 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:43.946 21:37:04 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:44.205 21:37:04 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:14:44.205 "name": "Existed_Raid", 00:14:44.205 "uuid": "db6ee05f-4eac-4c83-9638-3519cb1706ae", 00:14:44.205 "strip_size_kb": 64, 00:14:44.205 "state": "configuring", 00:14:44.205 "raid_level": "concat", 00:14:44.205 "superblock": true, 00:14:44.205 "num_base_bdevs": 2, 00:14:44.205 "num_base_bdevs_discovered": 1, 00:14:44.205 "num_base_bdevs_operational": 2, 00:14:44.205 "base_bdevs_list": [ 00:14:44.205 { 00:14:44.205 "name": "BaseBdev1", 00:14:44.205 "uuid": "be1a3805-b079-4f00-b0a9-7f57c0dd8b5e", 00:14:44.205 "is_configured": true, 00:14:44.205 "data_offset": 2048, 00:14:44.205 "data_size": 63488 00:14:44.205 }, 00:14:44.205 { 00:14:44.205 "name": "BaseBdev2", 00:14:44.205 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:44.205 "is_configured": false, 00:14:44.205 "data_offset": 0, 00:14:44.205 "data_size": 0 00:14:44.205 } 00:14:44.205 ] 00:14:44.205 }' 00:14:44.205 21:37:04 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:44.205 21:37:04 -- common/autotest_common.sh@10 -- # set +x 00:14:44.464 21:37:04 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:14:44.723 [2024-12-06 21:37:05.053482] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:44.723 [2024-12-06 21:37:05.053570] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000006680 name Existed_Raid, state configuring 00:14:44.723 21:37:05 -- bdev/bdev_raid.sh@244 -- # '[' true = true ']' 00:14:44.723 21:37:05 -- bdev/bdev_raid.sh@246 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:14:44.980 21:37:05 -- bdev/bdev_raid.sh@247 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:14:45.237 BaseBdev1 00:14:45.237 21:37:05 -- bdev/bdev_raid.sh@248 -- # waitforbdev BaseBdev1 00:14:45.237 21:37:05 -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:14:45.237 21:37:05 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:14:45.237 21:37:05 -- common/autotest_common.sh@899 -- # local i 00:14:45.237 21:37:05 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:14:45.237 21:37:05 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:14:45.237 21:37:05 -- common/autotest_common.sh@902 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:14:45.495 21:37:05 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:14:45.753 [ 00:14:45.753 { 00:14:45.753 "name": "BaseBdev1", 00:14:45.753 "aliases": [ 00:14:45.753 "da23b1fe-8260-492d-85cd-2fbf5e99bbe8" 00:14:45.753 ], 00:14:45.753 "product_name": "Malloc disk", 00:14:45.753 "block_size": 512, 00:14:45.753 "num_blocks": 65536, 00:14:45.753 "uuid": "da23b1fe-8260-492d-85cd-2fbf5e99bbe8", 00:14:45.753 "assigned_rate_limits": { 00:14:45.753 "rw_ios_per_sec": 0, 00:14:45.753 "rw_mbytes_per_sec": 0, 00:14:45.753 "r_mbytes_per_sec": 0, 00:14:45.753 "w_mbytes_per_sec": 0 00:14:45.753 }, 00:14:45.753 "claimed": false, 00:14:45.753 "zoned": false, 00:14:45.753 "supported_io_types": { 00:14:45.753 "read": true, 00:14:45.753 "write": true, 00:14:45.753 "unmap": true, 00:14:45.753 "write_zeroes": true, 00:14:45.753 "flush": true, 00:14:45.753 "reset": true, 00:14:45.753 "compare": false, 00:14:45.753 "compare_and_write": false, 00:14:45.753 "abort": true, 00:14:45.753 "nvme_admin": false, 00:14:45.753 "nvme_io": false 00:14:45.753 }, 00:14:45.753 "memory_domains": [ 00:14:45.753 { 00:14:45.753 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:45.753 "dma_device_type": 2 00:14:45.753 } 00:14:45.753 ], 00:14:45.753 "driver_specific": {} 00:14:45.753 } 00:14:45.753 ] 00:14:45.753 21:37:06 -- common/autotest_common.sh@905 -- # return 0 00:14:45.753 21:37:06 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:14:46.012 [2024-12-06 21:37:06.307787] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:46.012 [2024-12-06 21:37:06.309933] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:46.012 [2024-12-06 21:37:06.309988] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:46.012 21:37:06 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:14:46.012 21:37:06 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:14:46.012 21:37:06 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:14:46.012 21:37:06 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:14:46.012 21:37:06 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:14:46.012 21:37:06 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:14:46.012 21:37:06 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:14:46.012 21:37:06 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:14:46.012 21:37:06 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:46.012 21:37:06 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:14:46.012 21:37:06 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:46.012 21:37:06 -- bdev/bdev_raid.sh@125 -- # local tmp 00:14:46.012 21:37:06 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:46.012 21:37:06 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:46.270 21:37:06 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:14:46.270 "name": "Existed_Raid", 00:14:46.270 "uuid": "cb194cca-79ce-41cd-8da5-22fae4449e1e", 00:14:46.270 "strip_size_kb": 64, 00:14:46.270 "state": 
"configuring", 00:14:46.270 "raid_level": "concat", 00:14:46.270 "superblock": true, 00:14:46.270 "num_base_bdevs": 2, 00:14:46.270 "num_base_bdevs_discovered": 1, 00:14:46.270 "num_base_bdevs_operational": 2, 00:14:46.270 "base_bdevs_list": [ 00:14:46.270 { 00:14:46.270 "name": "BaseBdev1", 00:14:46.270 "uuid": "da23b1fe-8260-492d-85cd-2fbf5e99bbe8", 00:14:46.270 "is_configured": true, 00:14:46.270 "data_offset": 2048, 00:14:46.270 "data_size": 63488 00:14:46.270 }, 00:14:46.270 { 00:14:46.270 "name": "BaseBdev2", 00:14:46.270 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:46.270 "is_configured": false, 00:14:46.270 "data_offset": 0, 00:14:46.270 "data_size": 0 00:14:46.270 } 00:14:46.270 ] 00:14:46.270 }' 00:14:46.270 21:37:06 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:46.270 21:37:06 -- common/autotest_common.sh@10 -- # set +x 00:14:46.528 21:37:06 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:14:46.786 [2024-12-06 21:37:07.197690] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:46.786 [2024-12-06 21:37:07.197945] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x516000007580 00:14:46.786 [2024-12-06 21:37:07.197964] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:14:46.786 [2024-12-06 21:37:07.198094] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d0000056c0 00:14:46.786 [2024-12-06 21:37:07.198470] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x516000007580 00:14:46.786 [2024-12-06 21:37:07.198503] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x516000007580 00:14:46.786 [2024-12-06 21:37:07.198666] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:46.786 BaseBdev2 00:14:46.786 21:37:07 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:14:46.786 21:37:07 -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:14:46.786 21:37:07 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:14:46.786 21:37:07 -- common/autotest_common.sh@899 -- # local i 00:14:46.786 21:37:07 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:14:46.786 21:37:07 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:14:46.786 21:37:07 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:14:47.043 21:37:07 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:14:47.301 [ 00:14:47.301 { 00:14:47.301 "name": "BaseBdev2", 00:14:47.301 "aliases": [ 00:14:47.301 "090b3ac2-bbaa-47ce-a81a-990d8f04ec09" 00:14:47.301 ], 00:14:47.301 "product_name": "Malloc disk", 00:14:47.301 "block_size": 512, 00:14:47.301 "num_blocks": 65536, 00:14:47.301 "uuid": "090b3ac2-bbaa-47ce-a81a-990d8f04ec09", 00:14:47.301 "assigned_rate_limits": { 00:14:47.301 "rw_ios_per_sec": 0, 00:14:47.301 "rw_mbytes_per_sec": 0, 00:14:47.301 "r_mbytes_per_sec": 0, 00:14:47.301 "w_mbytes_per_sec": 0 00:14:47.301 }, 00:14:47.301 "claimed": true, 00:14:47.301 "claim_type": "exclusive_write", 00:14:47.301 "zoned": false, 00:14:47.301 "supported_io_types": { 00:14:47.301 "read": true, 00:14:47.301 "write": true, 00:14:47.301 "unmap": true, 00:14:47.301 "write_zeroes": true, 00:14:47.301 "flush": true, 00:14:47.301 
"reset": true, 00:14:47.301 "compare": false, 00:14:47.301 "compare_and_write": false, 00:14:47.301 "abort": true, 00:14:47.301 "nvme_admin": false, 00:14:47.301 "nvme_io": false 00:14:47.301 }, 00:14:47.301 "memory_domains": [ 00:14:47.301 { 00:14:47.301 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:47.301 "dma_device_type": 2 00:14:47.301 } 00:14:47.301 ], 00:14:47.301 "driver_specific": {} 00:14:47.301 } 00:14:47.301 ] 00:14:47.301 21:37:07 -- common/autotest_common.sh@905 -- # return 0 00:14:47.301 21:37:07 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:14:47.301 21:37:07 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:14:47.301 21:37:07 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online concat 64 2 00:14:47.301 21:37:07 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:14:47.301 21:37:07 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:14:47.301 21:37:07 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:14:47.301 21:37:07 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:14:47.301 21:37:07 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:14:47.301 21:37:07 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:47.301 21:37:07 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:14:47.301 21:37:07 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:47.301 21:37:07 -- bdev/bdev_raid.sh@125 -- # local tmp 00:14:47.301 21:37:07 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:47.301 21:37:07 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:47.559 21:37:07 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:14:47.559 "name": "Existed_Raid", 00:14:47.559 "uuid": "cb194cca-79ce-41cd-8da5-22fae4449e1e", 00:14:47.559 "strip_size_kb": 64, 00:14:47.559 "state": "online", 00:14:47.559 "raid_level": "concat", 00:14:47.559 "superblock": true, 00:14:47.559 "num_base_bdevs": 2, 00:14:47.559 "num_base_bdevs_discovered": 2, 00:14:47.559 "num_base_bdevs_operational": 2, 00:14:47.559 "base_bdevs_list": [ 00:14:47.559 { 00:14:47.559 "name": "BaseBdev1", 00:14:47.559 "uuid": "da23b1fe-8260-492d-85cd-2fbf5e99bbe8", 00:14:47.559 "is_configured": true, 00:14:47.560 "data_offset": 2048, 00:14:47.560 "data_size": 63488 00:14:47.560 }, 00:14:47.560 { 00:14:47.560 "name": "BaseBdev2", 00:14:47.560 "uuid": "090b3ac2-bbaa-47ce-a81a-990d8f04ec09", 00:14:47.560 "is_configured": true, 00:14:47.560 "data_offset": 2048, 00:14:47.560 "data_size": 63488 00:14:47.560 } 00:14:47.560 ] 00:14:47.560 }' 00:14:47.560 21:37:07 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:47.560 21:37:07 -- common/autotest_common.sh@10 -- # set +x 00:14:48.124 21:37:08 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:14:48.124 [2024-12-06 21:37:08.542336] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:48.125 [2024-12-06 21:37:08.542379] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:48.125 [2024-12-06 21:37:08.542488] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:48.411 21:37:08 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:14:48.411 21:37:08 -- bdev/bdev_raid.sh@264 -- # has_redundancy concat 00:14:48.411 21:37:08 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:14:48.411 21:37:08 -- bdev/bdev_raid.sh@197 -- # return 1 00:14:48.411 
21:37:08 -- bdev/bdev_raid.sh@265 -- # expected_state=offline 00:14:48.411 21:37:08 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid offline concat 64 1 00:14:48.411 21:37:08 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:14:48.411 21:37:08 -- bdev/bdev_raid.sh@118 -- # local expected_state=offline 00:14:48.411 21:37:08 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:14:48.411 21:37:08 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:14:48.411 21:37:08 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:14:48.411 21:37:08 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:48.411 21:37:08 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:14:48.411 21:37:08 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:48.411 21:37:08 -- bdev/bdev_raid.sh@125 -- # local tmp 00:14:48.411 21:37:08 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:48.411 21:37:08 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:48.411 21:37:08 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:14:48.411 "name": "Existed_Raid", 00:14:48.411 "uuid": "cb194cca-79ce-41cd-8da5-22fae4449e1e", 00:14:48.411 "strip_size_kb": 64, 00:14:48.411 "state": "offline", 00:14:48.411 "raid_level": "concat", 00:14:48.411 "superblock": true, 00:14:48.411 "num_base_bdevs": 2, 00:14:48.411 "num_base_bdevs_discovered": 1, 00:14:48.411 "num_base_bdevs_operational": 1, 00:14:48.411 "base_bdevs_list": [ 00:14:48.411 { 00:14:48.411 "name": null, 00:14:48.411 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:48.411 "is_configured": false, 00:14:48.411 "data_offset": 2048, 00:14:48.411 "data_size": 63488 00:14:48.411 }, 00:14:48.411 { 00:14:48.411 "name": "BaseBdev2", 00:14:48.411 "uuid": "090b3ac2-bbaa-47ce-a81a-990d8f04ec09", 00:14:48.411 "is_configured": true, 00:14:48.411 "data_offset": 2048, 00:14:48.411 "data_size": 63488 00:14:48.411 } 00:14:48.411 ] 00:14:48.411 }' 00:14:48.411 21:37:08 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:48.411 21:37:08 -- common/autotest_common.sh@10 -- # set +x 00:14:48.976 21:37:09 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:14:48.976 21:37:09 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:14:48.976 21:37:09 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:48.976 21:37:09 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:14:48.976 21:37:09 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:14:48.976 21:37:09 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:48.976 21:37:09 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:14:49.233 [2024-12-06 21:37:09.674745] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:49.233 [2024-12-06 21:37:09.674841] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000007580 name Existed_Raid, state offline 00:14:49.490 21:37:09 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:14:49.490 21:37:09 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:14:49.490 21:37:09 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:49.490 21:37:09 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:14:49.748 21:37:09 -- bdev/bdev_raid.sh@281 -- # 
raid_bdev= 00:14:49.748 21:37:09 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:14:49.748 21:37:09 -- bdev/bdev_raid.sh@287 -- # killprocess 69798 00:14:49.748 21:37:09 -- common/autotest_common.sh@936 -- # '[' -z 69798 ']' 00:14:49.748 21:37:09 -- common/autotest_common.sh@940 -- # kill -0 69798 00:14:49.748 21:37:09 -- common/autotest_common.sh@941 -- # uname 00:14:49.748 21:37:10 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:14:49.748 21:37:10 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 69798 00:14:49.748 21:37:10 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:14:49.748 killing process with pid 69798 00:14:49.748 21:37:10 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:14:49.748 21:37:10 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 69798' 00:14:49.748 21:37:10 -- common/autotest_common.sh@955 -- # kill 69798 00:14:49.748 21:37:10 -- common/autotest_common.sh@960 -- # wait 69798 00:14:49.748 [2024-12-06 21:37:10.031609] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:49.748 [2024-12-06 21:37:10.031750] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:50.681 21:37:11 -- bdev/bdev_raid.sh@289 -- # return 0 00:14:50.681 00:14:50.681 real 0m9.865s 00:14:50.681 user 0m16.234s 00:14:50.681 sys 0m1.391s 00:14:50.681 21:37:11 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:14:50.681 21:37:11 -- common/autotest_common.sh@10 -- # set +x 00:14:50.681 ************************************ 00:14:50.681 END TEST raid_state_function_test_sb 00:14:50.681 ************************************ 00:14:50.940 21:37:11 -- bdev/bdev_raid.sh@729 -- # run_test raid_superblock_test raid_superblock_test concat 2 00:14:50.940 21:37:11 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:14:50.940 21:37:11 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:50.940 21:37:11 -- common/autotest_common.sh@10 -- # set +x 00:14:50.940 ************************************ 00:14:50.940 START TEST raid_superblock_test 00:14:50.940 ************************************ 00:14:50.940 21:37:11 -- common/autotest_common.sh@1114 -- # raid_superblock_test concat 2 00:14:50.940 21:37:11 -- bdev/bdev_raid.sh@338 -- # local raid_level=concat 00:14:50.940 21:37:11 -- bdev/bdev_raid.sh@339 -- # local num_base_bdevs=2 00:14:50.940 21:37:11 -- bdev/bdev_raid.sh@340 -- # base_bdevs_malloc=() 00:14:50.940 21:37:11 -- bdev/bdev_raid.sh@340 -- # local base_bdevs_malloc 00:14:50.940 21:37:11 -- bdev/bdev_raid.sh@341 -- # base_bdevs_pt=() 00:14:50.940 21:37:11 -- bdev/bdev_raid.sh@341 -- # local base_bdevs_pt 00:14:50.940 21:37:11 -- bdev/bdev_raid.sh@342 -- # base_bdevs_pt_uuid=() 00:14:50.940 21:37:11 -- bdev/bdev_raid.sh@342 -- # local base_bdevs_pt_uuid 00:14:50.940 21:37:11 -- bdev/bdev_raid.sh@343 -- # local raid_bdev_name=raid_bdev1 00:14:50.940 21:37:11 -- bdev/bdev_raid.sh@344 -- # local strip_size 00:14:50.940 21:37:11 -- bdev/bdev_raid.sh@345 -- # local strip_size_create_arg 00:14:50.940 21:37:11 -- bdev/bdev_raid.sh@346 -- # local raid_bdev_uuid 00:14:50.940 21:37:11 -- bdev/bdev_raid.sh@347 -- # local raid_bdev 00:14:50.940 21:37:11 -- bdev/bdev_raid.sh@349 -- # '[' concat '!=' raid1 ']' 00:14:50.940 21:37:11 -- bdev/bdev_raid.sh@350 -- # strip_size=64 00:14:50.940 21:37:11 -- bdev/bdev_raid.sh@351 -- # strip_size_create_arg='-z 64' 00:14:50.940 21:37:11 -- bdev/bdev_raid.sh@357 -- # raid_pid=70099 00:14:50.940 21:37:11 -- bdev/bdev_raid.sh@358 -- # waitforlisten 70099 
/var/tmp/spdk-raid.sock 00:14:50.940 21:37:11 -- common/autotest_common.sh@829 -- # '[' -z 70099 ']' 00:14:50.940 21:37:11 -- bdev/bdev_raid.sh@356 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:14:50.940 21:37:11 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:14:50.940 21:37:11 -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:50.940 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:14:50.940 21:37:11 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:14:50.940 21:37:11 -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:50.940 21:37:11 -- common/autotest_common.sh@10 -- # set +x 00:14:50.940 [2024-12-06 21:37:11.290852] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:14:50.940 [2024-12-06 21:37:11.291034] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70099 ] 00:14:51.198 [2024-12-06 21:37:11.467088] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:51.456 [2024-12-06 21:37:11.716714] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:51.456 [2024-12-06 21:37:11.912921] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:52.022 21:37:12 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:52.022 21:37:12 -- common/autotest_common.sh@862 -- # return 0 00:14:52.022 21:37:12 -- bdev/bdev_raid.sh@361 -- # (( i = 1 )) 00:14:52.022 21:37:12 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:14:52.022 21:37:12 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc1 00:14:52.022 21:37:12 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt1 00:14:52.022 21:37:12 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:14:52.022 21:37:12 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:14:52.022 21:37:12 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:14:52.022 21:37:12 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:14:52.022 21:37:12 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:14:52.022 malloc1 00:14:52.022 21:37:12 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:14:52.281 [2024-12-06 21:37:12.694274] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:14:52.281 [2024-12-06 21:37:12.694392] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:52.281 [2024-12-06 21:37:12.694431] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000006980 00:14:52.281 [2024-12-06 21:37:12.694446] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:52.281 [2024-12-06 21:37:12.697094] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:52.281 [2024-12-06 21:37:12.697153] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:14:52.281 pt1 00:14:52.281 21:37:12 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 
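Note: the setup loop traced above builds one base bdev per leg: a 32 MiB malloc bdev (65536 blocks of 512 bytes, per the dumps) wrapped in a passthru bdev with a fixed UUID. The same RPC sequence can be replayed by hand against the test socket; a sketch, assuming the bdev_svc app started above is still listening:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk-raid.sock
    for i in 1 2; do
        # Create the backing malloc bdev, then claim it with a passthru
        # bdev so the RAID layer sees a stable name and UUID.
        $rpc -s $sock bdev_malloc_create 32 512 -b malloc$i
        $rpc -s $sock bdev_passthru_create -b malloc$i -p pt$i \
            -u 00000000-0000-0000-0000-00000000000$i
    done
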
00:14:52.281 21:37:12 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:14:52.281 21:37:12 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc2 00:14:52.281 21:37:12 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt2 00:14:52.281 21:37:12 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:14:52.281 21:37:12 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:14:52.281 21:37:12 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:14:52.281 21:37:12 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:14:52.281 21:37:12 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:14:52.540 malloc2 00:14:52.540 21:37:12 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:14:52.799 [2024-12-06 21:37:13.204729] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:14:52.799 [2024-12-06 21:37:13.204836] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:52.799 [2024-12-06 21:37:13.204871] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000007580 00:14:52.799 [2024-12-06 21:37:13.204885] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:52.799 [2024-12-06 21:37:13.207615] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:52.799 [2024-12-06 21:37:13.207674] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:14:52.799 pt2 00:14:52.799 21:37:13 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:14:52.799 21:37:13 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:14:52.799 21:37:13 -- bdev/bdev_raid.sh@375 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'pt1 pt2' -n raid_bdev1 -s 00:14:53.057 [2024-12-06 21:37:13.416848] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:14:53.057 [2024-12-06 21:37:13.418944] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:14:53.057 [2024-12-06 21:37:13.419193] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x516000007b80 00:14:53.057 [2024-12-06 21:37:13.419211] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:14:53.057 [2024-12-06 21:37:13.419377] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d0000055f0 00:14:53.057 [2024-12-06 21:37:13.419808] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x516000007b80 00:14:53.057 [2024-12-06 21:37:13.419846] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x516000007b80 00:14:53.057 [2024-12-06 21:37:13.420054] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:53.057 21:37:13 -- bdev/bdev_raid.sh@376 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:14:53.057 21:37:13 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:14:53.057 21:37:13 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:14:53.057 21:37:13 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:14:53.057 21:37:13 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:14:53.057 21:37:13 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 
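Note: with both passthru legs in place, the array is assembled by a single bdev_raid_create call; `-r concat -z 64` selects the level and the 64 KiB strip size, and `-s` writes a superblock, which is why the dumps report data_offset 2048 (2048 of the 65536 blocks are reserved, leaving data_size 63488). A sketch of the create-and-verify step, reusing the socket and jq filter from the trace:

    $rpc -s $sock bdev_raid_create -z 64 -r concat -b 'pt1 pt2' -n raid_bdev1 -s
    # The new array must come up online with both legs discovered.
    state=$($rpc -s $sock bdev_raid_get_bdevs all |
        jq -r '.[] | select(.name == "raid_bdev1") | .state')
    [[ $state == online ]]
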
00:14:53.057 21:37:13 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:53.057 21:37:13 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:14:53.057 21:37:13 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:53.057 21:37:13 -- bdev/bdev_raid.sh@125 -- # local tmp 00:14:53.057 21:37:13 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:53.057 21:37:13 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:53.315 21:37:13 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:14:53.315 "name": "raid_bdev1", 00:14:53.315 "uuid": "b1041f0b-4240-4502-8996-8bdf0823e1a6", 00:14:53.315 "strip_size_kb": 64, 00:14:53.315 "state": "online", 00:14:53.315 "raid_level": "concat", 00:14:53.315 "superblock": true, 00:14:53.315 "num_base_bdevs": 2, 00:14:53.315 "num_base_bdevs_discovered": 2, 00:14:53.315 "num_base_bdevs_operational": 2, 00:14:53.315 "base_bdevs_list": [ 00:14:53.315 { 00:14:53.315 "name": "pt1", 00:14:53.315 "uuid": "41dc523d-e8af-5129-bbe0-9e0b8da37dc6", 00:14:53.315 "is_configured": true, 00:14:53.315 "data_offset": 2048, 00:14:53.315 "data_size": 63488 00:14:53.315 }, 00:14:53.315 { 00:14:53.315 "name": "pt2", 00:14:53.315 "uuid": "df53bde2-936b-5bf0-95a3-4a7bbac03ce1", 00:14:53.315 "is_configured": true, 00:14:53.315 "data_offset": 2048, 00:14:53.315 "data_size": 63488 00:14:53.315 } 00:14:53.315 ] 00:14:53.315 }' 00:14:53.315 21:37:13 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:53.315 21:37:13 -- common/autotest_common.sh@10 -- # set +x 00:14:53.573 21:37:13 -- bdev/bdev_raid.sh@379 -- # jq -r '.[] | .uuid' 00:14:53.574 21:37:13 -- bdev/bdev_raid.sh@379 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:14:53.832 [2024-12-06 21:37:14.173141] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:53.832 21:37:14 -- bdev/bdev_raid.sh@379 -- # raid_bdev_uuid=b1041f0b-4240-4502-8996-8bdf0823e1a6 00:14:53.832 21:37:14 -- bdev/bdev_raid.sh@380 -- # '[' -z b1041f0b-4240-4502-8996-8bdf0823e1a6 ']' 00:14:53.832 21:37:14 -- bdev/bdev_raid.sh@385 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:14:54.091 [2024-12-06 21:37:14.421099] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:54.091 [2024-12-06 21:37:14.421142] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:54.091 [2024-12-06 21:37:14.421230] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:54.091 [2024-12-06 21:37:14.421295] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:54.091 [2024-12-06 21:37:14.421310] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000007b80 name raid_bdev1, state offline 00:14:54.091 21:37:14 -- bdev/bdev_raid.sh@386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:54.091 21:37:14 -- bdev/bdev_raid.sh@386 -- # jq -r '.[]' 00:14:54.350 21:37:14 -- bdev/bdev_raid.sh@386 -- # raid_bdev= 00:14:54.350 21:37:14 -- bdev/bdev_raid.sh@387 -- # '[' -n '' ']' 00:14:54.350 21:37:14 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:14:54.350 21:37:14 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 
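Note: teardown mirrors setup in reverse, and the trace above confirms the ordering: the superblock UUID is read back first, the array is deleted (its state flips to offline while the io device unwinds), and only then are the passthru bdevs removed, at which point bdev_raid_get_bdevs returns nothing. A sketch of the same teardown, assuming the $rpc/$sock shorthand from the setup sketch:

    uuid=$($rpc -s $sock bdev_get_bdevs -b raid_bdev1 | jq -r '.[] | .uuid')
    [[ -n $uuid ]]                               # superblock UUID must be present
    $rpc -s $sock bdev_raid_delete raid_bdev1    # drop the array first
    for pt in pt1 pt2; do
        $rpc -s $sock bdev_passthru_delete $pt   # then release the base bdevs
    done
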
00:14:54.609 21:37:14 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:14:54.609 21:37:14 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:14:54.609 21:37:15 -- bdev/bdev_raid.sh@395 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:14:54.609 21:37:15 -- bdev/bdev_raid.sh@395 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:14:55.173 21:37:15 -- bdev/bdev_raid.sh@395 -- # '[' false == true ']' 00:14:55.173 21:37:15 -- bdev/bdev_raid.sh@401 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2' -n raid_bdev1 00:14:55.173 21:37:15 -- common/autotest_common.sh@650 -- # local es=0 00:14:55.173 21:37:15 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2' -n raid_bdev1 00:14:55.173 21:37:15 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:55.173 21:37:15 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:55.173 21:37:15 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:55.173 21:37:15 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:55.173 21:37:15 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:55.173 21:37:15 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:55.173 21:37:15 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:55.173 21:37:15 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:14:55.173 21:37:15 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2' -n raid_bdev1 00:14:55.173 [2024-12-06 21:37:15.629422] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:14:55.173 [2024-12-06 21:37:15.631581] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:14:55.173 [2024-12-06 21:37:15.631697] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc1 00:14:55.173 [2024-12-06 21:37:15.631784] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc2 00:14:55.173 [2024-12-06 21:37:15.631812] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:55.173 [2024-12-06 21:37:15.631825] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000008180 name raid_bdev1, state configuring 00:14:55.173 request: 00:14:55.173 { 00:14:55.173 "name": "raid_bdev1", 00:14:55.173 "raid_level": "concat", 00:14:55.173 "base_bdevs": [ 00:14:55.173 "malloc1", 00:14:55.173 "malloc2" 00:14:55.173 ], 00:14:55.173 "superblock": false, 00:14:55.173 "strip_size_kb": 64, 00:14:55.173 "method": "bdev_raid_create", 00:14:55.173 "req_id": 1 00:14:55.173 } 00:14:55.173 Got JSON-RPC error response 00:14:55.173 response: 00:14:55.173 { 00:14:55.173 "code": -17, 00:14:55.173 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:14:55.173 } 00:14:55.173 21:37:15 -- common/autotest_common.sh@653 -- # es=1 00:14:55.173 21:37:15 -- 
common/autotest_common.sh@661 -- # (( es > 128 )) 00:14:55.173 21:37:15 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:14:55.173 21:37:15 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:14:55.173 21:37:15 -- bdev/bdev_raid.sh@403 -- # jq -r '.[]' 00:14:55.173 21:37:15 -- bdev/bdev_raid.sh@403 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:55.430 21:37:15 -- bdev/bdev_raid.sh@403 -- # raid_bdev= 00:14:55.430 21:37:15 -- bdev/bdev_raid.sh@404 -- # '[' -n '' ']' 00:14:55.430 21:37:15 -- bdev/bdev_raid.sh@409 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:14:55.688 [2024-12-06 21:37:16.081418] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:14:55.688 [2024-12-06 21:37:16.081489] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:55.688 [2024-12-06 21:37:16.081520] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000008780 00:14:55.688 [2024-12-06 21:37:16.081533] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:55.688 pt1 00:14:55.688 [2024-12-06 21:37:16.083900] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:55.688 [2024-12-06 21:37:16.083938] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:14:55.688 [2024-12-06 21:37:16.084064] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1 00:14:55.688 [2024-12-06 21:37:16.084124] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:14:55.688 21:37:16 -- bdev/bdev_raid.sh@412 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 2 00:14:55.688 21:37:16 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:14:55.688 21:37:16 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:14:55.688 21:37:16 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:14:55.688 21:37:16 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:14:55.688 21:37:16 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:14:55.688 21:37:16 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:55.688 21:37:16 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:14:55.688 21:37:16 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:55.688 21:37:16 -- bdev/bdev_raid.sh@125 -- # local tmp 00:14:55.688 21:37:16 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:55.688 21:37:16 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:55.945 21:37:16 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:14:55.945 "name": "raid_bdev1", 00:14:55.945 "uuid": "b1041f0b-4240-4502-8996-8bdf0823e1a6", 00:14:55.945 "strip_size_kb": 64, 00:14:55.945 "state": "configuring", 00:14:55.945 "raid_level": "concat", 00:14:55.945 "superblock": true, 00:14:55.945 "num_base_bdevs": 2, 00:14:55.945 "num_base_bdevs_discovered": 1, 00:14:55.945 "num_base_bdevs_operational": 2, 00:14:55.945 "base_bdevs_list": [ 00:14:55.945 { 00:14:55.945 "name": "pt1", 00:14:55.945 "uuid": "41dc523d-e8af-5129-bbe0-9e0b8da37dc6", 00:14:55.945 "is_configured": true, 00:14:55.945 "data_offset": 2048, 00:14:55.945 "data_size": 63488 00:14:55.945 }, 00:14:55.945 { 00:14:55.945 "name": null, 00:14:55.945 "uuid": 
"df53bde2-936b-5bf0-95a3-4a7bbac03ce1", 00:14:55.945 "is_configured": false, 00:14:55.945 "data_offset": 2048, 00:14:55.945 "data_size": 63488 00:14:55.945 } 00:14:55.945 ] 00:14:55.945 }' 00:14:55.945 21:37:16 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:55.945 21:37:16 -- common/autotest_common.sh@10 -- # set +x 00:14:56.262 21:37:16 -- bdev/bdev_raid.sh@414 -- # '[' 2 -gt 2 ']' 00:14:56.262 21:37:16 -- bdev/bdev_raid.sh@422 -- # (( i = 1 )) 00:14:56.262 21:37:16 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:14:56.262 21:37:16 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:14:56.537 [2024-12-06 21:37:16.945808] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:14:56.537 [2024-12-06 21:37:16.945911] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:56.537 [2024-12-06 21:37:16.945952] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000009080 00:14:56.537 [2024-12-06 21:37:16.945966] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:56.537 [2024-12-06 21:37:16.946459] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:56.537 [2024-12-06 21:37:16.946525] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:14:56.537 [2024-12-06 21:37:16.946664] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:14:56.537 [2024-12-06 21:37:16.946693] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:14:56.537 [2024-12-06 21:37:16.946838] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x516000008d80 00:14:56.537 [2024-12-06 21:37:16.946853] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:14:56.537 [2024-12-06 21:37:16.946988] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d0000056c0 00:14:56.537 [2024-12-06 21:37:16.947453] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x516000008d80 00:14:56.537 [2024-12-06 21:37:16.947481] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x516000008d80 00:14:56.537 [2024-12-06 21:37:16.947661] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:56.537 pt2 00:14:56.537 21:37:16 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:14:56.537 21:37:16 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:14:56.537 21:37:16 -- bdev/bdev_raid.sh@427 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:14:56.537 21:37:16 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:14:56.537 21:37:16 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:14:56.537 21:37:16 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:14:56.537 21:37:16 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:14:56.537 21:37:16 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:14:56.537 21:37:16 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:56.537 21:37:16 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:14:56.537 21:37:16 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:56.537 21:37:16 -- bdev/bdev_raid.sh@125 -- # local tmp 00:14:56.537 21:37:16 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:56.537 21:37:16 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:56.794 21:37:17 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:14:56.794 "name": "raid_bdev1", 00:14:56.794 "uuid": "b1041f0b-4240-4502-8996-8bdf0823e1a6", 00:14:56.794 "strip_size_kb": 64, 00:14:56.794 "state": "online", 00:14:56.794 "raid_level": "concat", 00:14:56.794 "superblock": true, 00:14:56.794 "num_base_bdevs": 2, 00:14:56.794 "num_base_bdevs_discovered": 2, 00:14:56.794 "num_base_bdevs_operational": 2, 00:14:56.794 "base_bdevs_list": [ 00:14:56.794 { 00:14:56.794 "name": "pt1", 00:14:56.794 "uuid": "41dc523d-e8af-5129-bbe0-9e0b8da37dc6", 00:14:56.794 "is_configured": true, 00:14:56.794 "data_offset": 2048, 00:14:56.794 "data_size": 63488 00:14:56.794 }, 00:14:56.794 { 00:14:56.794 "name": "pt2", 00:14:56.794 "uuid": "df53bde2-936b-5bf0-95a3-4a7bbac03ce1", 00:14:56.794 "is_configured": true, 00:14:56.794 "data_offset": 2048, 00:14:56.794 "data_size": 63488 00:14:56.794 } 00:14:56.794 ] 00:14:56.794 }' 00:14:56.794 21:37:17 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:56.794 21:37:17 -- common/autotest_common.sh@10 -- # set +x 00:14:57.359 21:37:17 -- bdev/bdev_raid.sh@430 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:14:57.359 21:37:17 -- bdev/bdev_raid.sh@430 -- # jq -r '.[] | .uuid' 00:14:57.359 [2024-12-06 21:37:17.798224] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:57.359 21:37:17 -- bdev/bdev_raid.sh@430 -- # '[' b1041f0b-4240-4502-8996-8bdf0823e1a6 '!=' b1041f0b-4240-4502-8996-8bdf0823e1a6 ']' 00:14:57.359 21:37:17 -- bdev/bdev_raid.sh@434 -- # has_redundancy concat 00:14:57.359 21:37:17 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:14:57.359 21:37:17 -- bdev/bdev_raid.sh@197 -- # return 1 00:14:57.359 21:37:17 -- bdev/bdev_raid.sh@511 -- # killprocess 70099 00:14:57.359 21:37:17 -- common/autotest_common.sh@936 -- # '[' -z 70099 ']' 00:14:57.359 21:37:17 -- common/autotest_common.sh@940 -- # kill -0 70099 00:14:57.359 21:37:17 -- common/autotest_common.sh@941 -- # uname 00:14:57.359 21:37:17 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:14:57.359 21:37:17 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 70099 00:14:57.359 killing process with pid 70099 00:14:57.359 21:37:17 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:14:57.360 21:37:17 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:14:57.360 21:37:17 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 70099' 00:14:57.360 21:37:17 -- common/autotest_common.sh@955 -- # kill 70099 00:14:57.360 [2024-12-06 21:37:17.852490] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:57.360 [2024-12-06 21:37:17.852598] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:57.360 21:37:17 -- common/autotest_common.sh@960 -- # wait 70099 00:14:57.360 [2024-12-06 21:37:17.852659] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:57.360 [2024-12-06 21:37:17.852681] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000008d80 name raid_bdev1, state offline 00:14:57.618 [2024-12-06 21:37:18.005485] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:58.993 21:37:19 -- bdev/bdev_raid.sh@513 -- # return 0 00:14:58.993 00:14:58.993 real 0m7.840s 00:14:58.993 user 
0m12.666s 00:14:58.993 sys 0m1.081s 00:14:58.993 21:37:19 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:14:58.993 ************************************ 00:14:58.993 END TEST raid_superblock_test 00:14:58.993 21:37:19 -- common/autotest_common.sh@10 -- # set +x 00:14:58.993 ************************************ 00:14:58.993 21:37:19 -- bdev/bdev_raid.sh@726 -- # for level in raid0 concat raid1 00:14:58.993 21:37:19 -- bdev/bdev_raid.sh@727 -- # run_test raid_state_function_test raid_state_function_test raid1 2 false 00:14:58.993 21:37:19 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:14:58.993 21:37:19 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:58.993 21:37:19 -- common/autotest_common.sh@10 -- # set +x 00:14:58.993 ************************************ 00:14:58.993 START TEST raid_state_function_test 00:14:58.993 ************************************ 00:14:58.993 21:37:19 -- common/autotest_common.sh@1114 -- # raid_state_function_test raid1 2 false 00:14:58.993 21:37:19 -- bdev/bdev_raid.sh@202 -- # local raid_level=raid1 00:14:58.993 21:37:19 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=2 00:14:58.993 21:37:19 -- bdev/bdev_raid.sh@204 -- # local superblock=false 00:14:58.993 21:37:19 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:14:58.993 21:37:19 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:14:58.993 21:37:19 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:14:58.993 21:37:19 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev1 00:14:58.993 21:37:19 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:14:58.993 21:37:19 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:14:58.993 21:37:19 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev2 00:14:58.993 21:37:19 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:14:58.993 21:37:19 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:14:58.993 21:37:19 -- bdev/bdev_raid.sh@206 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:14:58.993 21:37:19 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:14:58.993 21:37:19 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:14:58.993 21:37:19 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:14:58.993 21:37:19 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:14:58.993 21:37:19 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:14:58.993 21:37:19 -- bdev/bdev_raid.sh@212 -- # '[' raid1 '!=' raid1 ']' 00:14:58.993 21:37:19 -- bdev/bdev_raid.sh@216 -- # strip_size=0 00:14:58.993 21:37:19 -- bdev/bdev_raid.sh@219 -- # '[' false = true ']' 00:14:58.993 Process raid pid: 70328 00:14:58.993 21:37:19 -- bdev/bdev_raid.sh@222 -- # superblock_create_arg= 00:14:58.993 21:37:19 -- bdev/bdev_raid.sh@226 -- # raid_pid=70328 00:14:58.993 21:37:19 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 70328' 00:14:58.993 21:37:19 -- bdev/bdev_raid.sh@228 -- # waitforlisten 70328 /var/tmp/spdk-raid.sock 00:14:58.993 21:37:19 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:14:58.993 21:37:19 -- common/autotest_common.sh@829 -- # '[' -z 70328 ']' 00:14:58.993 21:37:19 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:14:58.993 21:37:19 -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:58.993 21:37:19 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 
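Note: each test in this file drives its own bdev_svc app over a private UNIX-domain RPC socket and records the PID so killprocess can reap it at the end; waitforlisten (an autotest_common.sh helper) blocks until the socket accepts RPCs. A sketch of the launch step traced above (the backgrounding is an assumption; the log only shows the command and the resulting raid_pid):

    /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc \
        -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid &
    raid_pid=$!
    # Block until the app is up and the RPC socket is ready.
    waitforlisten $raid_pid /var/tmp/spdk-raid.sock
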
00:14:58.993 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:14:58.993 21:37:19 -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:58.993 21:37:19 -- common/autotest_common.sh@10 -- # set +x 00:14:58.993 [2024-12-06 21:37:19.182135] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:14:58.993 [2024-12-06 21:37:19.182495] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:58.993 [2024-12-06 21:37:19.353987] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:59.253 [2024-12-06 21:37:19.572545] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:59.253 [2024-12-06 21:37:19.748240] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:59.821 21:37:20 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:59.821 21:37:20 -- common/autotest_common.sh@862 -- # return 0 00:14:59.821 21:37:20 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:14:59.821 [2024-12-06 21:37:20.291142] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:59.821 [2024-12-06 21:37:20.291204] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:59.821 [2024-12-06 21:37:20.291220] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:59.821 [2024-12-06 21:37:20.291235] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:59.821 21:37:20 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:14:59.821 21:37:20 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:14:59.821 21:37:20 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:14:59.821 21:37:20 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:14:59.821 21:37:20 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:14:59.821 21:37:20 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:14:59.821 21:37:20 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:59.821 21:37:20 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:14:59.821 21:37:20 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:59.821 21:37:20 -- bdev/bdev_raid.sh@125 -- # local tmp 00:14:59.821 21:37:20 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:59.821 21:37:20 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:00.080 21:37:20 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:00.080 "name": "Existed_Raid", 00:15:00.080 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:00.080 "strip_size_kb": 0, 00:15:00.080 "state": "configuring", 00:15:00.080 "raid_level": "raid1", 00:15:00.080 "superblock": false, 00:15:00.080 "num_base_bdevs": 2, 00:15:00.080 "num_base_bdevs_discovered": 0, 00:15:00.080 "num_base_bdevs_operational": 2, 00:15:00.080 "base_bdevs_list": [ 00:15:00.080 { 00:15:00.080 "name": "BaseBdev1", 00:15:00.080 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:00.080 "is_configured": false, 00:15:00.080 "data_offset": 0, 00:15:00.080 "data_size": 0 
00:15:00.080 }, 00:15:00.080 { 00:15:00.080 "name": "BaseBdev2", 00:15:00.080 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:00.080 "is_configured": false, 00:15:00.080 "data_offset": 0, 00:15:00.080 "data_size": 0 00:15:00.080 } 00:15:00.080 ] 00:15:00.080 }' 00:15:00.080 21:37:20 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:00.080 21:37:20 -- common/autotest_common.sh@10 -- # set +x 00:15:00.339 21:37:20 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:15:00.598 [2024-12-06 21:37:21.007215] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:00.598 [2024-12-06 21:37:21.007266] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000006380 name Existed_Raid, state configuring 00:15:00.598 21:37:21 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:15:00.856 [2024-12-06 21:37:21.215303] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:00.856 [2024-12-06 21:37:21.215378] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:00.857 [2024-12-06 21:37:21.215400] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:00.857 [2024-12-06 21:37:21.215415] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:00.857 21:37:21 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:15:01.115 [2024-12-06 21:37:21.483881] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:01.115 BaseBdev1 00:15:01.115 21:37:21 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:15:01.115 21:37:21 -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:15:01.115 21:37:21 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:15:01.115 21:37:21 -- common/autotest_common.sh@899 -- # local i 00:15:01.115 21:37:21 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:15:01.115 21:37:21 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:15:01.115 21:37:21 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:01.374 21:37:21 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:01.632 [ 00:15:01.632 { 00:15:01.632 "name": "BaseBdev1", 00:15:01.632 "aliases": [ 00:15:01.632 "9ddbb4bb-4599-4b94-8723-ca83d03f796e" 00:15:01.632 ], 00:15:01.632 "product_name": "Malloc disk", 00:15:01.632 "block_size": 512, 00:15:01.632 "num_blocks": 65536, 00:15:01.632 "uuid": "9ddbb4bb-4599-4b94-8723-ca83d03f796e", 00:15:01.632 "assigned_rate_limits": { 00:15:01.632 "rw_ios_per_sec": 0, 00:15:01.632 "rw_mbytes_per_sec": 0, 00:15:01.632 "r_mbytes_per_sec": 0, 00:15:01.632 "w_mbytes_per_sec": 0 00:15:01.632 }, 00:15:01.632 "claimed": true, 00:15:01.632 "claim_type": "exclusive_write", 00:15:01.632 "zoned": false, 00:15:01.632 "supported_io_types": { 00:15:01.632 "read": true, 00:15:01.632 "write": true, 00:15:01.632 "unmap": true, 00:15:01.632 "write_zeroes": true, 00:15:01.632 "flush": true, 00:15:01.632 "reset": true, 00:15:01.632 "compare": false, 00:15:01.632 "compare_and_write": false, 
00:15:01.632 "abort": true, 00:15:01.632 "nvme_admin": false, 00:15:01.632 "nvme_io": false 00:15:01.632 }, 00:15:01.632 "memory_domains": [ 00:15:01.632 { 00:15:01.632 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:01.632 "dma_device_type": 2 00:15:01.632 } 00:15:01.632 ], 00:15:01.632 "driver_specific": {} 00:15:01.632 } 00:15:01.632 ] 00:15:01.632 21:37:21 -- common/autotest_common.sh@905 -- # return 0 00:15:01.632 21:37:21 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:15:01.632 21:37:21 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:01.632 21:37:21 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:15:01.632 21:37:21 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:15:01.632 21:37:21 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:15:01.632 21:37:21 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:15:01.632 21:37:21 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:01.632 21:37:21 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:01.632 21:37:21 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:01.632 21:37:21 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:01.632 21:37:21 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:01.632 21:37:21 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:01.632 21:37:22 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:01.632 "name": "Existed_Raid", 00:15:01.632 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:01.632 "strip_size_kb": 0, 00:15:01.632 "state": "configuring", 00:15:01.632 "raid_level": "raid1", 00:15:01.632 "superblock": false, 00:15:01.632 "num_base_bdevs": 2, 00:15:01.633 "num_base_bdevs_discovered": 1, 00:15:01.633 "num_base_bdevs_operational": 2, 00:15:01.633 "base_bdevs_list": [ 00:15:01.633 { 00:15:01.633 "name": "BaseBdev1", 00:15:01.633 "uuid": "9ddbb4bb-4599-4b94-8723-ca83d03f796e", 00:15:01.633 "is_configured": true, 00:15:01.633 "data_offset": 0, 00:15:01.633 "data_size": 65536 00:15:01.633 }, 00:15:01.633 { 00:15:01.633 "name": "BaseBdev2", 00:15:01.633 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:01.633 "is_configured": false, 00:15:01.633 "data_offset": 0, 00:15:01.633 "data_size": 0 00:15:01.633 } 00:15:01.633 ] 00:15:01.633 }' 00:15:01.633 21:37:22 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:01.633 21:37:22 -- common/autotest_common.sh@10 -- # set +x 00:15:01.890 21:37:22 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:15:02.147 [2024-12-06 21:37:22.608363] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:02.147 [2024-12-06 21:37:22.608441] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000006680 name Existed_Raid, state configuring 00:15:02.147 21:37:22 -- bdev/bdev_raid.sh@244 -- # '[' false = true ']' 00:15:02.147 21:37:22 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:15:02.405 [2024-12-06 21:37:22.816438] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:02.405 [2024-12-06 21:37:22.818469] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:02.405 [2024-12-06 21:37:22.818535] bdev_raid_rpc.c: 
302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:02.405 21:37:22 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:15:02.405 21:37:22 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:15:02.405 21:37:22 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:15:02.405 21:37:22 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:02.405 21:37:22 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:15:02.405 21:37:22 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:15:02.405 21:37:22 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:15:02.405 21:37:22 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:15:02.405 21:37:22 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:02.405 21:37:22 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:02.405 21:37:22 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:02.405 21:37:22 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:02.405 21:37:22 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:02.405 21:37:22 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:02.663 21:37:23 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:02.663 "name": "Existed_Raid", 00:15:02.663 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:02.663 "strip_size_kb": 0, 00:15:02.663 "state": "configuring", 00:15:02.663 "raid_level": "raid1", 00:15:02.663 "superblock": false, 00:15:02.663 "num_base_bdevs": 2, 00:15:02.663 "num_base_bdevs_discovered": 1, 00:15:02.663 "num_base_bdevs_operational": 2, 00:15:02.663 "base_bdevs_list": [ 00:15:02.663 { 00:15:02.663 "name": "BaseBdev1", 00:15:02.663 "uuid": "9ddbb4bb-4599-4b94-8723-ca83d03f796e", 00:15:02.663 "is_configured": true, 00:15:02.663 "data_offset": 0, 00:15:02.663 "data_size": 65536 00:15:02.663 }, 00:15:02.663 { 00:15:02.663 "name": "BaseBdev2", 00:15:02.663 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:02.663 "is_configured": false, 00:15:02.663 "data_offset": 0, 00:15:02.663 "data_size": 0 00:15:02.663 } 00:15:02.663 ] 00:15:02.663 }' 00:15:02.663 21:37:23 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:02.663 21:37:23 -- common/autotest_common.sh@10 -- # set +x 00:15:02.921 21:37:23 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:15:03.180 [2024-12-06 21:37:23.583010] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:03.180 [2024-12-06 21:37:23.583088] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x516000006f80 00:15:03.180 [2024-12-06 21:37:23.583101] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:15:03.180 [2024-12-06 21:37:23.583216] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d0000055f0 00:15:03.180 [2024-12-06 21:37:23.583660] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x516000006f80 00:15:03.180 [2024-12-06 21:37:23.583694] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x516000006f80 00:15:03.180 [2024-12-06 21:37:23.583976] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:03.180 BaseBdev2 00:15:03.180 21:37:23 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:15:03.180 21:37:23 -- 
common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:15:03.180 21:37:23 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:15:03.180 21:37:23 -- common/autotest_common.sh@899 -- # local i 00:15:03.180 21:37:23 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:15:03.180 21:37:23 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:15:03.180 21:37:23 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:03.439 21:37:23 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:15:03.699 [ 00:15:03.699 { 00:15:03.699 "name": "BaseBdev2", 00:15:03.699 "aliases": [ 00:15:03.699 "918a6a9f-f2f7-4e10-a85d-36aba137c032" 00:15:03.699 ], 00:15:03.699 "product_name": "Malloc disk", 00:15:03.699 "block_size": 512, 00:15:03.699 "num_blocks": 65536, 00:15:03.699 "uuid": "918a6a9f-f2f7-4e10-a85d-36aba137c032", 00:15:03.699 "assigned_rate_limits": { 00:15:03.699 "rw_ios_per_sec": 0, 00:15:03.699 "rw_mbytes_per_sec": 0, 00:15:03.699 "r_mbytes_per_sec": 0, 00:15:03.699 "w_mbytes_per_sec": 0 00:15:03.699 }, 00:15:03.699 "claimed": true, 00:15:03.699 "claim_type": "exclusive_write", 00:15:03.699 "zoned": false, 00:15:03.699 "supported_io_types": { 00:15:03.699 "read": true, 00:15:03.699 "write": true, 00:15:03.699 "unmap": true, 00:15:03.699 "write_zeroes": true, 00:15:03.699 "flush": true, 00:15:03.699 "reset": true, 00:15:03.699 "compare": false, 00:15:03.699 "compare_and_write": false, 00:15:03.699 "abort": true, 00:15:03.699 "nvme_admin": false, 00:15:03.699 "nvme_io": false 00:15:03.699 }, 00:15:03.699 "memory_domains": [ 00:15:03.699 { 00:15:03.699 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:03.699 "dma_device_type": 2 00:15:03.699 } 00:15:03.699 ], 00:15:03.699 "driver_specific": {} 00:15:03.699 } 00:15:03.699 ] 00:15:03.699 21:37:23 -- common/autotest_common.sh@905 -- # return 0 00:15:03.699 21:37:23 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:15:03.699 21:37:23 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:15:03.699 21:37:24 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:15:03.699 21:37:24 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:03.699 21:37:24 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:15:03.699 21:37:24 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:15:03.699 21:37:24 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:15:03.699 21:37:24 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:15:03.699 21:37:24 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:03.699 21:37:24 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:03.699 21:37:24 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:03.699 21:37:24 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:03.699 21:37:24 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:03.699 21:37:24 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:03.959 21:37:24 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:03.959 "name": "Existed_Raid", 00:15:03.959 "uuid": "79e83d61-021c-4cd7-9464-b1096150223a", 00:15:03.959 "strip_size_kb": 0, 00:15:03.959 "state": "online", 00:15:03.959 "raid_level": "raid1", 00:15:03.959 "superblock": false, 00:15:03.959 "num_base_bdevs": 2, 00:15:03.959 
"num_base_bdevs_discovered": 2, 00:15:03.959 "num_base_bdevs_operational": 2, 00:15:03.959 "base_bdevs_list": [ 00:15:03.959 { 00:15:03.959 "name": "BaseBdev1", 00:15:03.959 "uuid": "9ddbb4bb-4599-4b94-8723-ca83d03f796e", 00:15:03.959 "is_configured": true, 00:15:03.959 "data_offset": 0, 00:15:03.959 "data_size": 65536 00:15:03.959 }, 00:15:03.959 { 00:15:03.959 "name": "BaseBdev2", 00:15:03.959 "uuid": "918a6a9f-f2f7-4e10-a85d-36aba137c032", 00:15:03.959 "is_configured": true, 00:15:03.959 "data_offset": 0, 00:15:03.959 "data_size": 65536 00:15:03.959 } 00:15:03.959 ] 00:15:03.959 }' 00:15:03.959 21:37:24 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:03.959 21:37:24 -- common/autotest_common.sh@10 -- # set +x 00:15:04.218 21:37:24 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:15:04.477 [2024-12-06 21:37:24.731450] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:04.477 21:37:24 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:15:04.477 21:37:24 -- bdev/bdev_raid.sh@264 -- # has_redundancy raid1 00:15:04.477 21:37:24 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:15:04.477 21:37:24 -- bdev/bdev_raid.sh@196 -- # return 0 00:15:04.477 21:37:24 -- bdev/bdev_raid.sh@267 -- # expected_state=online 00:15:04.477 21:37:24 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:15:04.477 21:37:24 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:04.477 21:37:24 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:15:04.477 21:37:24 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:15:04.477 21:37:24 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:15:04.477 21:37:24 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:15:04.477 21:37:24 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:04.477 21:37:24 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:04.477 21:37:24 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:04.477 21:37:24 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:04.477 21:37:24 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:04.477 21:37:24 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:04.736 21:37:25 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:04.736 "name": "Existed_Raid", 00:15:04.736 "uuid": "79e83d61-021c-4cd7-9464-b1096150223a", 00:15:04.736 "strip_size_kb": 0, 00:15:04.736 "state": "online", 00:15:04.736 "raid_level": "raid1", 00:15:04.736 "superblock": false, 00:15:04.736 "num_base_bdevs": 2, 00:15:04.736 "num_base_bdevs_discovered": 1, 00:15:04.736 "num_base_bdevs_operational": 1, 00:15:04.736 "base_bdevs_list": [ 00:15:04.736 { 00:15:04.736 "name": null, 00:15:04.736 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:04.736 "is_configured": false, 00:15:04.736 "data_offset": 0, 00:15:04.736 "data_size": 65536 00:15:04.736 }, 00:15:04.736 { 00:15:04.736 "name": "BaseBdev2", 00:15:04.736 "uuid": "918a6a9f-f2f7-4e10-a85d-36aba137c032", 00:15:04.736 "is_configured": true, 00:15:04.736 "data_offset": 0, 00:15:04.736 "data_size": 65536 00:15:04.736 } 00:15:04.736 ] 00:15:04.736 }' 00:15:04.736 21:37:25 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:04.736 21:37:25 -- common/autotest_common.sh@10 -- # set +x 00:15:04.994 21:37:25 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:15:04.994 21:37:25 -- bdev/bdev_raid.sh@273 -- # 
(( i < num_base_bdevs )) 00:15:04.994 21:37:25 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:15:04.994 21:37:25 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:05.250 21:37:25 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:15:05.250 21:37:25 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:05.250 21:37:25 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:15:05.507 [2024-12-06 21:37:25.779031] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:05.507 [2024-12-06 21:37:25.779074] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:05.507 [2024-12-06 21:37:25.779169] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:05.507 [2024-12-06 21:37:25.855687] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:05.507 [2024-12-06 21:37:25.855731] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000006f80 name Existed_Raid, state offline 00:15:05.507 21:37:25 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:15:05.507 21:37:25 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:15:05.507 21:37:25 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:05.507 21:37:25 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:15:05.765 21:37:26 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:15:05.765 21:37:26 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:15:05.765 21:37:26 -- bdev/bdev_raid.sh@287 -- # killprocess 70328 00:15:05.765 21:37:26 -- common/autotest_common.sh@936 -- # '[' -z 70328 ']' 00:15:05.765 21:37:26 -- common/autotest_common.sh@940 -- # kill -0 70328 00:15:05.765 21:37:26 -- common/autotest_common.sh@941 -- # uname 00:15:05.765 21:37:26 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:05.765 21:37:26 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 70328 00:15:05.765 killing process with pid 70328 00:15:05.765 21:37:26 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:15:05.765 21:37:26 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:15:05.765 21:37:26 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 70328' 00:15:05.765 21:37:26 -- common/autotest_common.sh@955 -- # kill 70328 00:15:05.765 21:37:26 -- common/autotest_common.sh@960 -- # wait 70328 00:15:05.765 [2024-12-06 21:37:26.106973] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:05.765 [2024-12-06 21:37:26.107151] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:06.697 ************************************ 00:15:06.697 END TEST raid_state_function_test 00:15:06.697 ************************************ 00:15:06.697 21:37:27 -- bdev/bdev_raid.sh@289 -- # return 0 00:15:06.697 00:15:06.697 real 0m8.059s 00:15:06.697 user 0m13.017s 00:15:06.697 sys 0m1.196s 00:15:06.697 21:37:27 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:15:06.697 21:37:27 -- common/autotest_common.sh@10 -- # set +x 00:15:06.955 21:37:27 -- bdev/bdev_raid.sh@728 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 2 true 00:15:06.955 21:37:27 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:15:06.955 21:37:27 -- 
common/autotest_common.sh@1093 -- # xtrace_disable 00:15:06.955 21:37:27 -- common/autotest_common.sh@10 -- # set +x 00:15:06.955 ************************************ 00:15:06.955 START TEST raid_state_function_test_sb 00:15:06.955 ************************************ 00:15:06.955 21:37:27 -- common/autotest_common.sh@1114 -- # raid_state_function_test raid1 2 true 00:15:06.955 21:37:27 -- bdev/bdev_raid.sh@202 -- # local raid_level=raid1 00:15:06.955 21:37:27 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=2 00:15:06.955 21:37:27 -- bdev/bdev_raid.sh@204 -- # local superblock=true 00:15:06.955 21:37:27 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:15:06.955 21:37:27 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:15:06.955 21:37:27 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:15:06.955 21:37:27 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev1 00:15:06.955 21:37:27 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:15:06.955 21:37:27 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:15:06.955 21:37:27 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev2 00:15:06.955 21:37:27 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:15:06.955 21:37:27 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:15:06.955 21:37:27 -- bdev/bdev_raid.sh@206 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:15:06.955 21:37:27 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:15:06.955 21:37:27 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:15:06.955 21:37:27 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:15:06.955 21:37:27 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:15:06.955 21:37:27 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:15:06.955 21:37:27 -- bdev/bdev_raid.sh@212 -- # '[' raid1 '!=' raid1 ']' 00:15:06.955 21:37:27 -- bdev/bdev_raid.sh@216 -- # strip_size=0 00:15:06.955 21:37:27 -- bdev/bdev_raid.sh@219 -- # '[' true = true ']' 00:15:06.955 21:37:27 -- bdev/bdev_raid.sh@220 -- # superblock_create_arg=-s 00:15:06.955 21:37:27 -- bdev/bdev_raid.sh@226 -- # raid_pid=70609 00:15:06.955 Process raid pid: 70609 00:15:06.955 21:37:27 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 70609' 00:15:06.955 21:37:27 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:15:06.955 21:37:27 -- bdev/bdev_raid.sh@228 -- # waitforlisten 70609 /var/tmp/spdk-raid.sock 00:15:06.955 21:37:27 -- common/autotest_common.sh@829 -- # '[' -z 70609 ']' 00:15:06.955 21:37:27 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:15:06.955 21:37:27 -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:06.955 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:15:06.955 21:37:27 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:15:06.955 21:37:27 -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:06.955 21:37:27 -- common/autotest_common.sh@10 -- # set +x 00:15:06.955 [2024-12-06 21:37:27.283117] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
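For reference, the flow traced above can be reproduced by hand: the harness starts a bare bdev_svc app with raid debug logging on a private RPC socket and then drives it with rpc.py. This is a minimal sketch of the same steps, assuming the vagrant spdk_repo layout shown in the log; note the raid is created with -s (superblock) before its base bdevs exist, which is why it first comes up in the "configuring" state:

  # start the app on the test's private RPC socket with raid debug logs
  /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid &
  # create the raid first; BaseBdev1/BaseBdev2 do not exist yet
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid
  # back it with a 32 MiB, 512 B-block malloc bdev (65536 blocks, matching the dumps above)
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1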
00:15:06.955 [2024-12-06 21:37:27.283254] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:06.955 [2024-12-06 21:37:27.438284] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:07.213 [2024-12-06 21:37:27.618131] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:07.471 [2024-12-06 21:37:27.784350] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:08.037 21:37:28 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:08.037 21:37:28 -- common/autotest_common.sh@862 -- # return 0 00:15:08.037 21:37:28 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:15:08.037 [2024-12-06 21:37:28.413475] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:08.037 [2024-12-06 21:37:28.413563] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:08.037 [2024-12-06 21:37:28.413596] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:08.037 [2024-12-06 21:37:28.413610] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:08.037 21:37:28 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:15:08.037 21:37:28 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:08.037 21:37:28 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:15:08.037 21:37:28 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:15:08.037 21:37:28 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:15:08.037 21:37:28 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:15:08.037 21:37:28 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:08.037 21:37:28 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:08.037 21:37:28 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:08.037 21:37:28 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:08.037 21:37:28 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:08.037 21:37:28 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:08.295 21:37:28 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:08.295 "name": "Existed_Raid", 00:15:08.295 "uuid": "93817f19-a14e-4a51-9784-7b2e5b8ed5b8", 00:15:08.295 "strip_size_kb": 0, 00:15:08.295 "state": "configuring", 00:15:08.295 "raid_level": "raid1", 00:15:08.295 "superblock": true, 00:15:08.295 "num_base_bdevs": 2, 00:15:08.295 "num_base_bdevs_discovered": 0, 00:15:08.295 "num_base_bdevs_operational": 2, 00:15:08.295 "base_bdevs_list": [ 00:15:08.295 { 00:15:08.295 "name": "BaseBdev1", 00:15:08.295 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:08.295 "is_configured": false, 00:15:08.295 "data_offset": 0, 00:15:08.295 "data_size": 0 00:15:08.295 }, 00:15:08.295 { 00:15:08.295 "name": "BaseBdev2", 00:15:08.295 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:08.295 "is_configured": false, 00:15:08.295 "data_offset": 0, 00:15:08.295 "data_size": 0 00:15:08.295 } 00:15:08.295 ] 00:15:08.295 }' 00:15:08.295 21:37:28 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:08.295 21:37:28 -- 
common/autotest_common.sh@10 -- # set +x 00:15:08.553 21:37:28 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:15:08.811 [2024-12-06 21:37:29.145616] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:08.811 [2024-12-06 21:37:29.145686] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000006380 name Existed_Raid, state configuring 00:15:08.811 21:37:29 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:15:09.069 [2024-12-06 21:37:29.365697] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:09.069 [2024-12-06 21:37:29.365772] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:09.069 [2024-12-06 21:37:29.365812] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:09.069 [2024-12-06 21:37:29.365828] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:09.069 21:37:29 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:15:09.326 [2024-12-06 21:37:29.594276] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:09.326 BaseBdev1 00:15:09.326 21:37:29 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:15:09.326 21:37:29 -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:15:09.326 21:37:29 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:15:09.326 21:37:29 -- common/autotest_common.sh@899 -- # local i 00:15:09.326 21:37:29 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:15:09.326 21:37:29 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:15:09.326 21:37:29 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:09.584 21:37:29 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:09.584 [ 00:15:09.584 { 00:15:09.584 "name": "BaseBdev1", 00:15:09.584 "aliases": [ 00:15:09.584 "3cba5ce6-7948-462b-8c1d-eafd36dddf93" 00:15:09.584 ], 00:15:09.584 "product_name": "Malloc disk", 00:15:09.584 "block_size": 512, 00:15:09.584 "num_blocks": 65536, 00:15:09.584 "uuid": "3cba5ce6-7948-462b-8c1d-eafd36dddf93", 00:15:09.584 "assigned_rate_limits": { 00:15:09.584 "rw_ios_per_sec": 0, 00:15:09.584 "rw_mbytes_per_sec": 0, 00:15:09.584 "r_mbytes_per_sec": 0, 00:15:09.584 "w_mbytes_per_sec": 0 00:15:09.584 }, 00:15:09.584 "claimed": true, 00:15:09.584 "claim_type": "exclusive_write", 00:15:09.584 "zoned": false, 00:15:09.584 "supported_io_types": { 00:15:09.584 "read": true, 00:15:09.584 "write": true, 00:15:09.584 "unmap": true, 00:15:09.584 "write_zeroes": true, 00:15:09.584 "flush": true, 00:15:09.584 "reset": true, 00:15:09.584 "compare": false, 00:15:09.584 "compare_and_write": false, 00:15:09.584 "abort": true, 00:15:09.584 "nvme_admin": false, 00:15:09.584 "nvme_io": false 00:15:09.584 }, 00:15:09.584 "memory_domains": [ 00:15:09.584 { 00:15:09.584 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:09.584 "dma_device_type": 2 00:15:09.584 } 00:15:09.584 ], 00:15:09.584 "driver_specific": {} 00:15:09.584 } 00:15:09.584 ] 00:15:09.584 21:37:30 -- 
common/autotest_common.sh@905 -- # return 0 00:15:09.584 21:37:30 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:15:09.584 21:37:30 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:09.584 21:37:30 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:15:09.584 21:37:30 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:15:09.584 21:37:30 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:15:09.584 21:37:30 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:15:09.584 21:37:30 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:09.584 21:37:30 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:09.584 21:37:30 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:09.584 21:37:30 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:09.584 21:37:30 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:09.584 21:37:30 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:09.842 21:37:30 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:09.843 "name": "Existed_Raid", 00:15:09.843 "uuid": "efefbc29-143e-4489-92c6-d5e9cc8070dc", 00:15:09.843 "strip_size_kb": 0, 00:15:09.843 "state": "configuring", 00:15:09.843 "raid_level": "raid1", 00:15:09.843 "superblock": true, 00:15:09.843 "num_base_bdevs": 2, 00:15:09.843 "num_base_bdevs_discovered": 1, 00:15:09.843 "num_base_bdevs_operational": 2, 00:15:09.843 "base_bdevs_list": [ 00:15:09.843 { 00:15:09.843 "name": "BaseBdev1", 00:15:09.843 "uuid": "3cba5ce6-7948-462b-8c1d-eafd36dddf93", 00:15:09.843 "is_configured": true, 00:15:09.843 "data_offset": 2048, 00:15:09.843 "data_size": 63488 00:15:09.843 }, 00:15:09.843 { 00:15:09.843 "name": "BaseBdev2", 00:15:09.843 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:09.843 "is_configured": false, 00:15:09.843 "data_offset": 0, 00:15:09.843 "data_size": 0 00:15:09.843 } 00:15:09.843 ] 00:15:09.843 }' 00:15:09.843 21:37:30 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:09.843 21:37:30 -- common/autotest_common.sh@10 -- # set +x 00:15:10.101 21:37:30 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:15:10.360 [2024-12-06 21:37:30.763116] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:10.360 [2024-12-06 21:37:30.763190] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000006680 name Existed_Raid, state configuring 00:15:10.360 21:37:30 -- bdev/bdev_raid.sh@244 -- # '[' true = true ']' 00:15:10.360 21:37:30 -- bdev/bdev_raid.sh@246 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:15:10.619 21:37:31 -- bdev/bdev_raid.sh@247 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:15:10.877 BaseBdev1 00:15:10.877 21:37:31 -- bdev/bdev_raid.sh@248 -- # waitforbdev BaseBdev1 00:15:10.877 21:37:31 -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:15:10.877 21:37:31 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:15:10.877 21:37:31 -- common/autotest_common.sh@899 -- # local i 00:15:10.877 21:37:31 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:15:10.877 21:37:31 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:15:10.878 21:37:31 -- common/autotest_common.sh@902 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:11.137 21:37:31 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:11.396 [ 00:15:11.396 { 00:15:11.396 "name": "BaseBdev1", 00:15:11.396 "aliases": [ 00:15:11.396 "6fe62503-9770-4eb9-966b-a7a90f644e57" 00:15:11.396 ], 00:15:11.396 "product_name": "Malloc disk", 00:15:11.396 "block_size": 512, 00:15:11.396 "num_blocks": 65536, 00:15:11.396 "uuid": "6fe62503-9770-4eb9-966b-a7a90f644e57", 00:15:11.396 "assigned_rate_limits": { 00:15:11.396 "rw_ios_per_sec": 0, 00:15:11.396 "rw_mbytes_per_sec": 0, 00:15:11.396 "r_mbytes_per_sec": 0, 00:15:11.396 "w_mbytes_per_sec": 0 00:15:11.396 }, 00:15:11.396 "claimed": false, 00:15:11.396 "zoned": false, 00:15:11.396 "supported_io_types": { 00:15:11.396 "read": true, 00:15:11.396 "write": true, 00:15:11.396 "unmap": true, 00:15:11.396 "write_zeroes": true, 00:15:11.396 "flush": true, 00:15:11.396 "reset": true, 00:15:11.396 "compare": false, 00:15:11.396 "compare_and_write": false, 00:15:11.396 "abort": true, 00:15:11.396 "nvme_admin": false, 00:15:11.396 "nvme_io": false 00:15:11.396 }, 00:15:11.396 "memory_domains": [ 00:15:11.396 { 00:15:11.396 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:11.396 "dma_device_type": 2 00:15:11.396 } 00:15:11.396 ], 00:15:11.396 "driver_specific": {} 00:15:11.396 } 00:15:11.396 ] 00:15:11.396 21:37:31 -- common/autotest_common.sh@905 -- # return 0 00:15:11.396 21:37:31 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:15:11.665 [2024-12-06 21:37:31.926328] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:11.665 [2024-12-06 21:37:31.928443] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:11.665 [2024-12-06 21:37:31.928521] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:11.665 21:37:31 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:15:11.665 21:37:31 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:15:11.665 21:37:31 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:15:11.665 21:37:31 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:11.665 21:37:31 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:15:11.665 21:37:31 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:15:11.665 21:37:31 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:15:11.665 21:37:31 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:15:11.665 21:37:31 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:11.665 21:37:31 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:11.665 21:37:31 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:11.665 21:37:31 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:11.666 21:37:31 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:11.666 21:37:31 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:11.666 21:37:32 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:11.666 "name": "Existed_Raid", 00:15:11.666 "uuid": "bb4b0435-1a2e-4180-8b46-ef0d27c8d96a", 00:15:11.666 "strip_size_kb": 0, 00:15:11.666 "state": "configuring", 
00:15:11.666 "raid_level": "raid1", 00:15:11.666 "superblock": true, 00:15:11.666 "num_base_bdevs": 2, 00:15:11.666 "num_base_bdevs_discovered": 1, 00:15:11.666 "num_base_bdevs_operational": 2, 00:15:11.666 "base_bdevs_list": [ 00:15:11.666 { 00:15:11.666 "name": "BaseBdev1", 00:15:11.666 "uuid": "6fe62503-9770-4eb9-966b-a7a90f644e57", 00:15:11.666 "is_configured": true, 00:15:11.666 "data_offset": 2048, 00:15:11.666 "data_size": 63488 00:15:11.666 }, 00:15:11.666 { 00:15:11.666 "name": "BaseBdev2", 00:15:11.666 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:11.666 "is_configured": false, 00:15:11.666 "data_offset": 0, 00:15:11.666 "data_size": 0 00:15:11.666 } 00:15:11.666 ] 00:15:11.666 }' 00:15:11.666 21:37:32 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:11.666 21:37:32 -- common/autotest_common.sh@10 -- # set +x 00:15:12.262 21:37:32 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:15:12.262 [2024-12-06 21:37:32.679687] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:12.262 [2024-12-06 21:37:32.679958] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x516000007580 00:15:12.262 [2024-12-06 21:37:32.679977] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:15:12.262 [2024-12-06 21:37:32.680137] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d0000056c0 00:15:12.262 [2024-12-06 21:37:32.680529] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x516000007580 00:15:12.262 [2024-12-06 21:37:32.680563] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x516000007580 00:15:12.262 [2024-12-06 21:37:32.680725] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:12.262 BaseBdev2 00:15:12.262 21:37:32 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:15:12.262 21:37:32 -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:15:12.262 21:37:32 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:15:12.262 21:37:32 -- common/autotest_common.sh@899 -- # local i 00:15:12.262 21:37:32 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:15:12.262 21:37:32 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:15:12.262 21:37:32 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:12.521 21:37:32 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:15:12.780 [ 00:15:12.780 { 00:15:12.780 "name": "BaseBdev2", 00:15:12.780 "aliases": [ 00:15:12.780 "38a0a61e-8c40-45da-926b-a205fcdcfd4a" 00:15:12.780 ], 00:15:12.780 "product_name": "Malloc disk", 00:15:12.780 "block_size": 512, 00:15:12.780 "num_blocks": 65536, 00:15:12.780 "uuid": "38a0a61e-8c40-45da-926b-a205fcdcfd4a", 00:15:12.780 "assigned_rate_limits": { 00:15:12.780 "rw_ios_per_sec": 0, 00:15:12.780 "rw_mbytes_per_sec": 0, 00:15:12.780 "r_mbytes_per_sec": 0, 00:15:12.780 "w_mbytes_per_sec": 0 00:15:12.780 }, 00:15:12.780 "claimed": true, 00:15:12.780 "claim_type": "exclusive_write", 00:15:12.780 "zoned": false, 00:15:12.780 "supported_io_types": { 00:15:12.780 "read": true, 00:15:12.780 "write": true, 00:15:12.780 "unmap": true, 00:15:12.780 "write_zeroes": true, 00:15:12.780 "flush": true, 00:15:12.780 "reset": true, 
00:15:12.780 "compare": false, 00:15:12.780 "compare_and_write": false, 00:15:12.780 "abort": true, 00:15:12.780 "nvme_admin": false, 00:15:12.780 "nvme_io": false 00:15:12.780 }, 00:15:12.780 "memory_domains": [ 00:15:12.780 { 00:15:12.780 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:12.780 "dma_device_type": 2 00:15:12.780 } 00:15:12.780 ], 00:15:12.780 "driver_specific": {} 00:15:12.780 } 00:15:12.780 ] 00:15:12.780 21:37:33 -- common/autotest_common.sh@905 -- # return 0 00:15:12.780 21:37:33 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:15:12.780 21:37:33 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:15:12.780 21:37:33 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:15:12.780 21:37:33 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:12.780 21:37:33 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:15:12.780 21:37:33 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:15:12.780 21:37:33 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:15:12.780 21:37:33 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:15:12.780 21:37:33 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:12.780 21:37:33 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:12.780 21:37:33 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:12.780 21:37:33 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:12.780 21:37:33 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:12.780 21:37:33 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:13.039 21:37:33 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:13.039 "name": "Existed_Raid", 00:15:13.039 "uuid": "bb4b0435-1a2e-4180-8b46-ef0d27c8d96a", 00:15:13.039 "strip_size_kb": 0, 00:15:13.039 "state": "online", 00:15:13.039 "raid_level": "raid1", 00:15:13.039 "superblock": true, 00:15:13.039 "num_base_bdevs": 2, 00:15:13.039 "num_base_bdevs_discovered": 2, 00:15:13.039 "num_base_bdevs_operational": 2, 00:15:13.039 "base_bdevs_list": [ 00:15:13.039 { 00:15:13.039 "name": "BaseBdev1", 00:15:13.039 "uuid": "6fe62503-9770-4eb9-966b-a7a90f644e57", 00:15:13.039 "is_configured": true, 00:15:13.039 "data_offset": 2048, 00:15:13.039 "data_size": 63488 00:15:13.039 }, 00:15:13.039 { 00:15:13.039 "name": "BaseBdev2", 00:15:13.039 "uuid": "38a0a61e-8c40-45da-926b-a205fcdcfd4a", 00:15:13.039 "is_configured": true, 00:15:13.039 "data_offset": 2048, 00:15:13.039 "data_size": 63488 00:15:13.039 } 00:15:13.039 ] 00:15:13.039 }' 00:15:13.039 21:37:33 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:13.039 21:37:33 -- common/autotest_common.sh@10 -- # set +x 00:15:13.298 21:37:33 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:15:13.556 [2024-12-06 21:37:33.800204] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:13.556 21:37:33 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:15:13.556 21:37:33 -- bdev/bdev_raid.sh@264 -- # has_redundancy raid1 00:15:13.556 21:37:33 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:15:13.556 21:37:33 -- bdev/bdev_raid.sh@196 -- # return 0 00:15:13.556 21:37:33 -- bdev/bdev_raid.sh@267 -- # expected_state=online 00:15:13.556 21:37:33 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:15:13.556 21:37:33 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:13.557 
21:37:33 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:15:13.557 21:37:33 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:15:13.557 21:37:33 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:15:13.557 21:37:33 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:15:13.557 21:37:33 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:13.557 21:37:33 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:13.557 21:37:33 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:13.557 21:37:33 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:13.557 21:37:33 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:13.557 21:37:33 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:13.815 21:37:34 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:13.815 "name": "Existed_Raid", 00:15:13.815 "uuid": "bb4b0435-1a2e-4180-8b46-ef0d27c8d96a", 00:15:13.815 "strip_size_kb": 0, 00:15:13.815 "state": "online", 00:15:13.815 "raid_level": "raid1", 00:15:13.815 "superblock": true, 00:15:13.815 "num_base_bdevs": 2, 00:15:13.815 "num_base_bdevs_discovered": 1, 00:15:13.815 "num_base_bdevs_operational": 1, 00:15:13.815 "base_bdevs_list": [ 00:15:13.815 { 00:15:13.815 "name": null, 00:15:13.815 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:13.815 "is_configured": false, 00:15:13.815 "data_offset": 2048, 00:15:13.815 "data_size": 63488 00:15:13.815 }, 00:15:13.815 { 00:15:13.815 "name": "BaseBdev2", 00:15:13.815 "uuid": "38a0a61e-8c40-45da-926b-a205fcdcfd4a", 00:15:13.815 "is_configured": true, 00:15:13.815 "data_offset": 2048, 00:15:13.815 "data_size": 63488 00:15:13.815 } 00:15:13.815 ] 00:15:13.815 }' 00:15:13.815 21:37:34 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:13.815 21:37:34 -- common/autotest_common.sh@10 -- # set +x 00:15:14.074 21:37:34 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:15:14.074 21:37:34 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:15:14.074 21:37:34 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:14.074 21:37:34 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:15:14.333 21:37:34 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:15:14.333 21:37:34 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:14.333 21:37:34 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:15:14.333 [2024-12-06 21:37:34.820461] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:14.333 [2024-12-06 21:37:34.820822] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:14.333 [2024-12-06 21:37:34.820904] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:14.592 [2024-12-06 21:37:34.892815] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:14.592 [2024-12-06 21:37:34.892857] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000007580 name Existed_Raid, state offline 00:15:14.592 21:37:34 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:15:14.592 21:37:34 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:15:14.592 21:37:34 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 
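The verify_raid_bdev_state helper whose locals are being set above boils down to polling bdev_raid_get_bdevs over the RPC socket and filtering the JSON with jq; both the command and the filter are the ones traced at bdev_raid.sh line 127. A hand-run equivalent of the check:

  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all \
    | jq -r '.[] | select(.name == "Existed_Raid") | .state, .num_base_bdevs_discovered'

After BaseBdev1 is deleted out from under the array, this prints "online" and 1, matching the raid_bdev_info blob dumped below: raid1 has redundancy, so the raid stays online on a single surviving member.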
00:15:14.592 21:37:34 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:15:14.850 21:37:35 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:15:14.850 21:37:35 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:15:14.850 21:37:35 -- bdev/bdev_raid.sh@287 -- # killprocess 70609 00:15:14.850 21:37:35 -- common/autotest_common.sh@936 -- # '[' -z 70609 ']' 00:15:14.850 21:37:35 -- common/autotest_common.sh@940 -- # kill -0 70609 00:15:14.850 21:37:35 -- common/autotest_common.sh@941 -- # uname 00:15:14.850 21:37:35 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:14.850 21:37:35 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 70609 00:15:14.850 killing process with pid 70609 00:15:14.850 21:37:35 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:15:14.850 21:37:35 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:15:14.850 21:37:35 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 70609' 00:15:14.850 21:37:35 -- common/autotest_common.sh@955 -- # kill 70609 00:15:14.850 [2024-12-06 21:37:35.195819] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:14.850 21:37:35 -- common/autotest_common.sh@960 -- # wait 70609 00:15:14.850 [2024-12-06 21:37:35.195960] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:15.782 21:37:36 -- bdev/bdev_raid.sh@289 -- # return 0 00:15:15.782 00:15:15.782 real 0m9.037s 00:15:15.782 user 0m14.731s 00:15:15.782 sys 0m1.307s 00:15:15.782 21:37:36 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:15:15.782 21:37:36 -- common/autotest_common.sh@10 -- # set +x 00:15:15.782 ************************************ 00:15:15.782 END TEST raid_state_function_test_sb 00:15:15.782 ************************************ 00:15:16.041 21:37:36 -- bdev/bdev_raid.sh@729 -- # run_test raid_superblock_test raid_superblock_test raid1 2 00:15:16.041 21:37:36 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:15:16.041 21:37:36 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:16.041 21:37:36 -- common/autotest_common.sh@10 -- # set +x 00:15:16.041 ************************************ 00:15:16.041 START TEST raid_superblock_test 00:15:16.041 ************************************ 00:15:16.041 21:37:36 -- common/autotest_common.sh@1114 -- # raid_superblock_test raid1 2 00:15:16.041 21:37:36 -- bdev/bdev_raid.sh@338 -- # local raid_level=raid1 00:15:16.041 21:37:36 -- bdev/bdev_raid.sh@339 -- # local num_base_bdevs=2 00:15:16.041 21:37:36 -- bdev/bdev_raid.sh@340 -- # base_bdevs_malloc=() 00:15:16.041 21:37:36 -- bdev/bdev_raid.sh@340 -- # local base_bdevs_malloc 00:15:16.041 21:37:36 -- bdev/bdev_raid.sh@341 -- # base_bdevs_pt=() 00:15:16.041 21:37:36 -- bdev/bdev_raid.sh@341 -- # local base_bdevs_pt 00:15:16.041 21:37:36 -- bdev/bdev_raid.sh@342 -- # base_bdevs_pt_uuid=() 00:15:16.041 21:37:36 -- bdev/bdev_raid.sh@342 -- # local base_bdevs_pt_uuid 00:15:16.041 21:37:36 -- bdev/bdev_raid.sh@343 -- # local raid_bdev_name=raid_bdev1 00:15:16.041 21:37:36 -- bdev/bdev_raid.sh@344 -- # local strip_size 00:15:16.041 21:37:36 -- bdev/bdev_raid.sh@345 -- # local strip_size_create_arg 00:15:16.041 21:37:36 -- bdev/bdev_raid.sh@346 -- # local raid_bdev_uuid 00:15:16.041 21:37:36 -- bdev/bdev_raid.sh@347 -- # local raid_bdev 00:15:16.041 21:37:36 -- bdev/bdev_raid.sh@349 -- # '[' raid1 '!=' raid1 ']' 00:15:16.041 21:37:36 -- bdev/bdev_raid.sh@353 -- # strip_size=0 00:15:16.041 21:37:36 -- bdev/bdev_raid.sh@357 -- # raid_pid=70898 00:15:16.041 21:37:36 -- 
bdev/bdev_raid.sh@358 -- # waitforlisten 70898 /var/tmp/spdk-raid.sock 00:15:16.041 21:37:36 -- bdev/bdev_raid.sh@356 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:15:16.041 21:37:36 -- common/autotest_common.sh@829 -- # '[' -z 70898 ']' 00:15:16.041 21:37:36 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:15:16.041 21:37:36 -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:16.041 21:37:36 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:15:16.041 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:15:16.041 21:37:36 -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:16.041 21:37:36 -- common/autotest_common.sh@10 -- # set +x 00:15:16.041 [2024-12-06 21:37:36.383149] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:15:16.041 [2024-12-06 21:37:36.383513] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70898 ] 00:15:16.300 [2024-12-06 21:37:36.557915] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:16.300 [2024-12-06 21:37:36.793395] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:16.560 [2024-12-06 21:37:36.961964] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:17.129 21:37:37 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:17.129 21:37:37 -- common/autotest_common.sh@862 -- # return 0 00:15:17.129 21:37:37 -- bdev/bdev_raid.sh@361 -- # (( i = 1 )) 00:15:17.129 21:37:37 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:15:17.129 21:37:37 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc1 00:15:17.129 21:37:37 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt1 00:15:17.129 21:37:37 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:15:17.129 21:37:37 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:15:17.129 21:37:37 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:15:17.129 21:37:37 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:17.129 21:37:37 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:15:17.129 malloc1 00:15:17.129 21:37:37 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:15:17.388 [2024-12-06 21:37:37.750031] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:15:17.388 [2024-12-06 21:37:37.750134] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:17.388 [2024-12-06 21:37:37.750175] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000006980 00:15:17.388 [2024-12-06 21:37:37.750190] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:17.388 [2024-12-06 21:37:37.752741] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:17.388 [2024-12-06 21:37:37.752785] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:15:17.388 pt1 00:15:17.388 21:37:37 
-- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:15:17.388 21:37:37 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:15:17.388 21:37:37 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc2 00:15:17.388 21:37:37 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt2 00:15:17.388 21:37:37 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:15:17.388 21:37:37 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:15:17.388 21:37:37 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:15:17.388 21:37:37 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:17.388 21:37:37 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:15:17.647 malloc2 00:15:17.647 21:37:38 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:17.907 [2024-12-06 21:37:38.216612] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:17.907 [2024-12-06 21:37:38.216884] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:17.907 [2024-12-06 21:37:38.216961] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000007580 00:15:17.907 [2024-12-06 21:37:38.217080] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:17.907 [2024-12-06 21:37:38.219434] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:17.907 [2024-12-06 21:37:38.219659] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:17.907 pt2 00:15:17.907 21:37:38 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:15:17.907 21:37:38 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:15:17.907 21:37:38 -- bdev/bdev_raid.sh@375 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'pt1 pt2' -n raid_bdev1 -s 00:15:18.166 [2024-12-06 21:37:38.464756] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:15:18.166 [2024-12-06 21:37:38.466839] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:18.166 [2024-12-06 21:37:38.467039] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x516000007b80 00:15:18.166 [2024-12-06 21:37:38.467056] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:15:18.166 [2024-12-06 21:37:38.467174] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d0000055f0 00:15:18.166 [2024-12-06 21:37:38.467586] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x516000007b80 00:15:18.166 [2024-12-06 21:37:38.467606] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x516000007b80 00:15:18.166 [2024-12-06 21:37:38.467809] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:18.166 21:37:38 -- bdev/bdev_raid.sh@376 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:15:18.166 21:37:38 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:15:18.166 21:37:38 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:15:18.166 21:37:38 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:15:18.166 21:37:38 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:15:18.166 21:37:38 -- bdev/bdev_raid.sh@121 -- # local 
num_base_bdevs_operational=2 00:15:18.166 21:37:38 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:18.166 21:37:38 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:18.166 21:37:38 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:18.166 21:37:38 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:18.166 21:37:38 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:18.166 21:37:38 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:18.426 21:37:38 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:18.426 "name": "raid_bdev1", 00:15:18.426 "uuid": "b44fcaa1-b682-4965-9c37-e9eed0df3dbf", 00:15:18.426 "strip_size_kb": 0, 00:15:18.426 "state": "online", 00:15:18.426 "raid_level": "raid1", 00:15:18.426 "superblock": true, 00:15:18.426 "num_base_bdevs": 2, 00:15:18.426 "num_base_bdevs_discovered": 2, 00:15:18.426 "num_base_bdevs_operational": 2, 00:15:18.426 "base_bdevs_list": [ 00:15:18.426 { 00:15:18.426 "name": "pt1", 00:15:18.426 "uuid": "b0ff18b0-bea1-5cc1-8cfa-7045172c9bda", 00:15:18.426 "is_configured": true, 00:15:18.426 "data_offset": 2048, 00:15:18.426 "data_size": 63488 00:15:18.426 }, 00:15:18.426 { 00:15:18.426 "name": "pt2", 00:15:18.426 "uuid": "0ee9e406-18f0-54e8-a9e8-2c230dfafed8", 00:15:18.426 "is_configured": true, 00:15:18.426 "data_offset": 2048, 00:15:18.426 "data_size": 63488 00:15:18.426 } 00:15:18.426 ] 00:15:18.426 }' 00:15:18.426 21:37:38 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:18.426 21:37:38 -- common/autotest_common.sh@10 -- # set +x 00:15:18.684 21:37:38 -- bdev/bdev_raid.sh@379 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:15:18.684 21:37:38 -- bdev/bdev_raid.sh@379 -- # jq -r '.[] | .uuid' 00:15:18.944 [2024-12-06 21:37:39.189093] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:18.944 21:37:39 -- bdev/bdev_raid.sh@379 -- # raid_bdev_uuid=b44fcaa1-b682-4965-9c37-e9eed0df3dbf 00:15:18.944 21:37:39 -- bdev/bdev_raid.sh@380 -- # '[' -z b44fcaa1-b682-4965-9c37-e9eed0df3dbf ']' 00:15:18.944 21:37:39 -- bdev/bdev_raid.sh@385 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:15:18.944 [2024-12-06 21:37:39.392892] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:18.944 [2024-12-06 21:37:39.392925] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:18.944 [2024-12-06 21:37:39.393000] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:18.944 [2024-12-06 21:37:39.393066] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:18.944 [2024-12-06 21:37:39.393080] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000007b80 name raid_bdev1, state offline 00:15:18.944 21:37:39 -- bdev/bdev_raid.sh@386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:18.944 21:37:39 -- bdev/bdev_raid.sh@386 -- # jq -r '.[]' 00:15:19.203 21:37:39 -- bdev/bdev_raid.sh@386 -- # raid_bdev= 00:15:19.203 21:37:39 -- bdev/bdev_raid.sh@387 -- # '[' -n '' ']' 00:15:19.203 21:37:39 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:15:19.203 21:37:39 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_passthru_delete pt1 00:15:19.471 21:37:39 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:15:19.471 21:37:39 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:15:19.730 21:37:40 -- bdev/bdev_raid.sh@395 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:15:19.730 21:37:40 -- bdev/bdev_raid.sh@395 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:15:19.995 21:37:40 -- bdev/bdev_raid.sh@395 -- # '[' false == true ']' 00:15:19.995 21:37:40 -- bdev/bdev_raid.sh@401 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:15:19.995 21:37:40 -- common/autotest_common.sh@650 -- # local es=0 00:15:19.995 21:37:40 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:15:19.995 21:37:40 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:19.995 21:37:40 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:19.995 21:37:40 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:19.996 21:37:40 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:19.996 21:37:40 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:19.996 21:37:40 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:19.996 21:37:40 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:19.996 21:37:40 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:15:19.996 21:37:40 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:15:20.253 [2024-12-06 21:37:40.497127] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:15:20.253 [2024-12-06 21:37:40.499169] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:15:20.253 [2024-12-06 21:37:40.499433] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc1 00:15:20.253 [2024-12-06 21:37:40.499665] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc2 00:15:20.253 [2024-12-06 21:37:40.499884] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:20.253 [2024-12-06 21:37:40.500136] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000008180 name raid_bdev1, state configuring 00:15:20.253 request: 00:15:20.253 { 00:15:20.253 "name": "raid_bdev1", 00:15:20.253 "raid_level": "raid1", 00:15:20.253 "base_bdevs": [ 00:15:20.253 "malloc1", 00:15:20.253 "malloc2" 00:15:20.253 ], 00:15:20.253 "superblock": false, 00:15:20.253 "method": "bdev_raid_create", 00:15:20.253 "req_id": 1 00:15:20.253 } 00:15:20.253 Got JSON-RPC error response 00:15:20.253 response: 00:15:20.253 { 00:15:20.253 "code": -17, 00:15:20.253 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:15:20.253 } 00:15:20.253 21:37:40 -- common/autotest_common.sh@653 -- # es=1 00:15:20.253 21:37:40 -- common/autotest_common.sh@661 -- # (( es > 128 )) 
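The NOT helper unwound above turns the failed bdev_raid_create into a pass: rpc.py exits non-zero when the target returns the JSON-RPC error (code -17, "File exists"), because the superblocks already written to malloc1 and malloc2 have re-registered raid_bdev1 in the configuring state. A stand-alone sketch of the same negative check, assuming that prior state:

  if /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock \
      bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1; then
    echo 'bdev_raid_create unexpectedly succeeded' >&2
    exit 1
  fi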
00:15:20.253 21:37:40 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:15:20.253 21:37:40 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:15:20.253 21:37:40 -- bdev/bdev_raid.sh@403 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:20.253 21:37:40 -- bdev/bdev_raid.sh@403 -- # jq -r '.[]' 00:15:20.253 21:37:40 -- bdev/bdev_raid.sh@403 -- # raid_bdev= 00:15:20.253 21:37:40 -- bdev/bdev_raid.sh@404 -- # '[' -n '' ']' 00:15:20.253 21:37:40 -- bdev/bdev_raid.sh@409 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:15:20.512 [2024-12-06 21:37:40.905169] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:15:20.512 [2024-12-06 21:37:40.905252] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:20.512 [2024-12-06 21:37:40.905283] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000008780 00:15:20.512 [2024-12-06 21:37:40.905297] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:20.512 [2024-12-06 21:37:40.907692] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:20.512 [2024-12-06 21:37:40.907735] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:15:20.512 [2024-12-06 21:37:40.907835] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1 00:15:20.512 [2024-12-06 21:37:40.907889] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:15:20.512 pt1 00:15:20.512 21:37:40 -- bdev/bdev_raid.sh@412 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:15:20.512 21:37:40 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:15:20.512 21:37:40 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:15:20.512 21:37:40 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:15:20.512 21:37:40 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:15:20.512 21:37:40 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:15:20.512 21:37:40 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:20.512 21:37:40 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:20.512 21:37:40 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:20.512 21:37:40 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:20.512 21:37:40 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:20.513 21:37:40 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:20.771 21:37:41 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:20.771 "name": "raid_bdev1", 00:15:20.771 "uuid": "b44fcaa1-b682-4965-9c37-e9eed0df3dbf", 00:15:20.771 "strip_size_kb": 0, 00:15:20.771 "state": "configuring", 00:15:20.771 "raid_level": "raid1", 00:15:20.771 "superblock": true, 00:15:20.771 "num_base_bdevs": 2, 00:15:20.771 "num_base_bdevs_discovered": 1, 00:15:20.771 "num_base_bdevs_operational": 2, 00:15:20.771 "base_bdevs_list": [ 00:15:20.771 { 00:15:20.771 "name": "pt1", 00:15:20.771 "uuid": "b0ff18b0-bea1-5cc1-8cfa-7045172c9bda", 00:15:20.771 "is_configured": true, 00:15:20.771 "data_offset": 2048, 00:15:20.771 "data_size": 63488 00:15:20.771 }, 00:15:20.771 { 00:15:20.771 "name": null, 00:15:20.771 "uuid": "0ee9e406-18f0-54e8-a9e8-2c230dfafed8", 00:15:20.771 "is_configured": false, 
00:15:20.771 "data_offset": 2048, 00:15:20.771 "data_size": 63488 00:15:20.771 } 00:15:20.771 ] 00:15:20.771 }' 00:15:20.771 21:37:41 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:20.771 21:37:41 -- common/autotest_common.sh@10 -- # set +x 00:15:21.030 21:37:41 -- bdev/bdev_raid.sh@414 -- # '[' 2 -gt 2 ']' 00:15:21.030 21:37:41 -- bdev/bdev_raid.sh@422 -- # (( i = 1 )) 00:15:21.030 21:37:41 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:15:21.030 21:37:41 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:21.289 [2024-12-06 21:37:41.725388] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:21.289 [2024-12-06 21:37:41.725531] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:21.289 [2024-12-06 21:37:41.725589] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000009080 00:15:21.289 [2024-12-06 21:37:41.725605] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:21.289 [2024-12-06 21:37:41.726125] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:21.289 [2024-12-06 21:37:41.726157] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:21.289 [2024-12-06 21:37:41.726289] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:15:21.289 [2024-12-06 21:37:41.726315] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:21.289 [2024-12-06 21:37:41.726463] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x516000008d80 00:15:21.289 [2024-12-06 21:37:41.726477] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:15:21.289 [2024-12-06 21:37:41.726607] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d0000056c0 00:15:21.289 [2024-12-06 21:37:41.726938] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x516000008d80 00:15:21.289 [2024-12-06 21:37:41.726957] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x516000008d80 00:15:21.289 [2024-12-06 21:37:41.727096] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:21.289 pt2 00:15:21.289 21:37:41 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:15:21.289 21:37:41 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:15:21.289 21:37:41 -- bdev/bdev_raid.sh@427 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:15:21.289 21:37:41 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:15:21.289 21:37:41 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:15:21.289 21:37:41 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:15:21.289 21:37:41 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:15:21.289 21:37:41 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:15:21.289 21:37:41 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:21.289 21:37:41 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:21.289 21:37:41 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:21.289 21:37:41 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:21.289 21:37:41 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:21.289 21:37:41 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:21.547 21:37:41 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:21.547 "name": "raid_bdev1", 00:15:21.548 "uuid": "b44fcaa1-b682-4965-9c37-e9eed0df3dbf", 00:15:21.548 "strip_size_kb": 0, 00:15:21.548 "state": "online", 00:15:21.548 "raid_level": "raid1", 00:15:21.548 "superblock": true, 00:15:21.548 "num_base_bdevs": 2, 00:15:21.548 "num_base_bdevs_discovered": 2, 00:15:21.548 "num_base_bdevs_operational": 2, 00:15:21.548 "base_bdevs_list": [ 00:15:21.548 { 00:15:21.548 "name": "pt1", 00:15:21.548 "uuid": "b0ff18b0-bea1-5cc1-8cfa-7045172c9bda", 00:15:21.548 "is_configured": true, 00:15:21.548 "data_offset": 2048, 00:15:21.548 "data_size": 63488 00:15:21.548 }, 00:15:21.548 { 00:15:21.548 "name": "pt2", 00:15:21.548 "uuid": "0ee9e406-18f0-54e8-a9e8-2c230dfafed8", 00:15:21.548 "is_configured": true, 00:15:21.548 "data_offset": 2048, 00:15:21.548 "data_size": 63488 00:15:21.548 } 00:15:21.548 ] 00:15:21.548 }' 00:15:21.548 21:37:41 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:21.548 21:37:41 -- common/autotest_common.sh@10 -- # set +x 00:15:21.806 21:37:42 -- bdev/bdev_raid.sh@430 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:15:21.806 21:37:42 -- bdev/bdev_raid.sh@430 -- # jq -r '.[] | .uuid' 00:15:22.064 [2024-12-06 21:37:42.433768] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:22.064 21:37:42 -- bdev/bdev_raid.sh@430 -- # '[' b44fcaa1-b682-4965-9c37-e9eed0df3dbf '!=' b44fcaa1-b682-4965-9c37-e9eed0df3dbf ']' 00:15:22.064 21:37:42 -- bdev/bdev_raid.sh@434 -- # has_redundancy raid1 00:15:22.064 21:37:42 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:15:22.064 21:37:42 -- bdev/bdev_raid.sh@196 -- # return 0 00:15:22.064 21:37:42 -- bdev/bdev_raid.sh@436 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:15:22.323 [2024-12-06 21:37:42.673653] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:15:22.323 21:37:42 -- bdev/bdev_raid.sh@439 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:15:22.323 21:37:42 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:15:22.323 21:37:42 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:15:22.323 21:37:42 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:15:22.323 21:37:42 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:15:22.323 21:37:42 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:15:22.323 21:37:42 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:22.323 21:37:42 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:22.323 21:37:42 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:22.323 21:37:42 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:22.323 21:37:42 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:22.323 21:37:42 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:22.581 21:37:42 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:22.581 "name": "raid_bdev1", 00:15:22.581 "uuid": "b44fcaa1-b682-4965-9c37-e9eed0df3dbf", 00:15:22.581 "strip_size_kb": 0, 00:15:22.581 "state": "online", 00:15:22.581 "raid_level": "raid1", 00:15:22.581 "superblock": true, 00:15:22.581 "num_base_bdevs": 2, 00:15:22.581 "num_base_bdevs_discovered": 1, 00:15:22.581 "num_base_bdevs_operational": 1, 00:15:22.581 "base_bdevs_list": [ 00:15:22.581 { 
00:15:22.581 "name": null, 00:15:22.581 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:22.581 "is_configured": false, 00:15:22.581 "data_offset": 2048, 00:15:22.581 "data_size": 63488 00:15:22.581 }, 00:15:22.581 { 00:15:22.581 "name": "pt2", 00:15:22.581 "uuid": "0ee9e406-18f0-54e8-a9e8-2c230dfafed8", 00:15:22.581 "is_configured": true, 00:15:22.581 "data_offset": 2048, 00:15:22.581 "data_size": 63488 00:15:22.581 } 00:15:22.581 ] 00:15:22.581 }' 00:15:22.581 21:37:42 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:22.581 21:37:42 -- common/autotest_common.sh@10 -- # set +x 00:15:22.839 21:37:43 -- bdev/bdev_raid.sh@442 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:15:23.096 [2024-12-06 21:37:43.421849] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:23.096 [2024-12-06 21:37:43.421885] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:23.096 [2024-12-06 21:37:43.421967] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:23.096 [2024-12-06 21:37:43.422024] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:23.096 [2024-12-06 21:37:43.422041] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000008d80 name raid_bdev1, state offline 00:15:23.096 21:37:43 -- bdev/bdev_raid.sh@443 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:23.096 21:37:43 -- bdev/bdev_raid.sh@443 -- # jq -r '.[]' 00:15:23.355 21:37:43 -- bdev/bdev_raid.sh@443 -- # raid_bdev= 00:15:23.355 21:37:43 -- bdev/bdev_raid.sh@444 -- # '[' -n '' ']' 00:15:23.355 21:37:43 -- bdev/bdev_raid.sh@449 -- # (( i = 1 )) 00:15:23.355 21:37:43 -- bdev/bdev_raid.sh@449 -- # (( i < num_base_bdevs )) 00:15:23.355 21:37:43 -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:15:23.613 21:37:43 -- bdev/bdev_raid.sh@449 -- # (( i++ )) 00:15:23.613 21:37:43 -- bdev/bdev_raid.sh@449 -- # (( i < num_base_bdevs )) 00:15:23.613 21:37:43 -- bdev/bdev_raid.sh@454 -- # (( i = 1 )) 00:15:23.613 21:37:43 -- bdev/bdev_raid.sh@454 -- # (( i < num_base_bdevs - 1 )) 00:15:23.613 21:37:43 -- bdev/bdev_raid.sh@462 -- # i=1 00:15:23.613 21:37:43 -- bdev/bdev_raid.sh@463 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:23.613 [2024-12-06 21:37:44.097987] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:23.613 [2024-12-06 21:37:44.098074] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:23.613 [2024-12-06 21:37:44.098104] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000009380 00:15:23.613 [2024-12-06 21:37:44.098153] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:23.613 [2024-12-06 21:37:44.100768] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:23.613 [2024-12-06 21:37:44.100991] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:23.613 [2024-12-06 21:37:44.101105] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:15:23.613 [2024-12-06 21:37:44.101188] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is 
claimed 00:15:23.613 [2024-12-06 21:37:44.101307] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x516000009980 00:15:23.613 [2024-12-06 21:37:44.101345] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:15:23.613 [2024-12-06 21:37:44.101473] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000005790 00:15:23.613 pt2 00:15:23.613 [2024-12-06 21:37:44.101857] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x516000009980 00:15:23.613 [2024-12-06 21:37:44.101880] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x516000009980 00:15:23.613 [2024-12-06 21:37:44.102105] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:23.871 21:37:44 -- bdev/bdev_raid.sh@466 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:15:23.871 21:37:44 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:15:23.871 21:37:44 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:15:23.871 21:37:44 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:15:23.871 21:37:44 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:15:23.871 21:37:44 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:15:23.871 21:37:44 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:23.871 21:37:44 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:23.871 21:37:44 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:23.871 21:37:44 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:23.871 21:37:44 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:23.871 21:37:44 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:23.871 21:37:44 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:23.871 "name": "raid_bdev1", 00:15:23.871 "uuid": "b44fcaa1-b682-4965-9c37-e9eed0df3dbf", 00:15:23.871 "strip_size_kb": 0, 00:15:23.871 "state": "online", 00:15:23.871 "raid_level": "raid1", 00:15:23.871 "superblock": true, 00:15:23.871 "num_base_bdevs": 2, 00:15:23.871 "num_base_bdevs_discovered": 1, 00:15:23.871 "num_base_bdevs_operational": 1, 00:15:23.871 "base_bdevs_list": [ 00:15:23.871 { 00:15:23.871 "name": null, 00:15:23.871 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:23.871 "is_configured": false, 00:15:23.871 "data_offset": 2048, 00:15:23.871 "data_size": 63488 00:15:23.871 }, 00:15:23.871 { 00:15:23.871 "name": "pt2", 00:15:23.871 "uuid": "0ee9e406-18f0-54e8-a9e8-2c230dfafed8", 00:15:23.871 "is_configured": true, 00:15:23.871 "data_offset": 2048, 00:15:23.871 "data_size": 63488 00:15:23.871 } 00:15:23.871 ] 00:15:23.871 }' 00:15:23.871 21:37:44 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:23.871 21:37:44 -- common/autotest_common.sh@10 -- # set +x 00:15:24.129 21:37:44 -- bdev/bdev_raid.sh@468 -- # '[' 2 -gt 2 ']' 00:15:24.129 21:37:44 -- bdev/bdev_raid.sh@506 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:15:24.129 21:37:44 -- bdev/bdev_raid.sh@506 -- # jq -r '.[] | .uuid' 00:15:24.388 [2024-12-06 21:37:44.834434] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:24.388 21:37:44 -- bdev/bdev_raid.sh@506 -- # '[' b44fcaa1-b682-4965-9c37-e9eed0df3dbf '!=' b44fcaa1-b682-4965-9c37-e9eed0df3dbf ']' 00:15:24.388 21:37:44 -- bdev/bdev_raid.sh@511 -- # killprocess 70898 00:15:24.388 21:37:44 -- 
common/autotest_common.sh@936 -- # '[' -z 70898 ']' 00:15:24.388 21:37:44 -- common/autotest_common.sh@940 -- # kill -0 70898 00:15:24.388 21:37:44 -- common/autotest_common.sh@941 -- # uname 00:15:24.388 21:37:44 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:24.388 21:37:44 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 70898 00:15:24.388 killing process with pid 70898 00:15:24.388 21:37:44 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:15:24.388 21:37:44 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:15:24.388 21:37:44 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 70898' 00:15:24.388 21:37:44 -- common/autotest_common.sh@955 -- # kill 70898 00:15:24.388 [2024-12-06 21:37:44.879892] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:24.388 21:37:44 -- common/autotest_common.sh@960 -- # wait 70898 00:15:24.388 [2024-12-06 21:37:44.879974] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:24.388 [2024-12-06 21:37:44.880033] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:24.388 [2024-12-06 21:37:44.880075] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000009980 name raid_bdev1, state offline 00:15:24.646 [2024-12-06 21:37:45.024866] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:26.019 ************************************ 00:15:26.019 END TEST raid_superblock_test 00:15:26.019 ************************************ 00:15:26.019 21:37:46 -- bdev/bdev_raid.sh@513 -- # return 0 00:15:26.019 00:15:26.019 real 0m9.760s 00:15:26.019 user 0m16.215s 00:15:26.019 sys 0m1.393s 00:15:26.019 21:37:46 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:15:26.019 21:37:46 -- common/autotest_common.sh@10 -- # set +x 00:15:26.019 21:37:46 -- bdev/bdev_raid.sh@725 -- # for n in {2..4} 00:15:26.019 21:37:46 -- bdev/bdev_raid.sh@726 -- # for level in raid0 concat raid1 00:15:26.019 21:37:46 -- bdev/bdev_raid.sh@727 -- # run_test raid_state_function_test raid_state_function_test raid0 3 false 00:15:26.019 21:37:46 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:15:26.019 21:37:46 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:26.019 21:37:46 -- common/autotest_common.sh@10 -- # set +x 00:15:26.019 ************************************ 00:15:26.019 START TEST raid_state_function_test 00:15:26.019 ************************************ 00:15:26.019 21:37:46 -- common/autotest_common.sh@1114 -- # raid_state_function_test raid0 3 false 00:15:26.019 21:37:46 -- bdev/bdev_raid.sh@202 -- # local raid_level=raid0 00:15:26.019 21:37:46 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=3 00:15:26.019 21:37:46 -- bdev/bdev_raid.sh@204 -- # local superblock=false 00:15:26.019 21:37:46 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:15:26.019 21:37:46 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:15:26.019 21:37:46 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:15:26.019 21:37:46 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev1 00:15:26.019 21:37:46 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:15:26.019 21:37:46 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:15:26.019 21:37:46 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev2 00:15:26.019 21:37:46 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:15:26.019 21:37:46 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:15:26.019 21:37:46 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev3 
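A condensed sketch of the killprocess() helper exercised in the raid_superblock_test teardown above, reconstructed only from its xtrace; the real autotest_common.sh definition may differ (the sudo branch seen in the trace is elided here):

    killprocess() {
        local pid=$1
        [ -z "$pid" ] && return 1               # same guard as the '[ -z 70898 ]' check
        kill -0 "$pid"                          # assert the process still exists
        if [ "$(uname)" = Linux ]; then
            process_name=$(ps --no-headers -o comm= "$pid")  # reactor_0 for an SPDK app
        fi
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"                             # reap the daemon and surface its exit code
    }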
00:15:26.019 21:37:46 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:15:26.019 21:37:46 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:15:26.019 21:37:46 -- bdev/bdev_raid.sh@206 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:15:26.019 21:37:46 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:15:26.019 21:37:46 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:15:26.019 21:37:46 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:15:26.019 21:37:46 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:15:26.019 21:37:46 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:15:26.019 21:37:46 -- bdev/bdev_raid.sh@212 -- # '[' raid0 '!=' raid1 ']' 00:15:26.019 21:37:46 -- bdev/bdev_raid.sh@213 -- # strip_size=64 00:15:26.019 21:37:46 -- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64' 00:15:26.019 21:37:46 -- bdev/bdev_raid.sh@219 -- # '[' false = true ']' 00:15:26.019 21:37:46 -- bdev/bdev_raid.sh@222 -- # superblock_create_arg= 00:15:26.019 21:37:46 -- bdev/bdev_raid.sh@226 -- # raid_pid=71219 00:15:26.019 Process raid pid: 71219 00:15:26.019 21:37:46 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 71219' 00:15:26.019 21:37:46 -- bdev/bdev_raid.sh@228 -- # waitforlisten 71219 /var/tmp/spdk-raid.sock 00:15:26.019 21:37:46 -- common/autotest_common.sh@829 -- # '[' -z 71219 ']' 00:15:26.019 21:37:46 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:15:26.019 21:37:46 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:15:26.019 21:37:46 -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:26.019 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:15:26.019 21:37:46 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:15:26.019 21:37:46 -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:26.019 21:37:46 -- common/autotest_common.sh@10 -- # set +x 00:15:26.019 [2024-12-06 21:37:46.192735] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
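The raid_pid/waitforlisten sequence traced just above reduces to the launch pattern below; the polling loop is only a stand-in for autotest_common.sh's waitforlisten, not its actual implementation, and rpc_get_methods is used here as a cheap liveness probe:

    /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc \
        -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid &
    raid_pid=$!
    echo "Process raid pid: $raid_pid"
    # Poll until the app answers RPCs on its UNIX socket.
    until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock \
            rpc_get_methods >/dev/null 2>&1; do
        kill -0 "$raid_pid"    # abort if the daemon died during startup
        sleep 0.1
    done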
00:15:26.019 [2024-12-06 21:37:46.192905] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:26.019 [2024-12-06 21:37:46.363659] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:26.279 [2024-12-06 21:37:46.539758] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:26.279 [2024-12-06 21:37:46.709500] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:26.846 21:37:47 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:26.846 21:37:47 -- common/autotest_common.sh@862 -- # return 0 00:15:26.846 21:37:47 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:15:26.846 [2024-12-06 21:37:47.335922] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:26.846 [2024-12-06 21:37:47.336006] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:26.846 [2024-12-06 21:37:47.336039] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:26.846 [2024-12-06 21:37:47.336077] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:26.846 [2024-12-06 21:37:47.336087] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:15:26.846 [2024-12-06 21:37:47.336100] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:27.106 21:37:47 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:15:27.106 21:37:47 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:27.106 21:37:47 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:15:27.106 21:37:47 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:15:27.106 21:37:47 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:15:27.106 21:37:47 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:15:27.106 21:37:47 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:27.106 21:37:47 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:27.106 21:37:47 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:27.106 21:37:47 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:27.106 21:37:47 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:27.106 21:37:47 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:27.365 21:37:47 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:27.365 "name": "Existed_Raid", 00:15:27.365 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:27.365 "strip_size_kb": 64, 00:15:27.365 "state": "configuring", 00:15:27.365 "raid_level": "raid0", 00:15:27.365 "superblock": false, 00:15:27.365 "num_base_bdevs": 3, 00:15:27.365 "num_base_bdevs_discovered": 0, 00:15:27.365 "num_base_bdevs_operational": 3, 00:15:27.365 "base_bdevs_list": [ 00:15:27.365 { 00:15:27.365 "name": "BaseBdev1", 00:15:27.365 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:27.365 "is_configured": false, 00:15:27.365 "data_offset": 0, 00:15:27.365 "data_size": 0 00:15:27.365 }, 00:15:27.365 { 00:15:27.365 "name": "BaseBdev2", 00:15:27.365 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:15:27.365 "is_configured": false, 00:15:27.365 "data_offset": 0, 00:15:27.365 "data_size": 0 00:15:27.365 }, 00:15:27.365 { 00:15:27.365 "name": "BaseBdev3", 00:15:27.365 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:27.365 "is_configured": false, 00:15:27.365 "data_offset": 0, 00:15:27.365 "data_size": 0 00:15:27.365 } 00:15:27.365 ] 00:15:27.365 }' 00:15:27.365 21:37:47 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:27.365 21:37:47 -- common/autotest_common.sh@10 -- # set +x 00:15:27.664 21:37:47 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:15:27.952 [2024-12-06 21:37:48.160148] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:27.952 [2024-12-06 21:37:48.160211] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000006380 name Existed_Raid, state configuring 00:15:27.952 21:37:48 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:15:27.952 [2024-12-06 21:37:48.432260] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:27.952 [2024-12-06 21:37:48.432334] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:27.952 [2024-12-06 21:37:48.432348] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:27.952 [2024-12-06 21:37:48.432365] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:27.952 [2024-12-06 21:37:48.432375] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:15:27.952 [2024-12-06 21:37:48.432388] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:28.211 21:37:48 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:15:28.211 [2024-12-06 21:37:48.673738] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:28.211 BaseBdev1 00:15:28.211 21:37:48 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:15:28.211 21:37:48 -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:15:28.211 21:37:48 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:15:28.211 21:37:48 -- common/autotest_common.sh@899 -- # local i 00:15:28.211 21:37:48 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:15:28.211 21:37:48 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:15:28.211 21:37:48 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:28.470 21:37:48 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:28.730 [ 00:15:28.730 { 00:15:28.730 "name": "BaseBdev1", 00:15:28.730 "aliases": [ 00:15:28.730 "87defdb4-6ff3-4c68-9443-da9d7a06da4a" 00:15:28.730 ], 00:15:28.730 "product_name": "Malloc disk", 00:15:28.730 "block_size": 512, 00:15:28.730 "num_blocks": 65536, 00:15:28.730 "uuid": "87defdb4-6ff3-4c68-9443-da9d7a06da4a", 00:15:28.730 "assigned_rate_limits": { 00:15:28.730 "rw_ios_per_sec": 0, 00:15:28.730 "rw_mbytes_per_sec": 0, 00:15:28.730 "r_mbytes_per_sec": 0, 00:15:28.730 "w_mbytes_per_sec": 0 
00:15:28.730 }, 00:15:28.730 "claimed": true, 00:15:28.730 "claim_type": "exclusive_write", 00:15:28.730 "zoned": false, 00:15:28.730 "supported_io_types": { 00:15:28.730 "read": true, 00:15:28.730 "write": true, 00:15:28.730 "unmap": true, 00:15:28.730 "write_zeroes": true, 00:15:28.730 "flush": true, 00:15:28.730 "reset": true, 00:15:28.730 "compare": false, 00:15:28.730 "compare_and_write": false, 00:15:28.730 "abort": true, 00:15:28.730 "nvme_admin": false, 00:15:28.730 "nvme_io": false 00:15:28.730 }, 00:15:28.730 "memory_domains": [ 00:15:28.730 { 00:15:28.730 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:28.730 "dma_device_type": 2 00:15:28.730 } 00:15:28.730 ], 00:15:28.730 "driver_specific": {} 00:15:28.730 } 00:15:28.730 ] 00:15:28.730 21:37:49 -- common/autotest_common.sh@905 -- # return 0 00:15:28.730 21:37:49 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:15:28.730 21:37:49 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:28.730 21:37:49 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:15:28.730 21:37:49 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:15:28.730 21:37:49 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:15:28.730 21:37:49 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:15:28.730 21:37:49 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:28.730 21:37:49 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:28.730 21:37:49 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:28.730 21:37:49 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:28.730 21:37:49 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:28.730 21:37:49 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:28.989 21:37:49 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:28.989 "name": "Existed_Raid", 00:15:28.989 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:28.989 "strip_size_kb": 64, 00:15:28.989 "state": "configuring", 00:15:28.989 "raid_level": "raid0", 00:15:28.989 "superblock": false, 00:15:28.989 "num_base_bdevs": 3, 00:15:28.989 "num_base_bdevs_discovered": 1, 00:15:28.989 "num_base_bdevs_operational": 3, 00:15:28.989 "base_bdevs_list": [ 00:15:28.989 { 00:15:28.989 "name": "BaseBdev1", 00:15:28.989 "uuid": "87defdb4-6ff3-4c68-9443-da9d7a06da4a", 00:15:28.989 "is_configured": true, 00:15:28.989 "data_offset": 0, 00:15:28.989 "data_size": 65536 00:15:28.989 }, 00:15:28.989 { 00:15:28.989 "name": "BaseBdev2", 00:15:28.989 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:28.989 "is_configured": false, 00:15:28.989 "data_offset": 0, 00:15:28.989 "data_size": 0 00:15:28.989 }, 00:15:28.989 { 00:15:28.989 "name": "BaseBdev3", 00:15:28.989 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:28.989 "is_configured": false, 00:15:28.989 "data_offset": 0, 00:15:28.989 "data_size": 0 00:15:28.989 } 00:15:28.989 ] 00:15:28.989 }' 00:15:28.989 21:37:49 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:28.989 21:37:49 -- common/autotest_common.sh@10 -- # set +x 00:15:29.248 21:37:49 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:15:29.508 [2024-12-06 21:37:49.842153] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:29.508 [2024-12-06 21:37:49.842246] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x516000006680 name Existed_Raid, state configuring 00:15:29.508 21:37:49 -- bdev/bdev_raid.sh@244 -- # '[' false = true ']' 00:15:29.508 21:37:49 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:15:29.765 [2024-12-06 21:37:50.090237] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:29.765 [2024-12-06 21:37:50.092259] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:29.765 [2024-12-06 21:37:50.092341] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:29.765 [2024-12-06 21:37:50.092355] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:15:29.765 [2024-12-06 21:37:50.092369] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:29.765 21:37:50 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:15:29.765 21:37:50 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:15:29.765 21:37:50 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:15:29.765 21:37:50 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:29.765 21:37:50 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:15:29.765 21:37:50 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:15:29.765 21:37:50 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:15:29.765 21:37:50 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:15:29.765 21:37:50 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:29.765 21:37:50 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:29.765 21:37:50 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:29.765 21:37:50 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:29.765 21:37:50 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:29.765 21:37:50 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:30.023 21:37:50 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:30.023 "name": "Existed_Raid", 00:15:30.023 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:30.023 "strip_size_kb": 64, 00:15:30.023 "state": "configuring", 00:15:30.023 "raid_level": "raid0", 00:15:30.023 "superblock": false, 00:15:30.023 "num_base_bdevs": 3, 00:15:30.023 "num_base_bdevs_discovered": 1, 00:15:30.023 "num_base_bdevs_operational": 3, 00:15:30.023 "base_bdevs_list": [ 00:15:30.023 { 00:15:30.023 "name": "BaseBdev1", 00:15:30.023 "uuid": "87defdb4-6ff3-4c68-9443-da9d7a06da4a", 00:15:30.023 "is_configured": true, 00:15:30.023 "data_offset": 0, 00:15:30.023 "data_size": 65536 00:15:30.023 }, 00:15:30.023 { 00:15:30.023 "name": "BaseBdev2", 00:15:30.023 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:30.023 "is_configured": false, 00:15:30.023 "data_offset": 0, 00:15:30.023 "data_size": 0 00:15:30.023 }, 00:15:30.023 { 00:15:30.023 "name": "BaseBdev3", 00:15:30.023 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:30.023 "is_configured": false, 00:15:30.023 "data_offset": 0, 00:15:30.023 "data_size": 0 00:15:30.023 } 00:15:30.023 ] 00:15:30.023 }' 00:15:30.023 21:37:50 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:30.023 21:37:50 -- common/autotest_common.sh@10 -- # set +x 00:15:30.281 21:37:50 -- bdev/bdev_raid.sh@256 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:15:30.539 [2024-12-06 21:37:50.838699] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:30.539 BaseBdev2 00:15:30.539 21:37:50 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:15:30.539 21:37:50 -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:15:30.539 21:37:50 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:15:30.539 21:37:50 -- common/autotest_common.sh@899 -- # local i 00:15:30.539 21:37:50 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:15:30.539 21:37:50 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:15:30.539 21:37:50 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:30.797 21:37:51 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:15:31.066 [ 00:15:31.066 { 00:15:31.066 "name": "BaseBdev2", 00:15:31.066 "aliases": [ 00:15:31.066 "73f38524-e038-406d-9f2b-b54fc5ea03ec" 00:15:31.066 ], 00:15:31.066 "product_name": "Malloc disk", 00:15:31.066 "block_size": 512, 00:15:31.066 "num_blocks": 65536, 00:15:31.066 "uuid": "73f38524-e038-406d-9f2b-b54fc5ea03ec", 00:15:31.066 "assigned_rate_limits": { 00:15:31.066 "rw_ios_per_sec": 0, 00:15:31.066 "rw_mbytes_per_sec": 0, 00:15:31.066 "r_mbytes_per_sec": 0, 00:15:31.066 "w_mbytes_per_sec": 0 00:15:31.066 }, 00:15:31.066 "claimed": true, 00:15:31.066 "claim_type": "exclusive_write", 00:15:31.066 "zoned": false, 00:15:31.066 "supported_io_types": { 00:15:31.066 "read": true, 00:15:31.066 "write": true, 00:15:31.066 "unmap": true, 00:15:31.066 "write_zeroes": true, 00:15:31.066 "flush": true, 00:15:31.066 "reset": true, 00:15:31.066 "compare": false, 00:15:31.066 "compare_and_write": false, 00:15:31.066 "abort": true, 00:15:31.066 "nvme_admin": false, 00:15:31.066 "nvme_io": false 00:15:31.066 }, 00:15:31.066 "memory_domains": [ 00:15:31.066 { 00:15:31.066 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:31.066 "dma_device_type": 2 00:15:31.066 } 00:15:31.066 ], 00:15:31.066 "driver_specific": {} 00:15:31.066 } 00:15:31.066 ] 00:15:31.066 21:37:51 -- common/autotest_common.sh@905 -- # return 0 00:15:31.066 21:37:51 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:15:31.066 21:37:51 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:15:31.066 21:37:51 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:15:31.066 21:37:51 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:31.066 21:37:51 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:15:31.066 21:37:51 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:15:31.066 21:37:51 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:15:31.066 21:37:51 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:15:31.066 21:37:51 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:31.066 21:37:51 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:31.066 21:37:51 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:31.066 21:37:51 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:31.066 21:37:51 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:31.066 21:37:51 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 
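Every verify_raid_bdev_state call in this trace boils down to the query sketched below; the raid_bdev_info dump that follows is one such query's result. A minimal sketch, assuming the daemon is still serving /var/tmp/spdk-raid.sock and using only fields visible in the dumps:

    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    info=$($rpc bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid")')
    state=$(jq -r '.state' <<< "$info")
    discovered=$(jq -r '.num_base_bdevs_discovered' <<< "$info")
    # At this point in the test: still configuring, two of three base bdevs present.
    [ "$state" = configuring ] && [ "$discovered" -eq 2 ]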
00:15:31.066 21:37:51 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:31.066 "name": "Existed_Raid", 00:15:31.066 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:31.066 "strip_size_kb": 64, 00:15:31.066 "state": "configuring", 00:15:31.066 "raid_level": "raid0", 00:15:31.066 "superblock": false, 00:15:31.066 "num_base_bdevs": 3, 00:15:31.066 "num_base_bdevs_discovered": 2, 00:15:31.066 "num_base_bdevs_operational": 3, 00:15:31.066 "base_bdevs_list": [ 00:15:31.066 { 00:15:31.066 "name": "BaseBdev1", 00:15:31.067 "uuid": "87defdb4-6ff3-4c68-9443-da9d7a06da4a", 00:15:31.067 "is_configured": true, 00:15:31.067 "data_offset": 0, 00:15:31.067 "data_size": 65536 00:15:31.067 }, 00:15:31.067 { 00:15:31.067 "name": "BaseBdev2", 00:15:31.067 "uuid": "73f38524-e038-406d-9f2b-b54fc5ea03ec", 00:15:31.067 "is_configured": true, 00:15:31.067 "data_offset": 0, 00:15:31.067 "data_size": 65536 00:15:31.067 }, 00:15:31.067 { 00:15:31.067 "name": "BaseBdev3", 00:15:31.067 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:31.067 "is_configured": false, 00:15:31.067 "data_offset": 0, 00:15:31.067 "data_size": 0 00:15:31.067 } 00:15:31.067 ] 00:15:31.067 }' 00:15:31.067 21:37:51 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:31.067 21:37:51 -- common/autotest_common.sh@10 -- # set +x 00:15:31.631 21:37:51 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:15:31.631 [2024-12-06 21:37:52.048909] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:31.631 [2024-12-06 21:37:52.048971] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x516000006f80 00:15:31.632 [2024-12-06 21:37:52.048986] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:15:31.632 [2024-12-06 21:37:52.049088] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d0000056c0 00:15:31.632 [2024-12-06 21:37:52.049448] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x516000006f80 00:15:31.632 [2024-12-06 21:37:52.049491] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x516000006f80 00:15:31.632 [2024-12-06 21:37:52.049783] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:31.632 BaseBdev3 00:15:31.632 21:37:52 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3 00:15:31.632 21:37:52 -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:15:31.632 21:37:52 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:15:31.632 21:37:52 -- common/autotest_common.sh@899 -- # local i 00:15:31.632 21:37:52 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:15:31.632 21:37:52 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:15:31.632 21:37:52 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:31.890 21:37:52 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:15:32.147 [ 00:15:32.148 { 00:15:32.148 "name": "BaseBdev3", 00:15:32.148 "aliases": [ 00:15:32.148 "323e6b82-99ba-4cdd-9c08-09aed8fc80e3" 00:15:32.148 ], 00:15:32.148 "product_name": "Malloc disk", 00:15:32.148 "block_size": 512, 00:15:32.148 "num_blocks": 65536, 00:15:32.148 "uuid": "323e6b82-99ba-4cdd-9c08-09aed8fc80e3", 00:15:32.148 "assigned_rate_limits": { 00:15:32.148 
"rw_ios_per_sec": 0, 00:15:32.148 "rw_mbytes_per_sec": 0, 00:15:32.148 "r_mbytes_per_sec": 0, 00:15:32.148 "w_mbytes_per_sec": 0 00:15:32.148 }, 00:15:32.148 "claimed": true, 00:15:32.148 "claim_type": "exclusive_write", 00:15:32.148 "zoned": false, 00:15:32.148 "supported_io_types": { 00:15:32.148 "read": true, 00:15:32.148 "write": true, 00:15:32.148 "unmap": true, 00:15:32.148 "write_zeroes": true, 00:15:32.148 "flush": true, 00:15:32.148 "reset": true, 00:15:32.148 "compare": false, 00:15:32.148 "compare_and_write": false, 00:15:32.148 "abort": true, 00:15:32.148 "nvme_admin": false, 00:15:32.148 "nvme_io": false 00:15:32.148 }, 00:15:32.148 "memory_domains": [ 00:15:32.148 { 00:15:32.148 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:32.148 "dma_device_type": 2 00:15:32.148 } 00:15:32.148 ], 00:15:32.148 "driver_specific": {} 00:15:32.148 } 00:15:32.148 ] 00:15:32.148 21:37:52 -- common/autotest_common.sh@905 -- # return 0 00:15:32.148 21:37:52 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:15:32.148 21:37:52 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:15:32.148 21:37:52 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:15:32.148 21:37:52 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:32.148 21:37:52 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:15:32.148 21:37:52 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:15:32.148 21:37:52 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:15:32.148 21:37:52 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:15:32.148 21:37:52 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:32.148 21:37:52 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:32.148 21:37:52 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:32.148 21:37:52 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:32.148 21:37:52 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:32.148 21:37:52 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:32.407 21:37:52 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:32.407 "name": "Existed_Raid", 00:15:32.407 "uuid": "9b494cb2-d4db-4e8b-ba12-b407f3fb6bea", 00:15:32.407 "strip_size_kb": 64, 00:15:32.407 "state": "online", 00:15:32.407 "raid_level": "raid0", 00:15:32.407 "superblock": false, 00:15:32.407 "num_base_bdevs": 3, 00:15:32.407 "num_base_bdevs_discovered": 3, 00:15:32.407 "num_base_bdevs_operational": 3, 00:15:32.407 "base_bdevs_list": [ 00:15:32.407 { 00:15:32.407 "name": "BaseBdev1", 00:15:32.407 "uuid": "87defdb4-6ff3-4c68-9443-da9d7a06da4a", 00:15:32.407 "is_configured": true, 00:15:32.407 "data_offset": 0, 00:15:32.407 "data_size": 65536 00:15:32.407 }, 00:15:32.407 { 00:15:32.407 "name": "BaseBdev2", 00:15:32.407 "uuid": "73f38524-e038-406d-9f2b-b54fc5ea03ec", 00:15:32.407 "is_configured": true, 00:15:32.407 "data_offset": 0, 00:15:32.407 "data_size": 65536 00:15:32.407 }, 00:15:32.407 { 00:15:32.407 "name": "BaseBdev3", 00:15:32.407 "uuid": "323e6b82-99ba-4cdd-9c08-09aed8fc80e3", 00:15:32.407 "is_configured": true, 00:15:32.407 "data_offset": 0, 00:15:32.407 "data_size": 65536 00:15:32.407 } 00:15:32.407 ] 00:15:32.407 }' 00:15:32.407 21:37:52 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:32.407 21:37:52 -- common/autotest_common.sh@10 -- # set +x 00:15:32.665 21:37:53 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_malloc_delete BaseBdev1 00:15:32.923 [2024-12-06 21:37:53.237350] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:32.923 [2024-12-06 21:37:53.237406] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:32.923 [2024-12-06 21:37:53.237528] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:32.923 21:37:53 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:15:32.923 21:37:53 -- bdev/bdev_raid.sh@264 -- # has_redundancy raid0 00:15:32.923 21:37:53 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:15:32.923 21:37:53 -- bdev/bdev_raid.sh@197 -- # return 1 00:15:32.923 21:37:53 -- bdev/bdev_raid.sh@265 -- # expected_state=offline 00:15:32.923 21:37:53 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 2 00:15:32.923 21:37:53 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:32.923 21:37:53 -- bdev/bdev_raid.sh@118 -- # local expected_state=offline 00:15:32.923 21:37:53 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:15:32.923 21:37:53 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:15:32.923 21:37:53 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:15:32.923 21:37:53 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:32.923 21:37:53 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:32.923 21:37:53 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:32.923 21:37:53 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:32.923 21:37:53 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:32.923 21:37:53 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:33.181 21:37:53 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:33.181 "name": "Existed_Raid", 00:15:33.181 "uuid": "9b494cb2-d4db-4e8b-ba12-b407f3fb6bea", 00:15:33.181 "strip_size_kb": 64, 00:15:33.181 "state": "offline", 00:15:33.181 "raid_level": "raid0", 00:15:33.181 "superblock": false, 00:15:33.181 "num_base_bdevs": 3, 00:15:33.181 "num_base_bdevs_discovered": 2, 00:15:33.181 "num_base_bdevs_operational": 2, 00:15:33.181 "base_bdevs_list": [ 00:15:33.181 { 00:15:33.181 "name": null, 00:15:33.181 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:33.181 "is_configured": false, 00:15:33.181 "data_offset": 0, 00:15:33.181 "data_size": 65536 00:15:33.181 }, 00:15:33.181 { 00:15:33.181 "name": "BaseBdev2", 00:15:33.181 "uuid": "73f38524-e038-406d-9f2b-b54fc5ea03ec", 00:15:33.181 "is_configured": true, 00:15:33.181 "data_offset": 0, 00:15:33.181 "data_size": 65536 00:15:33.181 }, 00:15:33.181 { 00:15:33.181 "name": "BaseBdev3", 00:15:33.181 "uuid": "323e6b82-99ba-4cdd-9c08-09aed8fc80e3", 00:15:33.181 "is_configured": true, 00:15:33.181 "data_offset": 0, 00:15:33.181 "data_size": 65536 00:15:33.181 } 00:15:33.181 ] 00:15:33.181 }' 00:15:33.181 21:37:53 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:33.181 21:37:53 -- common/autotest_common.sh@10 -- # set +x 00:15:33.440 21:37:53 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:15:33.440 21:37:53 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:15:33.440 21:37:53 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:15:33.440 21:37:53 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:33.698 21:37:54 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:15:33.698 21:37:54 -- bdev/bdev_raid.sh@275 -- 
# '[' Existed_Raid '!=' Existed_Raid ']' 00:15:33.698 21:37:54 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:15:33.956 [2024-12-06 21:37:54.336584] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:33.957 21:37:54 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:15:33.957 21:37:54 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:15:33.957 21:37:54 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:33.957 21:37:54 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:15:34.216 21:37:54 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:15:34.216 21:37:54 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:34.216 21:37:54 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:15:34.475 [2024-12-06 21:37:54.878533] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:15:34.475 [2024-12-06 21:37:54.878591] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000006f80 name Existed_Raid, state offline 00:15:34.475 21:37:54 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:15:34.475 21:37:54 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:15:34.475 21:37:54 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:34.475 21:37:54 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:15:34.734 21:37:55 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:15:34.734 21:37:55 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:15:34.734 21:37:55 -- bdev/bdev_raid.sh@287 -- # killprocess 71219 00:15:34.734 21:37:55 -- common/autotest_common.sh@936 -- # '[' -z 71219 ']' 00:15:34.734 21:37:55 -- common/autotest_common.sh@940 -- # kill -0 71219 00:15:34.734 21:37:55 -- common/autotest_common.sh@941 -- # uname 00:15:34.734 21:37:55 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:34.734 21:37:55 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 71219 00:15:34.734 killing process with pid 71219 00:15:34.734 21:37:55 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:15:34.734 21:37:55 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:15:34.734 21:37:55 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 71219' 00:15:34.734 21:37:55 -- common/autotest_common.sh@955 -- # kill 71219 00:15:34.734 [2024-12-06 21:37:55.198301] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:34.734 21:37:55 -- common/autotest_common.sh@960 -- # wait 71219 00:15:34.734 [2024-12-06 21:37:55.198402] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:36.112 21:37:56 -- bdev/bdev_raid.sh@289 -- # return 0 00:15:36.112 00:15:36.112 real 0m10.115s 00:15:36.112 user 0m16.768s 00:15:36.112 sys 0m1.486s 00:15:36.112 ************************************ 00:15:36.112 END TEST raid_state_function_test 00:15:36.112 ************************************ 00:15:36.112 21:37:56 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:15:36.112 21:37:56 -- common/autotest_common.sh@10 -- # set +x 00:15:36.112 21:37:56 -- bdev/bdev_raid.sh@728 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 3 true 00:15:36.112 21:37:56 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:15:36.112 21:37:56 -- 
common/autotest_common.sh@1093 -- # xtrace_disable 00:15:36.112 21:37:56 -- common/autotest_common.sh@10 -- # set +x 00:15:36.112 ************************************ 00:15:36.112 START TEST raid_state_function_test_sb 00:15:36.112 ************************************ 00:15:36.112 21:37:56 -- common/autotest_common.sh@1114 -- # raid_state_function_test raid0 3 true 00:15:36.112 21:37:56 -- bdev/bdev_raid.sh@202 -- # local raid_level=raid0 00:15:36.112 21:37:56 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=3 00:15:36.112 21:37:56 -- bdev/bdev_raid.sh@204 -- # local superblock=true 00:15:36.112 21:37:56 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:15:36.112 21:37:56 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:15:36.112 21:37:56 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:15:36.112 21:37:56 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev1 00:15:36.112 21:37:56 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:15:36.112 21:37:56 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:15:36.112 21:37:56 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev2 00:15:36.112 21:37:56 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:15:36.112 21:37:56 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:15:36.112 21:37:56 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev3 00:15:36.112 21:37:56 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:15:36.112 21:37:56 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:15:36.112 21:37:56 -- bdev/bdev_raid.sh@206 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:15:36.112 21:37:56 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:15:36.112 21:37:56 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:15:36.112 21:37:56 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:15:36.112 21:37:56 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:15:36.112 21:37:56 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:15:36.112 21:37:56 -- bdev/bdev_raid.sh@212 -- # '[' raid0 '!=' raid1 ']' 00:15:36.112 21:37:56 -- bdev/bdev_raid.sh@213 -- # strip_size=64 00:15:36.112 21:37:56 -- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64' 00:15:36.112 21:37:56 -- bdev/bdev_raid.sh@219 -- # '[' true = true ']' 00:15:36.112 21:37:56 -- bdev/bdev_raid.sh@220 -- # superblock_create_arg=-s 00:15:36.112 21:37:56 -- bdev/bdev_raid.sh@226 -- # raid_pid=71559 00:15:36.112 Process raid pid: 71559 00:15:36.112 21:37:56 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:15:36.112 21:37:56 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 71559' 00:15:36.112 21:37:56 -- bdev/bdev_raid.sh@228 -- # waitforlisten 71559 /var/tmp/spdk-raid.sock 00:15:36.112 21:37:56 -- common/autotest_common.sh@829 -- # '[' -z 71559 ']' 00:15:36.112 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:15:36.112 21:37:56 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:15:36.112 21:37:56 -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:36.112 21:37:56 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:15:36.112 21:37:56 -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:36.112 21:37:56 -- common/autotest_common.sh@10 -- # set +x 00:15:36.112 [2024-12-06 21:37:56.355439] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:15:36.112 [2024-12-06 21:37:56.355812] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:36.113 [2024-12-06 21:37:56.515850] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:36.372 [2024-12-06 21:37:56.682418] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:36.372 [2024-12-06 21:37:56.851754] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:36.941 21:37:57 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:36.941 21:37:57 -- common/autotest_common.sh@862 -- # return 0 00:15:36.941 21:37:57 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:15:37.200 [2024-12-06 21:37:57.493112] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:37.200 [2024-12-06 21:37:57.493198] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:37.200 [2024-12-06 21:37:57.493213] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:37.200 [2024-12-06 21:37:57.493227] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:37.200 [2024-12-06 21:37:57.493236] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:15:37.200 [2024-12-06 21:37:57.493248] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:37.200 21:37:57 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:15:37.200 21:37:57 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:37.200 21:37:57 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:15:37.200 21:37:57 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:15:37.200 21:37:57 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:15:37.200 21:37:57 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:15:37.200 21:37:57 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:37.200 21:37:57 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:37.200 21:37:57 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:37.200 21:37:57 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:37.200 21:37:57 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:37.200 21:37:57 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:37.459 21:37:57 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:37.459 "name": "Existed_Raid", 00:15:37.459 "uuid": "4b3a39a5-faa1-4fb8-b539-ae87cbc14cf2", 00:15:37.459 "strip_size_kb": 64, 00:15:37.459 "state": "configuring", 00:15:37.459 "raid_level": "raid0", 00:15:37.459 "superblock": true, 00:15:37.459 "num_base_bdevs": 3, 00:15:37.459 "num_base_bdevs_discovered": 0, 00:15:37.459 "num_base_bdevs_operational": 3, 00:15:37.459 "base_bdevs_list": [ 00:15:37.459 { 00:15:37.459 "name": "BaseBdev1", 00:15:37.459 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:37.459 "is_configured": false, 00:15:37.459 "data_offset": 0, 00:15:37.459 "data_size": 0 00:15:37.459 }, 00:15:37.459 { 00:15:37.459 "name": "BaseBdev2", 00:15:37.459 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:15:37.459 "is_configured": false, 00:15:37.459 "data_offset": 0, 00:15:37.459 "data_size": 0 00:15:37.459 }, 00:15:37.459 { 00:15:37.459 "name": "BaseBdev3", 00:15:37.459 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:37.459 "is_configured": false, 00:15:37.459 "data_offset": 0, 00:15:37.459 "data_size": 0 00:15:37.459 } 00:15:37.459 ] 00:15:37.459 }' 00:15:37.459 21:37:57 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:37.459 21:37:57 -- common/autotest_common.sh@10 -- # set +x 00:15:37.719 21:37:58 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:15:37.978 [2024-12-06 21:37:58.321267] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:37.978 [2024-12-06 21:37:58.321333] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000006380 name Existed_Raid, state configuring 00:15:37.978 21:37:58 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:15:38.237 [2024-12-06 21:37:58.549393] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:38.237 [2024-12-06 21:37:58.549685] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:38.237 [2024-12-06 21:37:58.549711] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:38.237 [2024-12-06 21:37:58.549729] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:38.237 [2024-12-06 21:37:58.549738] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:15:38.237 [2024-12-06 21:37:58.549750] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:38.237 21:37:58 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:15:38.496 [2024-12-06 21:37:58.822366] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:38.496 BaseBdev1 00:15:38.496 21:37:58 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:15:38.496 21:37:58 -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:15:38.496 21:37:58 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:15:38.496 21:37:58 -- common/autotest_common.sh@899 -- # local i 00:15:38.496 21:37:58 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:15:38.496 21:37:58 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:15:38.496 21:37:58 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:38.755 21:37:59 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:38.755 [ 00:15:38.755 { 00:15:38.755 "name": "BaseBdev1", 00:15:38.755 "aliases": [ 00:15:38.755 "8d1e2d29-8586-4513-a3b9-44b1ecce9170" 00:15:38.755 ], 00:15:38.755 "product_name": "Malloc disk", 00:15:38.755 "block_size": 512, 00:15:38.755 "num_blocks": 65536, 00:15:38.755 "uuid": "8d1e2d29-8586-4513-a3b9-44b1ecce9170", 00:15:38.755 "assigned_rate_limits": { 00:15:38.755 "rw_ios_per_sec": 0, 00:15:38.755 "rw_mbytes_per_sec": 0, 00:15:38.755 "r_mbytes_per_sec": 0, 00:15:38.755 
"w_mbytes_per_sec": 0 00:15:38.755 }, 00:15:38.755 "claimed": true, 00:15:38.755 "claim_type": "exclusive_write", 00:15:38.755 "zoned": false, 00:15:38.755 "supported_io_types": { 00:15:38.755 "read": true, 00:15:38.755 "write": true, 00:15:38.755 "unmap": true, 00:15:38.755 "write_zeroes": true, 00:15:38.755 "flush": true, 00:15:38.755 "reset": true, 00:15:38.755 "compare": false, 00:15:38.755 "compare_and_write": false, 00:15:38.755 "abort": true, 00:15:38.755 "nvme_admin": false, 00:15:38.755 "nvme_io": false 00:15:38.755 }, 00:15:38.755 "memory_domains": [ 00:15:38.755 { 00:15:38.755 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:38.755 "dma_device_type": 2 00:15:38.755 } 00:15:38.755 ], 00:15:38.755 "driver_specific": {} 00:15:38.755 } 00:15:38.755 ] 00:15:38.755 21:37:59 -- common/autotest_common.sh@905 -- # return 0 00:15:38.755 21:37:59 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:15:38.755 21:37:59 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:38.755 21:37:59 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:15:38.755 21:37:59 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:15:38.755 21:37:59 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:15:38.755 21:37:59 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:15:38.755 21:37:59 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:38.755 21:37:59 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:38.755 21:37:59 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:38.755 21:37:59 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:38.755 21:37:59 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:38.755 21:37:59 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:39.014 21:37:59 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:39.014 "name": "Existed_Raid", 00:15:39.014 "uuid": "a646b79a-f63f-44a2-9d30-e16de068d0dc", 00:15:39.014 "strip_size_kb": 64, 00:15:39.014 "state": "configuring", 00:15:39.014 "raid_level": "raid0", 00:15:39.014 "superblock": true, 00:15:39.014 "num_base_bdevs": 3, 00:15:39.014 "num_base_bdevs_discovered": 1, 00:15:39.014 "num_base_bdevs_operational": 3, 00:15:39.014 "base_bdevs_list": [ 00:15:39.014 { 00:15:39.014 "name": "BaseBdev1", 00:15:39.014 "uuid": "8d1e2d29-8586-4513-a3b9-44b1ecce9170", 00:15:39.014 "is_configured": true, 00:15:39.014 "data_offset": 2048, 00:15:39.014 "data_size": 63488 00:15:39.014 }, 00:15:39.014 { 00:15:39.014 "name": "BaseBdev2", 00:15:39.014 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:39.014 "is_configured": false, 00:15:39.014 "data_offset": 0, 00:15:39.014 "data_size": 0 00:15:39.014 }, 00:15:39.014 { 00:15:39.014 "name": "BaseBdev3", 00:15:39.014 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:39.014 "is_configured": false, 00:15:39.014 "data_offset": 0, 00:15:39.014 "data_size": 0 00:15:39.014 } 00:15:39.014 ] 00:15:39.014 }' 00:15:39.014 21:37:59 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:39.014 21:37:59 -- common/autotest_common.sh@10 -- # set +x 00:15:39.272 21:37:59 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:15:39.532 [2024-12-06 21:37:59.974743] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:39.532 [2024-12-06 21:37:59.974809] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: 
raid_bdev_cleanup, 0x516000006680 name Existed_Raid, state configuring 00:15:39.532 21:37:59 -- bdev/bdev_raid.sh@244 -- # '[' true = true ']' 00:15:39.532 21:37:59 -- bdev/bdev_raid.sh@246 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:15:39.790 21:38:00 -- bdev/bdev_raid.sh@247 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:15:40.049 BaseBdev1 00:15:40.049 21:38:00 -- bdev/bdev_raid.sh@248 -- # waitforbdev BaseBdev1 00:15:40.049 21:38:00 -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:15:40.049 21:38:00 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:15:40.049 21:38:00 -- common/autotest_common.sh@899 -- # local i 00:15:40.049 21:38:00 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:15:40.049 21:38:00 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:15:40.049 21:38:00 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:40.308 21:38:00 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:40.567 [ 00:15:40.567 { 00:15:40.567 "name": "BaseBdev1", 00:15:40.567 "aliases": [ 00:15:40.567 "7ad58a07-026f-4c13-8555-2feecb85a44a" 00:15:40.567 ], 00:15:40.567 "product_name": "Malloc disk", 00:15:40.567 "block_size": 512, 00:15:40.567 "num_blocks": 65536, 00:15:40.567 "uuid": "7ad58a07-026f-4c13-8555-2feecb85a44a", 00:15:40.567 "assigned_rate_limits": { 00:15:40.567 "rw_ios_per_sec": 0, 00:15:40.567 "rw_mbytes_per_sec": 0, 00:15:40.567 "r_mbytes_per_sec": 0, 00:15:40.567 "w_mbytes_per_sec": 0 00:15:40.567 }, 00:15:40.567 "claimed": false, 00:15:40.567 "zoned": false, 00:15:40.567 "supported_io_types": { 00:15:40.567 "read": true, 00:15:40.567 "write": true, 00:15:40.567 "unmap": true, 00:15:40.567 "write_zeroes": true, 00:15:40.567 "flush": true, 00:15:40.567 "reset": true, 00:15:40.567 "compare": false, 00:15:40.567 "compare_and_write": false, 00:15:40.567 "abort": true, 00:15:40.567 "nvme_admin": false, 00:15:40.567 "nvme_io": false 00:15:40.567 }, 00:15:40.567 "memory_domains": [ 00:15:40.567 { 00:15:40.567 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:40.567 "dma_device_type": 2 00:15:40.567 } 00:15:40.567 ], 00:15:40.567 "driver_specific": {} 00:15:40.567 } 00:15:40.567 ] 00:15:40.567 21:38:00 -- common/autotest_common.sh@905 -- # return 0 00:15:40.567 21:38:00 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:15:40.826 [2024-12-06 21:38:01.148926] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:40.826 [2024-12-06 21:38:01.150831] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:40.826 [2024-12-06 21:38:01.150893] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:40.826 [2024-12-06 21:38:01.150906] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:15:40.826 [2024-12-06 21:38:01.150919] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:40.826 21:38:01 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:15:40.826 21:38:01 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:15:40.826 
21:38:01 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:15:40.826 21:38:01 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:40.826 21:38:01 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:15:40.826 21:38:01 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:15:40.826 21:38:01 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:15:40.826 21:38:01 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:15:40.826 21:38:01 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:40.826 21:38:01 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:40.826 21:38:01 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:40.826 21:38:01 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:40.826 21:38:01 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:40.826 21:38:01 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:41.084 21:38:01 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:41.084 "name": "Existed_Raid", 00:15:41.084 "uuid": "10ef8e4e-e414-48d8-b080-6eb5fc21d5e3", 00:15:41.084 "strip_size_kb": 64, 00:15:41.084 "state": "configuring", 00:15:41.084 "raid_level": "raid0", 00:15:41.084 "superblock": true, 00:15:41.084 "num_base_bdevs": 3, 00:15:41.084 "num_base_bdevs_discovered": 1, 00:15:41.084 "num_base_bdevs_operational": 3, 00:15:41.084 "base_bdevs_list": [ 00:15:41.084 { 00:15:41.084 "name": "BaseBdev1", 00:15:41.084 "uuid": "7ad58a07-026f-4c13-8555-2feecb85a44a", 00:15:41.084 "is_configured": true, 00:15:41.084 "data_offset": 2048, 00:15:41.084 "data_size": 63488 00:15:41.084 }, 00:15:41.084 { 00:15:41.084 "name": "BaseBdev2", 00:15:41.084 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:41.084 "is_configured": false, 00:15:41.084 "data_offset": 0, 00:15:41.084 "data_size": 0 00:15:41.084 }, 00:15:41.084 { 00:15:41.084 "name": "BaseBdev3", 00:15:41.084 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:41.084 "is_configured": false, 00:15:41.084 "data_offset": 0, 00:15:41.084 "data_size": 0 00:15:41.084 } 00:15:41.084 ] 00:15:41.084 }' 00:15:41.084 21:38:01 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:41.084 21:38:01 -- common/autotest_common.sh@10 -- # set +x 00:15:41.391 21:38:01 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:15:41.662 [2024-12-06 21:38:02.035425] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:41.662 BaseBdev2 00:15:41.662 21:38:02 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:15:41.662 21:38:02 -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:15:41.662 21:38:02 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:15:41.662 21:38:02 -- common/autotest_common.sh@899 -- # local i 00:15:41.662 21:38:02 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:15:41.662 21:38:02 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:15:41.662 21:38:02 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:41.920 21:38:02 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:15:42.178 [ 00:15:42.178 { 00:15:42.178 "name": "BaseBdev2", 00:15:42.178 "aliases": [ 00:15:42.178 
"fefeee59-2114-4307-96a6-354e59ad2516" 00:15:42.178 ], 00:15:42.178 "product_name": "Malloc disk", 00:15:42.178 "block_size": 512, 00:15:42.178 "num_blocks": 65536, 00:15:42.178 "uuid": "fefeee59-2114-4307-96a6-354e59ad2516", 00:15:42.178 "assigned_rate_limits": { 00:15:42.178 "rw_ios_per_sec": 0, 00:15:42.178 "rw_mbytes_per_sec": 0, 00:15:42.178 "r_mbytes_per_sec": 0, 00:15:42.178 "w_mbytes_per_sec": 0 00:15:42.178 }, 00:15:42.178 "claimed": true, 00:15:42.178 "claim_type": "exclusive_write", 00:15:42.178 "zoned": false, 00:15:42.178 "supported_io_types": { 00:15:42.178 "read": true, 00:15:42.178 "write": true, 00:15:42.178 "unmap": true, 00:15:42.178 "write_zeroes": true, 00:15:42.178 "flush": true, 00:15:42.178 "reset": true, 00:15:42.178 "compare": false, 00:15:42.178 "compare_and_write": false, 00:15:42.178 "abort": true, 00:15:42.178 "nvme_admin": false, 00:15:42.178 "nvme_io": false 00:15:42.178 }, 00:15:42.178 "memory_domains": [ 00:15:42.178 { 00:15:42.178 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:42.178 "dma_device_type": 2 00:15:42.178 } 00:15:42.178 ], 00:15:42.178 "driver_specific": {} 00:15:42.178 } 00:15:42.178 ] 00:15:42.178 21:38:02 -- common/autotest_common.sh@905 -- # return 0 00:15:42.178 21:38:02 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:15:42.178 21:38:02 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:15:42.178 21:38:02 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:15:42.178 21:38:02 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:42.178 21:38:02 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:15:42.178 21:38:02 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:15:42.178 21:38:02 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:15:42.178 21:38:02 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:15:42.178 21:38:02 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:42.178 21:38:02 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:42.178 21:38:02 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:42.178 21:38:02 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:42.178 21:38:02 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:42.178 21:38:02 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:42.437 21:38:02 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:42.437 "name": "Existed_Raid", 00:15:42.437 "uuid": "10ef8e4e-e414-48d8-b080-6eb5fc21d5e3", 00:15:42.437 "strip_size_kb": 64, 00:15:42.437 "state": "configuring", 00:15:42.437 "raid_level": "raid0", 00:15:42.437 "superblock": true, 00:15:42.437 "num_base_bdevs": 3, 00:15:42.437 "num_base_bdevs_discovered": 2, 00:15:42.437 "num_base_bdevs_operational": 3, 00:15:42.437 "base_bdevs_list": [ 00:15:42.437 { 00:15:42.437 "name": "BaseBdev1", 00:15:42.437 "uuid": "7ad58a07-026f-4c13-8555-2feecb85a44a", 00:15:42.437 "is_configured": true, 00:15:42.437 "data_offset": 2048, 00:15:42.437 "data_size": 63488 00:15:42.437 }, 00:15:42.437 { 00:15:42.437 "name": "BaseBdev2", 00:15:42.437 "uuid": "fefeee59-2114-4307-96a6-354e59ad2516", 00:15:42.437 "is_configured": true, 00:15:42.437 "data_offset": 2048, 00:15:42.437 "data_size": 63488 00:15:42.437 }, 00:15:42.437 { 00:15:42.437 "name": "BaseBdev3", 00:15:42.437 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:42.437 "is_configured": false, 00:15:42.437 "data_offset": 0, 00:15:42.437 "data_size": 0 00:15:42.437 
} 00:15:42.437 ] 00:15:42.437 }' 00:15:42.437 21:38:02 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:42.437 21:38:02 -- common/autotest_common.sh@10 -- # set +x 00:15:42.696 21:38:02 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:15:42.955 [2024-12-06 21:38:03.247955] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:42.955 [2024-12-06 21:38:03.248445] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x516000007580 00:15:42.955 [2024-12-06 21:38:03.248682] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:15:42.955 [2024-12-06 21:38:03.248841] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000005790 00:15:42.955 [2024-12-06 21:38:03.249232] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x516000007580 00:15:42.955 BaseBdev3 00:15:42.955 [2024-12-06 21:38:03.249357] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x516000007580 00:15:42.955 [2024-12-06 21:38:03.249565] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:42.955 21:38:03 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3 00:15:42.955 21:38:03 -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:15:42.955 21:38:03 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:15:42.955 21:38:03 -- common/autotest_common.sh@899 -- # local i 00:15:42.955 21:38:03 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:15:42.955 21:38:03 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:15:42.955 21:38:03 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:43.214 21:38:03 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:15:43.473 [ 00:15:43.473 { 00:15:43.473 "name": "BaseBdev3", 00:15:43.473 "aliases": [ 00:15:43.473 "9d6e8f94-2ce2-4718-a334-4dc2fb6b3493" 00:15:43.473 ], 00:15:43.473 "product_name": "Malloc disk", 00:15:43.473 "block_size": 512, 00:15:43.473 "num_blocks": 65536, 00:15:43.473 "uuid": "9d6e8f94-2ce2-4718-a334-4dc2fb6b3493", 00:15:43.473 "assigned_rate_limits": { 00:15:43.473 "rw_ios_per_sec": 0, 00:15:43.473 "rw_mbytes_per_sec": 0, 00:15:43.473 "r_mbytes_per_sec": 0, 00:15:43.473 "w_mbytes_per_sec": 0 00:15:43.473 }, 00:15:43.473 "claimed": true, 00:15:43.473 "claim_type": "exclusive_write", 00:15:43.473 "zoned": false, 00:15:43.473 "supported_io_types": { 00:15:43.473 "read": true, 00:15:43.473 "write": true, 00:15:43.473 "unmap": true, 00:15:43.473 "write_zeroes": true, 00:15:43.473 "flush": true, 00:15:43.473 "reset": true, 00:15:43.473 "compare": false, 00:15:43.473 "compare_and_write": false, 00:15:43.473 "abort": true, 00:15:43.473 "nvme_admin": false, 00:15:43.473 "nvme_io": false 00:15:43.473 }, 00:15:43.473 "memory_domains": [ 00:15:43.473 { 00:15:43.473 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:43.473 "dma_device_type": 2 00:15:43.473 } 00:15:43.473 ], 00:15:43.473 "driver_specific": {} 00:15:43.473 } 00:15:43.473 ] 00:15:43.473 21:38:03 -- common/autotest_common.sh@905 -- # return 0 00:15:43.473 21:38:03 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:15:43.473 21:38:03 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:15:43.473 21:38:03 -- bdev/bdev_raid.sh@259 -- # 
verify_raid_bdev_state Existed_Raid online raid0 64 3 00:15:43.473 21:38:03 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:43.473 21:38:03 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:15:43.473 21:38:03 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:15:43.473 21:38:03 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:15:43.473 21:38:03 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:15:43.473 21:38:03 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:43.473 21:38:03 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:43.473 21:38:03 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:43.473 21:38:03 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:43.473 21:38:03 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:43.473 21:38:03 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:43.732 21:38:04 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:43.732 "name": "Existed_Raid", 00:15:43.732 "uuid": "10ef8e4e-e414-48d8-b080-6eb5fc21d5e3", 00:15:43.732 "strip_size_kb": 64, 00:15:43.732 "state": "online", 00:15:43.732 "raid_level": "raid0", 00:15:43.732 "superblock": true, 00:15:43.732 "num_base_bdevs": 3, 00:15:43.732 "num_base_bdevs_discovered": 3, 00:15:43.732 "num_base_bdevs_operational": 3, 00:15:43.732 "base_bdevs_list": [ 00:15:43.732 { 00:15:43.732 "name": "BaseBdev1", 00:15:43.732 "uuid": "7ad58a07-026f-4c13-8555-2feecb85a44a", 00:15:43.732 "is_configured": true, 00:15:43.732 "data_offset": 2048, 00:15:43.732 "data_size": 63488 00:15:43.732 }, 00:15:43.732 { 00:15:43.732 "name": "BaseBdev2", 00:15:43.732 "uuid": "fefeee59-2114-4307-96a6-354e59ad2516", 00:15:43.732 "is_configured": true, 00:15:43.732 "data_offset": 2048, 00:15:43.732 "data_size": 63488 00:15:43.732 }, 00:15:43.732 { 00:15:43.732 "name": "BaseBdev3", 00:15:43.732 "uuid": "9d6e8f94-2ce2-4718-a334-4dc2fb6b3493", 00:15:43.732 "is_configured": true, 00:15:43.732 "data_offset": 2048, 00:15:43.732 "data_size": 63488 00:15:43.732 } 00:15:43.732 ] 00:15:43.732 }' 00:15:43.732 21:38:04 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:43.732 21:38:04 -- common/autotest_common.sh@10 -- # set +x 00:15:43.991 21:38:04 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:15:43.991 [2024-12-06 21:38:04.448441] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:43.991 [2024-12-06 21:38:04.448685] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:43.991 [2024-12-06 21:38:04.448891] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:44.250 21:38:04 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:15:44.250 21:38:04 -- bdev/bdev_raid.sh@264 -- # has_redundancy raid0 00:15:44.250 21:38:04 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:15:44.250 21:38:04 -- bdev/bdev_raid.sh@197 -- # return 1 00:15:44.250 21:38:04 -- bdev/bdev_raid.sh@265 -- # expected_state=offline 00:15:44.250 21:38:04 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 2 00:15:44.250 21:38:04 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:44.250 21:38:04 -- bdev/bdev_raid.sh@118 -- # local expected_state=offline 00:15:44.250 21:38:04 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:15:44.250 21:38:04 -- bdev/bdev_raid.sh@120 -- # 
local strip_size=64 00:15:44.250 21:38:04 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:15:44.250 21:38:04 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:44.250 21:38:04 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:44.250 21:38:04 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:44.250 21:38:04 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:44.250 21:38:04 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:44.250 21:38:04 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:44.509 21:38:04 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:44.509 "name": "Existed_Raid", 00:15:44.509 "uuid": "10ef8e4e-e414-48d8-b080-6eb5fc21d5e3", 00:15:44.509 "strip_size_kb": 64, 00:15:44.509 "state": "offline", 00:15:44.509 "raid_level": "raid0", 00:15:44.509 "superblock": true, 00:15:44.510 "num_base_bdevs": 3, 00:15:44.510 "num_base_bdevs_discovered": 2, 00:15:44.510 "num_base_bdevs_operational": 2, 00:15:44.510 "base_bdevs_list": [ 00:15:44.510 { 00:15:44.510 "name": null, 00:15:44.510 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:44.510 "is_configured": false, 00:15:44.510 "data_offset": 2048, 00:15:44.510 "data_size": 63488 00:15:44.510 }, 00:15:44.510 { 00:15:44.510 "name": "BaseBdev2", 00:15:44.510 "uuid": "fefeee59-2114-4307-96a6-354e59ad2516", 00:15:44.510 "is_configured": true, 00:15:44.510 "data_offset": 2048, 00:15:44.510 "data_size": 63488 00:15:44.510 }, 00:15:44.510 { 00:15:44.510 "name": "BaseBdev3", 00:15:44.510 "uuid": "9d6e8f94-2ce2-4718-a334-4dc2fb6b3493", 00:15:44.510 "is_configured": true, 00:15:44.510 "data_offset": 2048, 00:15:44.510 "data_size": 63488 00:15:44.510 } 00:15:44.510 ] 00:15:44.510 }' 00:15:44.510 21:38:04 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:44.510 21:38:04 -- common/autotest_common.sh@10 -- # set +x 00:15:44.784 21:38:05 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:15:44.784 21:38:05 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:15:44.784 21:38:05 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:44.784 21:38:05 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:15:45.047 21:38:05 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:15:45.047 21:38:05 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:45.047 21:38:05 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:15:45.048 [2024-12-06 21:38:05.484181] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:45.305 21:38:05 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:15:45.305 21:38:05 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:15:45.305 21:38:05 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:45.305 21:38:05 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:15:45.563 21:38:05 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:15:45.563 21:38:05 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:45.563 21:38:05 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:15:45.563 [2024-12-06 21:38:06.054209] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:15:45.563 [2024-12-06 
21:38:06.054468] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000007580 name Existed_Raid, state offline 00:15:45.822 21:38:06 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:15:45.822 21:38:06 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:15:45.822 21:38:06 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:45.822 21:38:06 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:15:46.081 21:38:06 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:15:46.081 21:38:06 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:15:46.081 21:38:06 -- bdev/bdev_raid.sh@287 -- # killprocess 71559 00:15:46.081 21:38:06 -- common/autotest_common.sh@936 -- # '[' -z 71559 ']' 00:15:46.081 21:38:06 -- common/autotest_common.sh@940 -- # kill -0 71559 00:15:46.081 21:38:06 -- common/autotest_common.sh@941 -- # uname 00:15:46.081 21:38:06 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:46.081 21:38:06 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 71559 00:15:46.081 killing process with pid 71559 00:15:46.081 21:38:06 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:15:46.081 21:38:06 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:15:46.081 21:38:06 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 71559' 00:15:46.081 21:38:06 -- common/autotest_common.sh@955 -- # kill 71559 00:15:46.081 [2024-12-06 21:38:06.377579] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:46.081 21:38:06 -- common/autotest_common.sh@960 -- # wait 71559 00:15:46.081 [2024-12-06 21:38:06.377685] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:47.019 21:38:07 -- bdev/bdev_raid.sh@289 -- # return 0 00:15:47.019 00:15:47.019 real 0m11.105s 00:15:47.019 user 0m18.536s 00:15:47.019 sys 0m1.593s 00:15:47.019 21:38:07 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:15:47.019 ************************************ 00:15:47.019 END TEST raid_state_function_test_sb 00:15:47.019 ************************************ 00:15:47.019 21:38:07 -- common/autotest_common.sh@10 -- # set +x 00:15:47.019 21:38:07 -- bdev/bdev_raid.sh@729 -- # run_test raid_superblock_test raid_superblock_test raid0 3 00:15:47.019 21:38:07 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:15:47.019 21:38:07 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:47.019 21:38:07 -- common/autotest_common.sh@10 -- # set +x 00:15:47.019 ************************************ 00:15:47.019 START TEST raid_superblock_test 00:15:47.019 ************************************ 00:15:47.019 21:38:07 -- common/autotest_common.sh@1114 -- # raid_superblock_test raid0 3 00:15:47.019 21:38:07 -- bdev/bdev_raid.sh@338 -- # local raid_level=raid0 00:15:47.019 21:38:07 -- bdev/bdev_raid.sh@339 -- # local num_base_bdevs=3 00:15:47.019 21:38:07 -- bdev/bdev_raid.sh@340 -- # base_bdevs_malloc=() 00:15:47.019 21:38:07 -- bdev/bdev_raid.sh@340 -- # local base_bdevs_malloc 00:15:47.019 21:38:07 -- bdev/bdev_raid.sh@341 -- # base_bdevs_pt=() 00:15:47.019 21:38:07 -- bdev/bdev_raid.sh@341 -- # local base_bdevs_pt 00:15:47.019 21:38:07 -- bdev/bdev_raid.sh@342 -- # base_bdevs_pt_uuid=() 00:15:47.019 21:38:07 -- bdev/bdev_raid.sh@342 -- # local base_bdevs_pt_uuid 00:15:47.019 21:38:07 -- bdev/bdev_raid.sh@343 -- # local raid_bdev_name=raid_bdev1 00:15:47.019 21:38:07 -- bdev/bdev_raid.sh@344 -- # local strip_size 00:15:47.019 21:38:07 -- bdev/bdev_raid.sh@345 
-- # local strip_size_create_arg 00:15:47.019 21:38:07 -- bdev/bdev_raid.sh@346 -- # local raid_bdev_uuid 00:15:47.019 21:38:07 -- bdev/bdev_raid.sh@347 -- # local raid_bdev 00:15:47.019 21:38:07 -- bdev/bdev_raid.sh@349 -- # '[' raid0 '!=' raid1 ']' 00:15:47.019 21:38:07 -- bdev/bdev_raid.sh@350 -- # strip_size=64 00:15:47.019 21:38:07 -- bdev/bdev_raid.sh@351 -- # strip_size_create_arg='-z 64' 00:15:47.019 21:38:07 -- bdev/bdev_raid.sh@357 -- # raid_pid=71913 00:15:47.019 21:38:07 -- bdev/bdev_raid.sh@358 -- # waitforlisten 71913 /var/tmp/spdk-raid.sock 00:15:47.019 21:38:07 -- common/autotest_common.sh@829 -- # '[' -z 71913 ']' 00:15:47.019 21:38:07 -- bdev/bdev_raid.sh@356 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:15:47.019 21:38:07 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:15:47.019 21:38:07 -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:47.019 21:38:07 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:15:47.019 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:15:47.019 21:38:07 -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:47.019 21:38:07 -- common/autotest_common.sh@10 -- # set +x 00:15:47.278 [2024-12-06 21:38:07.518519] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:15:47.279 [2024-12-06 21:38:07.518692] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71913 ] 00:15:47.279 [2024-12-06 21:38:07.687894] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:47.537 [2024-12-06 21:38:07.855483] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:47.537 [2024-12-06 21:38:08.014758] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:48.106 21:38:08 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:48.106 21:38:08 -- common/autotest_common.sh@862 -- # return 0 00:15:48.106 21:38:08 -- bdev/bdev_raid.sh@361 -- # (( i = 1 )) 00:15:48.106 21:38:08 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:15:48.106 21:38:08 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc1 00:15:48.106 21:38:08 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt1 00:15:48.106 21:38:08 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:15:48.106 21:38:08 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:15:48.106 21:38:08 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:15:48.106 21:38:08 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:48.106 21:38:08 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:15:48.364 malloc1 00:15:48.364 21:38:08 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:15:48.624 [2024-12-06 21:38:08.868054] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:15:48.624 [2024-12-06 21:38:08.868181] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:48.624 [2024-12-06 
21:38:08.868227] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000006980 00:15:48.624 [2024-12-06 21:38:08.868242] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:48.624 [2024-12-06 21:38:08.870598] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:48.624 [2024-12-06 21:38:08.870634] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:15:48.624 pt1 00:15:48.624 21:38:08 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:15:48.624 21:38:08 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:15:48.624 21:38:08 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc2 00:15:48.624 21:38:08 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt2 00:15:48.624 21:38:08 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:15:48.624 21:38:08 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:15:48.624 21:38:08 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:15:48.624 21:38:08 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:48.624 21:38:08 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:15:48.883 malloc2 00:15:48.883 21:38:09 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:48.883 [2024-12-06 21:38:09.351183] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:48.883 [2024-12-06 21:38:09.351259] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:48.883 [2024-12-06 21:38:09.351289] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000007580 00:15:48.883 [2024-12-06 21:38:09.351302] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:48.883 [2024-12-06 21:38:09.353860] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:48.883 [2024-12-06 21:38:09.353910] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:48.883 pt2 00:15:48.883 21:38:09 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:15:48.883 21:38:09 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:15:48.883 21:38:09 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc3 00:15:48.883 21:38:09 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt3 00:15:48.883 21:38:09 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:15:48.883 21:38:09 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:15:48.883 21:38:09 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:15:48.883 21:38:09 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:48.883 21:38:09 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:15:49.142 malloc3 00:15:49.142 21:38:09 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:15:49.401 [2024-12-06 21:38:09.761773] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:15:49.401 [2024-12-06 21:38:09.761859] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:49.401 [2024-12-06 
21:38:09.761903] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000008180 00:15:49.401 [2024-12-06 21:38:09.761916] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:49.401 [2024-12-06 21:38:09.764092] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:49.401 [2024-12-06 21:38:09.764168] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:15:49.401 pt3 00:15:49.401 21:38:09 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:15:49.401 21:38:09 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:15:49.401 21:38:09 -- bdev/bdev_raid.sh@375 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'pt1 pt2 pt3' -n raid_bdev1 -s 00:15:49.660 [2024-12-06 21:38:09.957851] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:15:49.660 [2024-12-06 21:38:09.959906] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:49.660 [2024-12-06 21:38:09.960000] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:15:49.660 [2024-12-06 21:38:09.960235] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x516000008780 00:15:49.660 [2024-12-06 21:38:09.960258] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:15:49.660 [2024-12-06 21:38:09.960385] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d0000056c0 00:15:49.660 [2024-12-06 21:38:09.960839] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x516000008780 00:15:49.660 [2024-12-06 21:38:09.960857] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x516000008780 00:15:49.660 [2024-12-06 21:38:09.961027] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:49.660 21:38:09 -- bdev/bdev_raid.sh@376 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:15:49.660 21:38:09 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:15:49.660 21:38:09 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:15:49.660 21:38:09 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:15:49.660 21:38:09 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:15:49.660 21:38:09 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:15:49.660 21:38:09 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:49.660 21:38:09 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:49.660 21:38:09 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:49.660 21:38:09 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:49.660 21:38:09 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:49.660 21:38:09 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:49.919 21:38:10 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:49.919 "name": "raid_bdev1", 00:15:49.919 "uuid": "1f433b1a-8a64-419b-a56f-f6fe37cfcfb9", 00:15:49.919 "strip_size_kb": 64, 00:15:49.919 "state": "online", 00:15:49.919 "raid_level": "raid0", 00:15:49.919 "superblock": true, 00:15:49.919 "num_base_bdevs": 3, 00:15:49.919 "num_base_bdevs_discovered": 3, 00:15:49.919 "num_base_bdevs_operational": 3, 00:15:49.919 "base_bdevs_list": [ 00:15:49.919 { 00:15:49.919 "name": "pt1", 00:15:49.919 "uuid": "04332b06-799e-5e52-b84b-05c965ee9444", 
00:15:49.919 "is_configured": true, 00:15:49.919 "data_offset": 2048, 00:15:49.919 "data_size": 63488 00:15:49.919 }, 00:15:49.919 { 00:15:49.919 "name": "pt2", 00:15:49.919 "uuid": "8b1fda48-ad9f-5930-a8b8-2076e231b02d", 00:15:49.919 "is_configured": true, 00:15:49.919 "data_offset": 2048, 00:15:49.919 "data_size": 63488 00:15:49.919 }, 00:15:49.919 { 00:15:49.919 "name": "pt3", 00:15:49.919 "uuid": "eb5af325-db15-5793-a9ca-c6434e5e9a37", 00:15:49.919 "is_configured": true, 00:15:49.919 "data_offset": 2048, 00:15:49.919 "data_size": 63488 00:15:49.919 } 00:15:49.919 ] 00:15:49.919 }' 00:15:49.919 21:38:10 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:49.919 21:38:10 -- common/autotest_common.sh@10 -- # set +x 00:15:50.178 21:38:10 -- bdev/bdev_raid.sh@379 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:15:50.178 21:38:10 -- bdev/bdev_raid.sh@379 -- # jq -r '.[] | .uuid' 00:15:50.438 [2024-12-06 21:38:10.714183] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:50.438 21:38:10 -- bdev/bdev_raid.sh@379 -- # raid_bdev_uuid=1f433b1a-8a64-419b-a56f-f6fe37cfcfb9 00:15:50.438 21:38:10 -- bdev/bdev_raid.sh@380 -- # '[' -z 1f433b1a-8a64-419b-a56f-f6fe37cfcfb9 ']' 00:15:50.438 21:38:10 -- bdev/bdev_raid.sh@385 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:15:50.438 [2024-12-06 21:38:10.910023] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:50.438 [2024-12-06 21:38:10.910054] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:50.438 [2024-12-06 21:38:10.910156] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:50.438 [2024-12-06 21:38:10.910220] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:50.438 [2024-12-06 21:38:10.910236] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000008780 name raid_bdev1, state offline 00:15:50.438 21:38:10 -- bdev/bdev_raid.sh@386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:50.438 21:38:10 -- bdev/bdev_raid.sh@386 -- # jq -r '.[]' 00:15:50.697 21:38:11 -- bdev/bdev_raid.sh@386 -- # raid_bdev= 00:15:50.697 21:38:11 -- bdev/bdev_raid.sh@387 -- # '[' -n '' ']' 00:15:50.697 21:38:11 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:15:50.697 21:38:11 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:15:50.957 21:38:11 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:15:50.957 21:38:11 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:15:51.216 21:38:11 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:15:51.216 21:38:11 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:15:51.474 21:38:11 -- bdev/bdev_raid.sh@395 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:15:51.474 21:38:11 -- bdev/bdev_raid.sh@395 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:15:51.732 21:38:12 -- bdev/bdev_raid.sh@395 -- # '[' false == true ']' 00:15:51.732 21:38:12 -- bdev/bdev_raid.sh@401 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
-s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:15:51.732 21:38:12 -- common/autotest_common.sh@650 -- # local es=0 00:15:51.732 21:38:12 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:15:51.732 21:38:12 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:51.732 21:38:12 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:51.732 21:38:12 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:51.732 21:38:12 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:51.732 21:38:12 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:51.732 21:38:12 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:51.732 21:38:12 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:51.732 21:38:12 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:15:51.732 21:38:12 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:15:51.991 [2024-12-06 21:38:12.298336] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:15:51.991 [2024-12-06 21:38:12.300633] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:15:51.991 [2024-12-06 21:38:12.300707] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:15:51.991 [2024-12-06 21:38:12.300783] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc1 00:15:51.991 [2024-12-06 21:38:12.300850] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc2 00:15:51.991 [2024-12-06 21:38:12.300878] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc3 00:15:51.991 [2024-12-06 21:38:12.300897] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:51.991 [2024-12-06 21:38:12.300933] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000008d80 name raid_bdev1, state configuring 00:15:51.991 request: 00:15:51.991 { 00:15:51.991 "name": "raid_bdev1", 00:15:51.991 "raid_level": "raid0", 00:15:51.991 "base_bdevs": [ 00:15:51.991 "malloc1", 00:15:51.991 "malloc2", 00:15:51.991 "malloc3" 00:15:51.991 ], 00:15:51.991 "superblock": false, 00:15:51.991 "strip_size_kb": 64, 00:15:51.991 "method": "bdev_raid_create", 00:15:51.991 "req_id": 1 00:15:51.991 } 00:15:51.991 Got JSON-RPC error response 00:15:51.991 response: 00:15:51.991 { 00:15:51.991 "code": -17, 00:15:51.991 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:15:51.991 } 00:15:51.991 21:38:12 -- common/autotest_common.sh@653 -- # es=1 00:15:51.991 21:38:12 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:15:51.991 21:38:12 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:15:51.991 21:38:12 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:15:51.991 21:38:12 -- bdev/bdev_raid.sh@403 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 
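The NOT wrapper above asserts that rebuilding raid_bdev1 from malloc bdevs that already carry a RAID superblock must fail, and the captured JSON-RPC exchange shows exactly that: a bdev_raid_create request answered with code -17, "File exists". Distilled to a hand-run check, assuming the same socket:

  if rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 \
         -b 'malloc1 malloc2 malloc3' -n raid_bdev1; then
      echo 'duplicate raid_bdev1 creation unexpectedly succeeded' >&2
      exit 1
  fi   # rpc.py exits non-zero on the expected -17 error, so the test proceeds

The bdev_raid_get_bdevs/jq probe that follows then verifies no stray raid bdev was left behind by the failed create.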
00:15:51.991 21:38:12 -- bdev/bdev_raid.sh@403 -- # jq -r '.[]' 00:15:52.249 21:38:12 -- bdev/bdev_raid.sh@403 -- # raid_bdev= 00:15:52.249 21:38:12 -- bdev/bdev_raid.sh@404 -- # '[' -n '' ']' 00:15:52.249 21:38:12 -- bdev/bdev_raid.sh@409 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:15:52.507 [2024-12-06 21:38:12.794413] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:15:52.507 [2024-12-06 21:38:12.794519] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:52.507 [2024-12-06 21:38:12.794546] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000009380 00:15:52.507 [2024-12-06 21:38:12.794562] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:52.507 [2024-12-06 21:38:12.796998] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:52.507 [2024-12-06 21:38:12.797052] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:15:52.507 [2024-12-06 21:38:12.797144] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1 00:15:52.507 [2024-12-06 21:38:12.797204] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:15:52.507 pt1 00:15:52.507 21:38:12 -- bdev/bdev_raid.sh@412 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 3 00:15:52.507 21:38:12 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:15:52.507 21:38:12 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:15:52.507 21:38:12 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:15:52.507 21:38:12 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:15:52.507 21:38:12 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:15:52.507 21:38:12 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:52.507 21:38:12 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:52.507 21:38:12 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:52.507 21:38:12 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:52.507 21:38:12 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:52.507 21:38:12 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:52.766 21:38:13 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:52.766 "name": "raid_bdev1", 00:15:52.766 "uuid": "1f433b1a-8a64-419b-a56f-f6fe37cfcfb9", 00:15:52.766 "strip_size_kb": 64, 00:15:52.766 "state": "configuring", 00:15:52.766 "raid_level": "raid0", 00:15:52.766 "superblock": true, 00:15:52.766 "num_base_bdevs": 3, 00:15:52.766 "num_base_bdevs_discovered": 1, 00:15:52.766 "num_base_bdevs_operational": 3, 00:15:52.766 "base_bdevs_list": [ 00:15:52.766 { 00:15:52.766 "name": "pt1", 00:15:52.766 "uuid": "04332b06-799e-5e52-b84b-05c965ee9444", 00:15:52.766 "is_configured": true, 00:15:52.766 "data_offset": 2048, 00:15:52.766 "data_size": 63488 00:15:52.766 }, 00:15:52.766 { 00:15:52.766 "name": null, 00:15:52.766 "uuid": "8b1fda48-ad9f-5930-a8b8-2076e231b02d", 00:15:52.766 "is_configured": false, 00:15:52.766 "data_offset": 2048, 00:15:52.766 "data_size": 63488 00:15:52.766 }, 00:15:52.766 { 00:15:52.766 "name": null, 00:15:52.766 "uuid": "eb5af325-db15-5793-a9ca-c6434e5e9a37", 00:15:52.766 "is_configured": false, 00:15:52.766 "data_offset": 2048, 00:15:52.766 "data_size": 63488 
00:15:52.766 } 00:15:52.766 ] 00:15:52.766 }' 00:15:52.766 21:38:13 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:52.766 21:38:13 -- common/autotest_common.sh@10 -- # set +x 00:15:53.024 21:38:13 -- bdev/bdev_raid.sh@414 -- # '[' 3 -gt 2 ']' 00:15:53.024 21:38:13 -- bdev/bdev_raid.sh@416 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:53.024 [2024-12-06 21:38:13.490576] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:53.024 [2024-12-06 21:38:13.490651] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:53.024 [2024-12-06 21:38:13.490678] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000009c80 00:15:53.024 [2024-12-06 21:38:13.490694] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:53.024 [2024-12-06 21:38:13.491162] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:53.024 [2024-12-06 21:38:13.491195] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:53.024 [2024-12-06 21:38:13.491299] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:15:53.024 [2024-12-06 21:38:13.491327] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:53.024 pt2 00:15:53.024 21:38:13 -- bdev/bdev_raid.sh@417 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:15:53.282 [2024-12-06 21:38:13.742619] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:15:53.283 21:38:13 -- bdev/bdev_raid.sh@418 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 3 00:15:53.283 21:38:13 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:15:53.283 21:38:13 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:15:53.283 21:38:13 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:15:53.283 21:38:13 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:15:53.283 21:38:13 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:15:53.283 21:38:13 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:53.283 21:38:13 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:53.283 21:38:13 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:53.283 21:38:13 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:53.283 21:38:13 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:53.283 21:38:13 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:53.540 21:38:13 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:53.541 "name": "raid_bdev1", 00:15:53.541 "uuid": "1f433b1a-8a64-419b-a56f-f6fe37cfcfb9", 00:15:53.541 "strip_size_kb": 64, 00:15:53.541 "state": "configuring", 00:15:53.541 "raid_level": "raid0", 00:15:53.541 "superblock": true, 00:15:53.541 "num_base_bdevs": 3, 00:15:53.541 "num_base_bdevs_discovered": 1, 00:15:53.541 "num_base_bdevs_operational": 3, 00:15:53.541 "base_bdevs_list": [ 00:15:53.541 { 00:15:53.541 "name": "pt1", 00:15:53.541 "uuid": "04332b06-799e-5e52-b84b-05c965ee9444", 00:15:53.541 "is_configured": true, 00:15:53.541 "data_offset": 2048, 00:15:53.541 "data_size": 63488 00:15:53.541 }, 00:15:53.541 { 00:15:53.541 "name": null, 00:15:53.541 "uuid": "8b1fda48-ad9f-5930-a8b8-2076e231b02d", 00:15:53.541 
"is_configured": false, 00:15:53.541 "data_offset": 2048, 00:15:53.541 "data_size": 63488 00:15:53.541 }, 00:15:53.541 { 00:15:53.541 "name": null, 00:15:53.541 "uuid": "eb5af325-db15-5793-a9ca-c6434e5e9a37", 00:15:53.541 "is_configured": false, 00:15:53.541 "data_offset": 2048, 00:15:53.541 "data_size": 63488 00:15:53.541 } 00:15:53.541 ] 00:15:53.541 }' 00:15:53.541 21:38:13 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:53.541 21:38:13 -- common/autotest_common.sh@10 -- # set +x 00:15:53.799 21:38:14 -- bdev/bdev_raid.sh@422 -- # (( i = 1 )) 00:15:53.799 21:38:14 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:15:53.799 21:38:14 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:54.057 [2024-12-06 21:38:14.406819] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:54.057 [2024-12-06 21:38:14.406897] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:54.057 [2024-12-06 21:38:14.406926] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000009f80 00:15:54.057 [2024-12-06 21:38:14.406939] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:54.057 [2024-12-06 21:38:14.407405] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:54.057 [2024-12-06 21:38:14.407427] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:54.057 [2024-12-06 21:38:14.407551] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:15:54.057 [2024-12-06 21:38:14.407577] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:54.057 pt2 00:15:54.057 21:38:14 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:15:54.057 21:38:14 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:15:54.057 21:38:14 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:15:54.315 [2024-12-06 21:38:14.666918] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:15:54.315 [2024-12-06 21:38:14.666987] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:54.315 [2024-12-06 21:38:14.667014] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x51600000a280 00:15:54.315 [2024-12-06 21:38:14.667027] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:54.315 [2024-12-06 21:38:14.667539] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:54.315 [2024-12-06 21:38:14.667569] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:15:54.315 [2024-12-06 21:38:14.667681] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3 00:15:54.315 [2024-12-06 21:38:14.667722] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:15:54.315 [2024-12-06 21:38:14.667886] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x516000009980 00:15:54.315 [2024-12-06 21:38:14.667899] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:15:54.315 [2024-12-06 21:38:14.667996] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000005790 00:15:54.315 [2024-12-06 
21:38:14.668375] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x516000009980 00:15:54.315 [2024-12-06 21:38:14.668395] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x516000009980 00:15:54.315 [2024-12-06 21:38:14.668588] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:54.315 pt3 00:15:54.315 21:38:14 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:15:54.315 21:38:14 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:15:54.315 21:38:14 -- bdev/bdev_raid.sh@427 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:15:54.315 21:38:14 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:15:54.315 21:38:14 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:15:54.315 21:38:14 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:15:54.315 21:38:14 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:15:54.315 21:38:14 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:15:54.315 21:38:14 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:54.315 21:38:14 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:54.315 21:38:14 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:54.315 21:38:14 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:54.315 21:38:14 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:54.315 21:38:14 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:54.573 21:38:14 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:54.573 "name": "raid_bdev1", 00:15:54.573 "uuid": "1f433b1a-8a64-419b-a56f-f6fe37cfcfb9", 00:15:54.573 "strip_size_kb": 64, 00:15:54.573 "state": "online", 00:15:54.573 "raid_level": "raid0", 00:15:54.573 "superblock": true, 00:15:54.573 "num_base_bdevs": 3, 00:15:54.573 "num_base_bdevs_discovered": 3, 00:15:54.573 "num_base_bdevs_operational": 3, 00:15:54.573 "base_bdevs_list": [ 00:15:54.573 { 00:15:54.573 "name": "pt1", 00:15:54.573 "uuid": "04332b06-799e-5e52-b84b-05c965ee9444", 00:15:54.573 "is_configured": true, 00:15:54.573 "data_offset": 2048, 00:15:54.573 "data_size": 63488 00:15:54.573 }, 00:15:54.573 { 00:15:54.573 "name": "pt2", 00:15:54.573 "uuid": "8b1fda48-ad9f-5930-a8b8-2076e231b02d", 00:15:54.573 "is_configured": true, 00:15:54.573 "data_offset": 2048, 00:15:54.573 "data_size": 63488 00:15:54.573 }, 00:15:54.573 { 00:15:54.573 "name": "pt3", 00:15:54.573 "uuid": "eb5af325-db15-5793-a9ca-c6434e5e9a37", 00:15:54.574 "is_configured": true, 00:15:54.574 "data_offset": 2048, 00:15:54.574 "data_size": 63488 00:15:54.574 } 00:15:54.574 ] 00:15:54.574 }' 00:15:54.574 21:38:14 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:54.574 21:38:14 -- common/autotest_common.sh@10 -- # set +x 00:15:54.831 21:38:15 -- bdev/bdev_raid.sh@430 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:15:54.831 21:38:15 -- bdev/bdev_raid.sh@430 -- # jq -r '.[] | .uuid' 00:15:55.089 [2024-12-06 21:38:15.435504] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:55.089 21:38:15 -- bdev/bdev_raid.sh@430 -- # '[' 1f433b1a-8a64-419b-a56f-f6fe37cfcfb9 '!=' 1f433b1a-8a64-419b-a56f-f6fe37cfcfb9 ']' 00:15:55.089 21:38:15 -- bdev/bdev_raid.sh@434 -- # has_redundancy raid0 00:15:55.089 21:38:15 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:15:55.089 21:38:15 -- bdev/bdev_raid.sh@197 -- # return 1 00:15:55.089 
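With pt1-pt3 recreated on top of their superblock-bearing malloc disks, raid_bdev1 comes back online and the test pulls its UUID out of the target to prove identity was preserved across reassembly. Distilled, assuming the same socket:

  uuid=$(rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 | jq -r '.[] | .uuid')
  test "$uuid" = "1f433b1a-8a64-419b-a56f-f6fe37cfcfb9" || exit 1   # same UUID as before teardown

Because raid0 offers no redundancy, has_redundancy returns 1 immediately afterwards, so the run skips the degraded-array cases and goes straight to killprocess.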
21:38:15 -- bdev/bdev_raid.sh@511 -- # killprocess 71913 00:15:55.089 21:38:15 -- common/autotest_common.sh@936 -- # '[' -z 71913 ']' 00:15:55.089 21:38:15 -- common/autotest_common.sh@940 -- # kill -0 71913 00:15:55.089 21:38:15 -- common/autotest_common.sh@941 -- # uname 00:15:55.089 21:38:15 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:55.089 21:38:15 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 71913 00:15:55.089 21:38:15 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:15:55.089 killing process with pid 71913 00:15:55.089 21:38:15 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:15:55.089 21:38:15 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 71913' 00:15:55.089 21:38:15 -- common/autotest_common.sh@955 -- # kill 71913 00:15:55.089 [2024-12-06 21:38:15.485036] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:55.089 21:38:15 -- common/autotest_common.sh@960 -- # wait 71913 00:15:55.089 [2024-12-06 21:38:15.485117] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:55.089 [2024-12-06 21:38:15.485179] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:55.089 [2024-12-06 21:38:15.485195] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000009980 name raid_bdev1, state offline 00:15:55.360 [2024-12-06 21:38:15.703893] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:56.309 21:38:16 -- bdev/bdev_raid.sh@513 -- # return 0 00:15:56.309 00:15:56.309 real 0m9.279s 00:15:56.309 user 0m15.315s 00:15:56.309 sys 0m1.261s 00:15:56.309 21:38:16 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:15:56.309 21:38:16 -- common/autotest_common.sh@10 -- # set +x 00:15:56.309 ************************************ 00:15:56.309 END TEST raid_superblock_test 00:15:56.309 ************************************ 00:15:56.309 21:38:16 -- bdev/bdev_raid.sh@726 -- # for level in raid0 concat raid1 00:15:56.309 21:38:16 -- bdev/bdev_raid.sh@727 -- # run_test raid_state_function_test raid_state_function_test concat 3 false 00:15:56.309 21:38:16 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:15:56.309 21:38:16 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:56.309 21:38:16 -- common/autotest_common.sh@10 -- # set +x 00:15:56.309 ************************************ 00:15:56.309 START TEST raid_state_function_test 00:15:56.309 ************************************ 00:15:56.309 21:38:16 -- common/autotest_common.sh@1114 -- # raid_state_function_test concat 3 false 00:15:56.309 21:38:16 -- bdev/bdev_raid.sh@202 -- # local raid_level=concat 00:15:56.309 21:38:16 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=3 00:15:56.309 21:38:16 -- bdev/bdev_raid.sh@204 -- # local superblock=false 00:15:56.309 21:38:16 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:15:56.309 21:38:16 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:15:56.309 21:38:16 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:15:56.309 21:38:16 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev1 00:15:56.309 21:38:16 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:15:56.309 21:38:16 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:15:56.309 21:38:16 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev2 00:15:56.309 21:38:16 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:15:56.309 21:38:16 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:15:56.309 21:38:16 -- bdev/bdev_raid.sh@208 -- # echo 
BaseBdev3 00:15:56.309 21:38:16 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:15:56.309 21:38:16 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:15:56.309 21:38:16 -- bdev/bdev_raid.sh@206 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:15:56.309 21:38:16 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:15:56.309 21:38:16 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:15:56.309 21:38:16 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:15:56.309 21:38:16 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:15:56.309 21:38:16 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:15:56.309 21:38:16 -- bdev/bdev_raid.sh@212 -- # '[' concat '!=' raid1 ']' 00:15:56.309 21:38:16 -- bdev/bdev_raid.sh@213 -- # strip_size=64 00:15:56.309 21:38:16 -- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64' 00:15:56.309 21:38:16 -- bdev/bdev_raid.sh@219 -- # '[' false = true ']' 00:15:56.309 21:38:16 -- bdev/bdev_raid.sh@222 -- # superblock_create_arg= 00:15:56.568 Process raid pid: 72189 00:15:56.568 21:38:16 -- bdev/bdev_raid.sh@226 -- # raid_pid=72189 00:15:56.569 21:38:16 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 72189' 00:15:56.569 21:38:16 -- bdev/bdev_raid.sh@228 -- # waitforlisten 72189 /var/tmp/spdk-raid.sock 00:15:56.569 21:38:16 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:15:56.569 21:38:16 -- common/autotest_common.sh@829 -- # '[' -z 72189 ']' 00:15:56.569 21:38:16 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:15:56.569 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:15:56.569 21:38:16 -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:56.569 21:38:16 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:15:56.569 21:38:16 -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:56.569 21:38:16 -- common/autotest_common.sh@10 -- # set +x 00:15:56.569 [2024-12-06 21:38:16.878840] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
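The concat-level state-function test that starts here drives the same bare bdev_svc app over a private RPC socket as the raid0 runs above. In outline, assuming the paths from this log:

  /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid &
  raid_pid=$!
  # after waitforlisten, a concat array may be declared before any of its base bdevs exist;
  # it then parks in the 'configuring' state until BaseBdev1-3 are created and claimed
  rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat \
      -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid

The RPC trace below shows exactly this: each base bdev "doesn't exist now", Existed_Raid reports state "configuring" with zero discovered bases, and creation of BaseBdev1 follows.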
00:15:56.569 [2024-12-06 21:38:16.879107] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:56.569 [2024-12-06 21:38:17.052601] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:56.828 [2024-12-06 21:38:17.227531] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:57.087 [2024-12-06 21:38:17.403616] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:57.346 21:38:17 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:57.346 21:38:17 -- common/autotest_common.sh@862 -- # return 0 00:15:57.346 21:38:17 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:15:57.605 [2024-12-06 21:38:17.995646] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:57.605 [2024-12-06 21:38:17.995717] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:57.605 [2024-12-06 21:38:17.995731] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:57.605 [2024-12-06 21:38:17.995745] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:57.605 [2024-12-06 21:38:17.995757] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:15:57.605 [2024-12-06 21:38:17.995770] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:57.605 21:38:18 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:15:57.605 21:38:18 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:57.605 21:38:18 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:15:57.605 21:38:18 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:15:57.605 21:38:18 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:15:57.605 21:38:18 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:15:57.605 21:38:18 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:57.605 21:38:18 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:57.605 21:38:18 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:57.605 21:38:18 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:57.605 21:38:18 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:57.605 21:38:18 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:57.864 21:38:18 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:57.864 "name": "Existed_Raid", 00:15:57.864 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:57.864 "strip_size_kb": 64, 00:15:57.864 "state": "configuring", 00:15:57.864 "raid_level": "concat", 00:15:57.864 "superblock": false, 00:15:57.864 "num_base_bdevs": 3, 00:15:57.864 "num_base_bdevs_discovered": 0, 00:15:57.864 "num_base_bdevs_operational": 3, 00:15:57.864 "base_bdevs_list": [ 00:15:57.864 { 00:15:57.864 "name": "BaseBdev1", 00:15:57.864 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:57.864 "is_configured": false, 00:15:57.864 "data_offset": 0, 00:15:57.864 "data_size": 0 00:15:57.864 }, 00:15:57.864 { 00:15:57.864 "name": "BaseBdev2", 00:15:57.864 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:15:57.864 "is_configured": false, 00:15:57.864 "data_offset": 0, 00:15:57.864 "data_size": 0 00:15:57.864 }, 00:15:57.864 { 00:15:57.864 "name": "BaseBdev3", 00:15:57.864 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:57.864 "is_configured": false, 00:15:57.864 "data_offset": 0, 00:15:57.864 "data_size": 0 00:15:57.864 } 00:15:57.864 ] 00:15:57.864 }' 00:15:57.864 21:38:18 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:57.864 21:38:18 -- common/autotest_common.sh@10 -- # set +x 00:15:58.122 21:38:18 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:15:58.380 [2024-12-06 21:38:18.783768] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:58.380 [2024-12-06 21:38:18.783835] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000006380 name Existed_Raid, state configuring 00:15:58.380 21:38:18 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:15:58.639 [2024-12-06 21:38:19.035870] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:58.639 [2024-12-06 21:38:19.035939] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:58.639 [2024-12-06 21:38:19.035952] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:58.639 [2024-12-06 21:38:19.035968] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:58.639 [2024-12-06 21:38:19.035976] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:15:58.639 [2024-12-06 21:38:19.035989] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:58.639 21:38:19 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:15:58.898 BaseBdev1 00:15:58.898 [2024-12-06 21:38:19.276753] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:58.898 21:38:19 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:15:58.898 21:38:19 -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:15:58.898 21:38:19 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:15:58.898 21:38:19 -- common/autotest_common.sh@899 -- # local i 00:15:58.898 21:38:19 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:15:58.898 21:38:19 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:15:58.899 21:38:19 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:59.157 21:38:19 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:59.416 [ 00:15:59.416 { 00:15:59.416 "name": "BaseBdev1", 00:15:59.416 "aliases": [ 00:15:59.416 "db8fd4e2-77e3-40fd-8621-b8972dfc1f32" 00:15:59.416 ], 00:15:59.416 "product_name": "Malloc disk", 00:15:59.416 "block_size": 512, 00:15:59.416 "num_blocks": 65536, 00:15:59.416 "uuid": "db8fd4e2-77e3-40fd-8621-b8972dfc1f32", 00:15:59.416 "assigned_rate_limits": { 00:15:59.416 "rw_ios_per_sec": 0, 00:15:59.416 "rw_mbytes_per_sec": 0, 00:15:59.416 "r_mbytes_per_sec": 0, 00:15:59.416 "w_mbytes_per_sec": 
0 00:15:59.416 }, 00:15:59.416 "claimed": true, 00:15:59.416 "claim_type": "exclusive_write", 00:15:59.416 "zoned": false, 00:15:59.416 "supported_io_types": { 00:15:59.416 "read": true, 00:15:59.416 "write": true, 00:15:59.416 "unmap": true, 00:15:59.416 "write_zeroes": true, 00:15:59.416 "flush": true, 00:15:59.416 "reset": true, 00:15:59.416 "compare": false, 00:15:59.416 "compare_and_write": false, 00:15:59.416 "abort": true, 00:15:59.416 "nvme_admin": false, 00:15:59.416 "nvme_io": false 00:15:59.416 }, 00:15:59.416 "memory_domains": [ 00:15:59.416 { 00:15:59.416 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:59.416 "dma_device_type": 2 00:15:59.416 } 00:15:59.416 ], 00:15:59.416 "driver_specific": {} 00:15:59.416 } 00:15:59.416 ] 00:15:59.416 21:38:19 -- common/autotest_common.sh@905 -- # return 0 00:15:59.416 21:38:19 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:15:59.416 21:38:19 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:59.416 21:38:19 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:15:59.416 21:38:19 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:15:59.416 21:38:19 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:15:59.416 21:38:19 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:15:59.416 21:38:19 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:59.417 21:38:19 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:59.417 21:38:19 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:59.417 21:38:19 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:59.417 21:38:19 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:59.417 21:38:19 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:59.675 21:38:19 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:59.675 "name": "Existed_Raid", 00:15:59.675 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:59.675 "strip_size_kb": 64, 00:15:59.675 "state": "configuring", 00:15:59.675 "raid_level": "concat", 00:15:59.675 "superblock": false, 00:15:59.675 "num_base_bdevs": 3, 00:15:59.675 "num_base_bdevs_discovered": 1, 00:15:59.675 "num_base_bdevs_operational": 3, 00:15:59.675 "base_bdevs_list": [ 00:15:59.675 { 00:15:59.675 "name": "BaseBdev1", 00:15:59.675 "uuid": "db8fd4e2-77e3-40fd-8621-b8972dfc1f32", 00:15:59.675 "is_configured": true, 00:15:59.675 "data_offset": 0, 00:15:59.675 "data_size": 65536 00:15:59.675 }, 00:15:59.675 { 00:15:59.675 "name": "BaseBdev2", 00:15:59.675 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:59.675 "is_configured": false, 00:15:59.675 "data_offset": 0, 00:15:59.675 "data_size": 0 00:15:59.675 }, 00:15:59.675 { 00:15:59.675 "name": "BaseBdev3", 00:15:59.675 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:59.675 "is_configured": false, 00:15:59.675 "data_offset": 0, 00:15:59.675 "data_size": 0 00:15:59.675 } 00:15:59.675 ] 00:15:59.675 }' 00:15:59.675 21:38:19 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:59.676 21:38:19 -- common/autotest_common.sh@10 -- # set +x 00:15:59.934 21:38:20 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:15:59.934 [2024-12-06 21:38:20.425045] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:59.934 [2024-12-06 21:38:20.425097] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x516000006680 name Existed_Raid, state configuring 00:16:00.194 21:38:20 -- bdev/bdev_raid.sh@244 -- # '[' false = true ']' 00:16:00.194 21:38:20 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:16:00.194 [2024-12-06 21:38:20.625151] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:00.194 [2024-12-06 21:38:20.627096] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:00.194 [2024-12-06 21:38:20.627176] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:00.194 [2024-12-06 21:38:20.627206] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:00.194 [2024-12-06 21:38:20.627221] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:00.194 21:38:20 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:16:00.194 21:38:20 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:16:00.194 21:38:20 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:16:00.194 21:38:20 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:00.194 21:38:20 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:16:00.194 21:38:20 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:16:00.194 21:38:20 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:16:00.194 21:38:20 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:16:00.194 21:38:20 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:00.194 21:38:20 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:00.194 21:38:20 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:00.194 21:38:20 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:00.194 21:38:20 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:00.194 21:38:20 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:00.454 21:38:20 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:00.454 "name": "Existed_Raid", 00:16:00.454 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:00.454 "strip_size_kb": 64, 00:16:00.454 "state": "configuring", 00:16:00.454 "raid_level": "concat", 00:16:00.454 "superblock": false, 00:16:00.454 "num_base_bdevs": 3, 00:16:00.454 "num_base_bdevs_discovered": 1, 00:16:00.454 "num_base_bdevs_operational": 3, 00:16:00.454 "base_bdevs_list": [ 00:16:00.454 { 00:16:00.454 "name": "BaseBdev1", 00:16:00.454 "uuid": "db8fd4e2-77e3-40fd-8621-b8972dfc1f32", 00:16:00.454 "is_configured": true, 00:16:00.454 "data_offset": 0, 00:16:00.454 "data_size": 65536 00:16:00.454 }, 00:16:00.454 { 00:16:00.454 "name": "BaseBdev2", 00:16:00.454 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:00.454 "is_configured": false, 00:16:00.454 "data_offset": 0, 00:16:00.454 "data_size": 0 00:16:00.454 }, 00:16:00.454 { 00:16:00.454 "name": "BaseBdev3", 00:16:00.454 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:00.454 "is_configured": false, 00:16:00.454 "data_offset": 0, 00:16:00.454 "data_size": 0 00:16:00.454 } 00:16:00.454 ] 00:16:00.454 }' 00:16:00.454 21:38:20 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:00.454 21:38:20 -- common/autotest_common.sh@10 -- # set +x 00:16:00.713 21:38:21 -- bdev/bdev_raid.sh@256 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:16:00.972 [2024-12-06 21:38:21.453879] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:00.972 BaseBdev2 00:16:01.231 21:38:21 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:16:01.231 21:38:21 -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:16:01.231 21:38:21 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:16:01.231 21:38:21 -- common/autotest_common.sh@899 -- # local i 00:16:01.231 21:38:21 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:16:01.231 21:38:21 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:16:01.231 21:38:21 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:01.231 21:38:21 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:16:01.490 [ 00:16:01.490 { 00:16:01.490 "name": "BaseBdev2", 00:16:01.490 "aliases": [ 00:16:01.490 "40a13032-6a52-4edf-8050-1c1bb334623a" 00:16:01.490 ], 00:16:01.490 "product_name": "Malloc disk", 00:16:01.490 "block_size": 512, 00:16:01.490 "num_blocks": 65536, 00:16:01.490 "uuid": "40a13032-6a52-4edf-8050-1c1bb334623a", 00:16:01.490 "assigned_rate_limits": { 00:16:01.490 "rw_ios_per_sec": 0, 00:16:01.490 "rw_mbytes_per_sec": 0, 00:16:01.490 "r_mbytes_per_sec": 0, 00:16:01.490 "w_mbytes_per_sec": 0 00:16:01.490 }, 00:16:01.490 "claimed": true, 00:16:01.490 "claim_type": "exclusive_write", 00:16:01.490 "zoned": false, 00:16:01.490 "supported_io_types": { 00:16:01.490 "read": true, 00:16:01.490 "write": true, 00:16:01.490 "unmap": true, 00:16:01.490 "write_zeroes": true, 00:16:01.490 "flush": true, 00:16:01.490 "reset": true, 00:16:01.490 "compare": false, 00:16:01.490 "compare_and_write": false, 00:16:01.490 "abort": true, 00:16:01.490 "nvme_admin": false, 00:16:01.490 "nvme_io": false 00:16:01.490 }, 00:16:01.490 "memory_domains": [ 00:16:01.490 { 00:16:01.490 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:01.490 "dma_device_type": 2 00:16:01.490 } 00:16:01.490 ], 00:16:01.490 "driver_specific": {} 00:16:01.490 } 00:16:01.490 ] 00:16:01.490 21:38:21 -- common/autotest_common.sh@905 -- # return 0 00:16:01.490 21:38:21 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:16:01.490 21:38:21 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:16:01.490 21:38:21 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:16:01.490 21:38:21 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:01.490 21:38:21 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:16:01.490 21:38:21 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:16:01.490 21:38:21 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:16:01.490 21:38:21 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:16:01.490 21:38:21 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:01.490 21:38:21 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:01.490 21:38:21 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:01.490 21:38:21 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:01.490 21:38:21 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:01.490 21:38:21 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 
00:16:01.748 21:38:22 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:01.748 "name": "Existed_Raid", 00:16:01.748 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:01.748 "strip_size_kb": 64, 00:16:01.748 "state": "configuring", 00:16:01.748 "raid_level": "concat", 00:16:01.748 "superblock": false, 00:16:01.748 "num_base_bdevs": 3, 00:16:01.748 "num_base_bdevs_discovered": 2, 00:16:01.748 "num_base_bdevs_operational": 3, 00:16:01.748 "base_bdevs_list": [ 00:16:01.748 { 00:16:01.748 "name": "BaseBdev1", 00:16:01.748 "uuid": "db8fd4e2-77e3-40fd-8621-b8972dfc1f32", 00:16:01.748 "is_configured": true, 00:16:01.748 "data_offset": 0, 00:16:01.748 "data_size": 65536 00:16:01.748 }, 00:16:01.748 { 00:16:01.748 "name": "BaseBdev2", 00:16:01.748 "uuid": "40a13032-6a52-4edf-8050-1c1bb334623a", 00:16:01.748 "is_configured": true, 00:16:01.748 "data_offset": 0, 00:16:01.748 "data_size": 65536 00:16:01.748 }, 00:16:01.748 { 00:16:01.748 "name": "BaseBdev3", 00:16:01.748 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:01.748 "is_configured": false, 00:16:01.748 "data_offset": 0, 00:16:01.748 "data_size": 0 00:16:01.748 } 00:16:01.748 ] 00:16:01.748 }' 00:16:01.748 21:38:22 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:01.749 21:38:22 -- common/autotest_common.sh@10 -- # set +x 00:16:02.007 21:38:22 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:16:02.266 [2024-12-06 21:38:22.739875] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:02.266 [2024-12-06 21:38:22.739938] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x516000006f80 00:16:02.266 [2024-12-06 21:38:22.739955] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:16:02.266 [2024-12-06 21:38:22.740077] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d0000056c0 00:16:02.266 [2024-12-06 21:38:22.740495] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x516000006f80 00:16:02.266 [2024-12-06 21:38:22.740529] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x516000006f80 00:16:02.266 [2024-12-06 21:38:22.740792] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:02.266 BaseBdev3 00:16:02.266 21:38:22 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3 00:16:02.266 21:38:22 -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:16:02.266 21:38:22 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:16:02.266 21:38:22 -- common/autotest_common.sh@899 -- # local i 00:16:02.266 21:38:22 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:16:02.266 21:38:22 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:16:02.266 21:38:22 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:02.525 21:38:22 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:16:02.785 [ 00:16:02.785 { 00:16:02.785 "name": "BaseBdev3", 00:16:02.785 "aliases": [ 00:16:02.785 "e3a2c157-0b2a-4f17-b1c5-2c95842d8bf6" 00:16:02.785 ], 00:16:02.785 "product_name": "Malloc disk", 00:16:02.785 "block_size": 512, 00:16:02.785 "num_blocks": 65536, 00:16:02.785 "uuid": "e3a2c157-0b2a-4f17-b1c5-2c95842d8bf6", 00:16:02.785 "assigned_rate_limits": { 00:16:02.785 
"rw_ios_per_sec": 0, 00:16:02.785 "rw_mbytes_per_sec": 0, 00:16:02.785 "r_mbytes_per_sec": 0, 00:16:02.785 "w_mbytes_per_sec": 0 00:16:02.785 }, 00:16:02.785 "claimed": true, 00:16:02.785 "claim_type": "exclusive_write", 00:16:02.785 "zoned": false, 00:16:02.785 "supported_io_types": { 00:16:02.785 "read": true, 00:16:02.785 "write": true, 00:16:02.785 "unmap": true, 00:16:02.785 "write_zeroes": true, 00:16:02.785 "flush": true, 00:16:02.785 "reset": true, 00:16:02.785 "compare": false, 00:16:02.785 "compare_and_write": false, 00:16:02.785 "abort": true, 00:16:02.785 "nvme_admin": false, 00:16:02.785 "nvme_io": false 00:16:02.785 }, 00:16:02.785 "memory_domains": [ 00:16:02.785 { 00:16:02.785 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:02.785 "dma_device_type": 2 00:16:02.785 } 00:16:02.785 ], 00:16:02.785 "driver_specific": {} 00:16:02.785 } 00:16:02.785 ] 00:16:02.785 21:38:23 -- common/autotest_common.sh@905 -- # return 0 00:16:02.785 21:38:23 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:16:02.785 21:38:23 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:16:02.785 21:38:23 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:16:02.785 21:38:23 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:02.785 21:38:23 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:16:02.785 21:38:23 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:16:02.785 21:38:23 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:16:02.785 21:38:23 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:16:02.785 21:38:23 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:02.785 21:38:23 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:02.785 21:38:23 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:02.785 21:38:23 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:02.785 21:38:23 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:02.785 21:38:23 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:03.046 21:38:23 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:03.046 "name": "Existed_Raid", 00:16:03.046 "uuid": "7f44c108-13e4-4f7c-af88-1b243afb6f5a", 00:16:03.046 "strip_size_kb": 64, 00:16:03.046 "state": "online", 00:16:03.046 "raid_level": "concat", 00:16:03.046 "superblock": false, 00:16:03.046 "num_base_bdevs": 3, 00:16:03.046 "num_base_bdevs_discovered": 3, 00:16:03.046 "num_base_bdevs_operational": 3, 00:16:03.046 "base_bdevs_list": [ 00:16:03.046 { 00:16:03.046 "name": "BaseBdev1", 00:16:03.046 "uuid": "db8fd4e2-77e3-40fd-8621-b8972dfc1f32", 00:16:03.046 "is_configured": true, 00:16:03.046 "data_offset": 0, 00:16:03.046 "data_size": 65536 00:16:03.046 }, 00:16:03.046 { 00:16:03.046 "name": "BaseBdev2", 00:16:03.047 "uuid": "40a13032-6a52-4edf-8050-1c1bb334623a", 00:16:03.047 "is_configured": true, 00:16:03.047 "data_offset": 0, 00:16:03.047 "data_size": 65536 00:16:03.047 }, 00:16:03.047 { 00:16:03.047 "name": "BaseBdev3", 00:16:03.047 "uuid": "e3a2c157-0b2a-4f17-b1c5-2c95842d8bf6", 00:16:03.047 "is_configured": true, 00:16:03.047 "data_offset": 0, 00:16:03.047 "data_size": 65536 00:16:03.047 } 00:16:03.047 ] 00:16:03.047 }' 00:16:03.047 21:38:23 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:03.047 21:38:23 -- common/autotest_common.sh@10 -- # set +x 00:16:03.305 21:38:23 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_malloc_delete BaseBdev1 00:16:03.565 [2024-12-06 21:38:23.932322] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:03.565 [2024-12-06 21:38:23.932536] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:03.565 [2024-12-06 21:38:23.932736] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:03.565 21:38:24 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:16:03.565 21:38:24 -- bdev/bdev_raid.sh@264 -- # has_redundancy concat 00:16:03.565 21:38:24 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:16:03.565 21:38:24 -- bdev/bdev_raid.sh@197 -- # return 1 00:16:03.565 21:38:24 -- bdev/bdev_raid.sh@265 -- # expected_state=offline 00:16:03.565 21:38:24 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid offline concat 64 2 00:16:03.565 21:38:24 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:03.565 21:38:24 -- bdev/bdev_raid.sh@118 -- # local expected_state=offline 00:16:03.565 21:38:24 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:16:03.565 21:38:24 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:16:03.565 21:38:24 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:16:03.565 21:38:24 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:03.565 21:38:24 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:03.565 21:38:24 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:03.565 21:38:24 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:03.565 21:38:24 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:03.565 21:38:24 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:03.825 21:38:24 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:03.825 "name": "Existed_Raid", 00:16:03.825 "uuid": "7f44c108-13e4-4f7c-af88-1b243afb6f5a", 00:16:03.825 "strip_size_kb": 64, 00:16:03.825 "state": "offline", 00:16:03.825 "raid_level": "concat", 00:16:03.825 "superblock": false, 00:16:03.825 "num_base_bdevs": 3, 00:16:03.825 "num_base_bdevs_discovered": 2, 00:16:03.825 "num_base_bdevs_operational": 2, 00:16:03.825 "base_bdevs_list": [ 00:16:03.825 { 00:16:03.825 "name": null, 00:16:03.825 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:03.825 "is_configured": false, 00:16:03.825 "data_offset": 0, 00:16:03.825 "data_size": 65536 00:16:03.825 }, 00:16:03.825 { 00:16:03.825 "name": "BaseBdev2", 00:16:03.825 "uuid": "40a13032-6a52-4edf-8050-1c1bb334623a", 00:16:03.825 "is_configured": true, 00:16:03.825 "data_offset": 0, 00:16:03.825 "data_size": 65536 00:16:03.825 }, 00:16:03.825 { 00:16:03.825 "name": "BaseBdev3", 00:16:03.825 "uuid": "e3a2c157-0b2a-4f17-b1c5-2c95842d8bf6", 00:16:03.825 "is_configured": true, 00:16:03.825 "data_offset": 0, 00:16:03.825 "data_size": 65536 00:16:03.825 } 00:16:03.825 ] 00:16:03.825 }' 00:16:03.825 21:38:24 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:03.825 21:38:24 -- common/autotest_common.sh@10 -- # set +x 00:16:04.084 21:38:24 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:16:04.084 21:38:24 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:16:04.084 21:38:24 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:04.084 21:38:24 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:16:04.343 21:38:24 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:16:04.343 21:38:24 -- 
bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:04.343 21:38:24 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:16:04.602 [2024-12-06 21:38:24.920876] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:16:04.602 21:38:25 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:16:04.602 21:38:25 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:16:04.602 21:38:25 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:16:04.602 21:38:25 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:04.860 21:38:25 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:16:04.860 21:38:25 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:04.860 21:38:25 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:16:05.118 [2024-12-06 21:38:25.442133] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:16:05.118 [2024-12-06 21:38:25.442199] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000006f80 name Existed_Raid, state offline 00:16:05.118 21:38:25 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:16:05.118 21:38:25 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:16:05.118 21:38:25 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:05.118 21:38:25 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:16:05.377 21:38:25 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:16:05.377 21:38:25 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:16:05.377 21:38:25 -- bdev/bdev_raid.sh@287 -- # killprocess 72189 00:16:05.377 21:38:25 -- common/autotest_common.sh@936 -- # '[' -z 72189 ']' 00:16:05.377 21:38:25 -- common/autotest_common.sh@940 -- # kill -0 72189 00:16:05.377 21:38:25 -- common/autotest_common.sh@941 -- # uname 00:16:05.377 21:38:25 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:05.377 21:38:25 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 72189 00:16:05.377 killing process with pid 72189 00:16:05.377 21:38:25 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:16:05.377 21:38:25 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:16:05.377 21:38:25 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 72189' 00:16:05.377 21:38:25 -- common/autotest_common.sh@955 -- # kill 72189 00:16:05.377 [2024-12-06 21:38:25.777468] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:05.377 21:38:25 -- common/autotest_common.sh@960 -- # wait 72189 00:16:05.377 [2024-12-06 21:38:25.777622] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:06.315 ************************************ 00:16:06.315 END TEST raid_state_function_test 00:16:06.315 ************************************ 00:16:06.315 21:38:26 -- bdev/bdev_raid.sh@289 -- # return 0 00:16:06.315 00:16:06.315 real 0m10.002s 00:16:06.315 user 0m16.585s 00:16:06.315 sys 0m1.469s 00:16:06.315 21:38:26 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:16:06.315 21:38:26 -- common/autotest_common.sh@10 -- # set +x 00:16:06.575 21:38:26 -- bdev/bdev_raid.sh@728 -- # run_test raid_state_function_test_sb raid_state_function_test concat 3 true 00:16:06.575 21:38:26 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:16:06.575 
21:38:26 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:06.575 21:38:26 -- common/autotest_common.sh@10 -- # set +x 00:16:06.575 ************************************ 00:16:06.575 START TEST raid_state_function_test_sb 00:16:06.575 ************************************ 00:16:06.575 21:38:26 -- common/autotest_common.sh@1114 -- # raid_state_function_test concat 3 true 00:16:06.575 21:38:26 -- bdev/bdev_raid.sh@202 -- # local raid_level=concat 00:16:06.575 21:38:26 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=3 00:16:06.575 21:38:26 -- bdev/bdev_raid.sh@204 -- # local superblock=true 00:16:06.575 21:38:26 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:16:06.576 21:38:26 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:16:06.576 21:38:26 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:16:06.576 21:38:26 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev1 00:16:06.576 21:38:26 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:16:06.576 21:38:26 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:16:06.576 21:38:26 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev2 00:16:06.576 21:38:26 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:16:06.576 21:38:26 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:16:06.576 21:38:26 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev3 00:16:06.576 21:38:26 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:16:06.576 21:38:26 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:16:06.576 21:38:26 -- bdev/bdev_raid.sh@206 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:16:06.576 21:38:26 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:16:06.576 21:38:26 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:16:06.576 21:38:26 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:16:06.576 21:38:26 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:16:06.576 21:38:26 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:16:06.576 21:38:26 -- bdev/bdev_raid.sh@212 -- # '[' concat '!=' raid1 ']' 00:16:06.576 21:38:26 -- bdev/bdev_raid.sh@213 -- # strip_size=64 00:16:06.576 21:38:26 -- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64' 00:16:06.576 21:38:26 -- bdev/bdev_raid.sh@219 -- # '[' true = true ']' 00:16:06.576 21:38:26 -- bdev/bdev_raid.sh@220 -- # superblock_create_arg=-s 00:16:06.576 Process raid pid: 72529 00:16:06.576 21:38:26 -- bdev/bdev_raid.sh@226 -- # raid_pid=72529 00:16:06.576 21:38:26 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 72529' 00:16:06.576 21:38:26 -- bdev/bdev_raid.sh@228 -- # waitforlisten 72529 /var/tmp/spdk-raid.sock 00:16:06.576 21:38:26 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:16:06.576 21:38:26 -- common/autotest_common.sh@829 -- # '[' -z 72529 ']' 00:16:06.576 21:38:26 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:16:06.576 21:38:26 -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:06.576 21:38:26 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:16:06.576 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:16:06.576 21:38:26 -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:06.576 21:38:26 -- common/autotest_common.sh@10 -- # set +x 00:16:06.576 [2024-12-06 21:38:26.926432] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:16:06.576 [2024-12-06 21:38:26.926840] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:06.835 [2024-12-06 21:38:27.098793] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:06.835 [2024-12-06 21:38:27.270129] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:07.094 [2024-12-06 21:38:27.439077] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:07.678 21:38:27 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:07.678 21:38:27 -- common/autotest_common.sh@862 -- # return 0 00:16:07.678 21:38:27 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:16:07.678 [2024-12-06 21:38:28.086030] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:07.678 [2024-12-06 21:38:28.086101] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:07.678 [2024-12-06 21:38:28.086118] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:07.678 [2024-12-06 21:38:28.086132] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:07.679 [2024-12-06 21:38:28.086139] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:07.679 [2024-12-06 21:38:28.086150] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:07.679 21:38:28 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:16:07.679 21:38:28 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:07.679 21:38:28 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:16:07.679 21:38:28 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:16:07.679 21:38:28 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:16:07.679 21:38:28 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:16:07.679 21:38:28 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:07.679 21:38:28 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:07.679 21:38:28 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:07.679 21:38:28 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:07.679 21:38:28 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:07.679 21:38:28 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:07.944 21:38:28 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:07.944 "name": "Existed_Raid", 00:16:07.944 "uuid": "09dcb87e-1b29-4bc7-8f80-58a4e79bf9be", 00:16:07.944 "strip_size_kb": 64, 00:16:07.944 "state": "configuring", 00:16:07.944 "raid_level": "concat", 00:16:07.944 "superblock": true, 00:16:07.944 "num_base_bdevs": 3, 00:16:07.944 "num_base_bdevs_discovered": 0, 00:16:07.944 "num_base_bdevs_operational": 3, 00:16:07.944 "base_bdevs_list": [ 00:16:07.944 { 00:16:07.944 "name": "BaseBdev1", 00:16:07.944 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:07.944 "is_configured": false, 00:16:07.944 "data_offset": 0, 00:16:07.944 "data_size": 0 00:16:07.944 }, 00:16:07.944 { 00:16:07.944 "name": "BaseBdev2", 00:16:07.944 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:16:07.944 "is_configured": false, 00:16:07.944 "data_offset": 0, 00:16:07.944 "data_size": 0 00:16:07.944 }, 00:16:07.944 { 00:16:07.944 "name": "BaseBdev3", 00:16:07.944 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:07.944 "is_configured": false, 00:16:07.944 "data_offset": 0, 00:16:07.944 "data_size": 0 00:16:07.944 } 00:16:07.944 ] 00:16:07.944 }' 00:16:07.944 21:38:28 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:07.944 21:38:28 -- common/autotest_common.sh@10 -- # set +x 00:16:08.202 21:38:28 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:16:08.460 [2024-12-06 21:38:28.758126] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:08.460 [2024-12-06 21:38:28.758183] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000006380 name Existed_Raid, state configuring 00:16:08.460 21:38:28 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:16:08.718 [2024-12-06 21:38:29.018268] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:08.718 [2024-12-06 21:38:29.018343] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:08.718 [2024-12-06 21:38:29.018357] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:08.718 [2024-12-06 21:38:29.018383] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:08.718 [2024-12-06 21:38:29.018391] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:08.718 [2024-12-06 21:38:29.018404] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:08.718 21:38:29 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:16:08.976 [2024-12-06 21:38:29.241912] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:08.976 BaseBdev1 00:16:08.976 21:38:29 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:16:08.976 21:38:29 -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:16:08.976 21:38:29 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:16:08.976 21:38:29 -- common/autotest_common.sh@899 -- # local i 00:16:08.976 21:38:29 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:16:08.976 21:38:29 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:16:08.976 21:38:29 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:08.976 21:38:29 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:16:09.273 [ 00:16:09.273 { 00:16:09.273 "name": "BaseBdev1", 00:16:09.273 "aliases": [ 00:16:09.273 "e510c819-7630-4bab-8ebe-5eb1f86905c8" 00:16:09.273 ], 00:16:09.273 "product_name": "Malloc disk", 00:16:09.273 "block_size": 512, 00:16:09.273 "num_blocks": 65536, 00:16:09.273 "uuid": "e510c819-7630-4bab-8ebe-5eb1f86905c8", 00:16:09.273 "assigned_rate_limits": { 00:16:09.273 "rw_ios_per_sec": 0, 00:16:09.273 "rw_mbytes_per_sec": 0, 00:16:09.273 "r_mbytes_per_sec": 0, 00:16:09.273 
"w_mbytes_per_sec": 0 00:16:09.273 }, 00:16:09.273 "claimed": true, 00:16:09.273 "claim_type": "exclusive_write", 00:16:09.273 "zoned": false, 00:16:09.273 "supported_io_types": { 00:16:09.273 "read": true, 00:16:09.273 "write": true, 00:16:09.273 "unmap": true, 00:16:09.273 "write_zeroes": true, 00:16:09.273 "flush": true, 00:16:09.273 "reset": true, 00:16:09.273 "compare": false, 00:16:09.273 "compare_and_write": false, 00:16:09.273 "abort": true, 00:16:09.273 "nvme_admin": false, 00:16:09.273 "nvme_io": false 00:16:09.273 }, 00:16:09.273 "memory_domains": [ 00:16:09.273 { 00:16:09.273 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:09.273 "dma_device_type": 2 00:16:09.273 } 00:16:09.273 ], 00:16:09.273 "driver_specific": {} 00:16:09.273 } 00:16:09.273 ] 00:16:09.273 21:38:29 -- common/autotest_common.sh@905 -- # return 0 00:16:09.273 21:38:29 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:16:09.273 21:38:29 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:09.273 21:38:29 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:16:09.273 21:38:29 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:16:09.273 21:38:29 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:16:09.273 21:38:29 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:16:09.273 21:38:29 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:09.273 21:38:29 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:09.273 21:38:29 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:09.273 21:38:29 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:09.273 21:38:29 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:09.273 21:38:29 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:09.531 21:38:29 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:09.531 "name": "Existed_Raid", 00:16:09.531 "uuid": "c56d445e-e8f4-4e78-b06b-64e8284f137c", 00:16:09.531 "strip_size_kb": 64, 00:16:09.531 "state": "configuring", 00:16:09.531 "raid_level": "concat", 00:16:09.531 "superblock": true, 00:16:09.531 "num_base_bdevs": 3, 00:16:09.531 "num_base_bdevs_discovered": 1, 00:16:09.531 "num_base_bdevs_operational": 3, 00:16:09.531 "base_bdevs_list": [ 00:16:09.531 { 00:16:09.531 "name": "BaseBdev1", 00:16:09.531 "uuid": "e510c819-7630-4bab-8ebe-5eb1f86905c8", 00:16:09.531 "is_configured": true, 00:16:09.531 "data_offset": 2048, 00:16:09.531 "data_size": 63488 00:16:09.531 }, 00:16:09.531 { 00:16:09.531 "name": "BaseBdev2", 00:16:09.531 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:09.531 "is_configured": false, 00:16:09.531 "data_offset": 0, 00:16:09.531 "data_size": 0 00:16:09.531 }, 00:16:09.531 { 00:16:09.531 "name": "BaseBdev3", 00:16:09.531 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:09.531 "is_configured": false, 00:16:09.531 "data_offset": 0, 00:16:09.531 "data_size": 0 00:16:09.531 } 00:16:09.531 ] 00:16:09.531 }' 00:16:09.531 21:38:29 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:09.531 21:38:29 -- common/autotest_common.sh@10 -- # set +x 00:16:10.098 21:38:30 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:16:10.098 [2024-12-06 21:38:30.494303] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:10.098 [2024-12-06 21:38:30.494354] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: 
raid_bdev_cleanup, 0x516000006680 name Existed_Raid, state configuring 00:16:10.098 21:38:30 -- bdev/bdev_raid.sh@244 -- # '[' true = true ']' 00:16:10.098 21:38:30 -- bdev/bdev_raid.sh@246 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:16:10.357 21:38:30 -- bdev/bdev_raid.sh@247 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:16:10.616 BaseBdev1 00:16:10.616 21:38:31 -- bdev/bdev_raid.sh@248 -- # waitforbdev BaseBdev1 00:16:10.616 21:38:31 -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:16:10.616 21:38:31 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:16:10.616 21:38:31 -- common/autotest_common.sh@899 -- # local i 00:16:10.616 21:38:31 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:16:10.616 21:38:31 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:16:10.616 21:38:31 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:10.876 21:38:31 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:16:11.135 [ 00:16:11.136 { 00:16:11.136 "name": "BaseBdev1", 00:16:11.136 "aliases": [ 00:16:11.136 "954999d4-86e0-4c6a-91d9-63415e640a98" 00:16:11.136 ], 00:16:11.136 "product_name": "Malloc disk", 00:16:11.136 "block_size": 512, 00:16:11.136 "num_blocks": 65536, 00:16:11.136 "uuid": "954999d4-86e0-4c6a-91d9-63415e640a98", 00:16:11.136 "assigned_rate_limits": { 00:16:11.136 "rw_ios_per_sec": 0, 00:16:11.136 "rw_mbytes_per_sec": 0, 00:16:11.136 "r_mbytes_per_sec": 0, 00:16:11.136 "w_mbytes_per_sec": 0 00:16:11.136 }, 00:16:11.136 "claimed": false, 00:16:11.136 "zoned": false, 00:16:11.136 "supported_io_types": { 00:16:11.136 "read": true, 00:16:11.136 "write": true, 00:16:11.136 "unmap": true, 00:16:11.136 "write_zeroes": true, 00:16:11.136 "flush": true, 00:16:11.136 "reset": true, 00:16:11.136 "compare": false, 00:16:11.136 "compare_and_write": false, 00:16:11.136 "abort": true, 00:16:11.136 "nvme_admin": false, 00:16:11.136 "nvme_io": false 00:16:11.136 }, 00:16:11.136 "memory_domains": [ 00:16:11.136 { 00:16:11.136 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:11.136 "dma_device_type": 2 00:16:11.136 } 00:16:11.136 ], 00:16:11.136 "driver_specific": {} 00:16:11.136 } 00:16:11.136 ] 00:16:11.136 21:38:31 -- common/autotest_common.sh@905 -- # return 0 00:16:11.136 21:38:31 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:16:11.395 [2024-12-06 21:38:31.694627] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:11.395 [2024-12-06 21:38:31.696624] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:11.395 [2024-12-06 21:38:31.696867] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:11.395 [2024-12-06 21:38:31.696893] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:11.395 [2024-12-06 21:38:31.696911] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:11.395 21:38:31 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:16:11.395 21:38:31 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:16:11.395 
21:38:31 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:16:11.395 21:38:31 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:11.395 21:38:31 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:16:11.395 21:38:31 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:16:11.395 21:38:31 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:16:11.395 21:38:31 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:16:11.395 21:38:31 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:11.395 21:38:31 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:11.395 21:38:31 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:11.395 21:38:31 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:11.395 21:38:31 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:11.395 21:38:31 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:11.654 21:38:31 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:11.654 "name": "Existed_Raid", 00:16:11.654 "uuid": "9665b9a4-9718-403d-95c1-4c0663c9cb7c", 00:16:11.654 "strip_size_kb": 64, 00:16:11.654 "state": "configuring", 00:16:11.654 "raid_level": "concat", 00:16:11.654 "superblock": true, 00:16:11.654 "num_base_bdevs": 3, 00:16:11.654 "num_base_bdevs_discovered": 1, 00:16:11.654 "num_base_bdevs_operational": 3, 00:16:11.654 "base_bdevs_list": [ 00:16:11.654 { 00:16:11.654 "name": "BaseBdev1", 00:16:11.654 "uuid": "954999d4-86e0-4c6a-91d9-63415e640a98", 00:16:11.654 "is_configured": true, 00:16:11.654 "data_offset": 2048, 00:16:11.654 "data_size": 63488 00:16:11.654 }, 00:16:11.654 { 00:16:11.654 "name": "BaseBdev2", 00:16:11.654 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:11.654 "is_configured": false, 00:16:11.654 "data_offset": 0, 00:16:11.654 "data_size": 0 00:16:11.654 }, 00:16:11.654 { 00:16:11.654 "name": "BaseBdev3", 00:16:11.654 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:11.654 "is_configured": false, 00:16:11.654 "data_offset": 0, 00:16:11.654 "data_size": 0 00:16:11.654 } 00:16:11.654 ] 00:16:11.654 }' 00:16:11.654 21:38:31 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:11.654 21:38:31 -- common/autotest_common.sh@10 -- # set +x 00:16:11.914 21:38:32 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:16:12.173 [2024-12-06 21:38:32.462092] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:12.173 BaseBdev2 00:16:12.173 21:38:32 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:16:12.173 21:38:32 -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:16:12.173 21:38:32 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:16:12.173 21:38:32 -- common/autotest_common.sh@899 -- # local i 00:16:12.173 21:38:32 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:16:12.173 21:38:32 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:16:12.173 21:38:32 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:12.432 21:38:32 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:16:12.432 [ 00:16:12.432 { 00:16:12.432 "name": "BaseBdev2", 00:16:12.432 "aliases": [ 00:16:12.432 
"c8da8541-41f0-42d0-861b-6640c3ac18e6" 00:16:12.432 ], 00:16:12.432 "product_name": "Malloc disk", 00:16:12.432 "block_size": 512, 00:16:12.432 "num_blocks": 65536, 00:16:12.432 "uuid": "c8da8541-41f0-42d0-861b-6640c3ac18e6", 00:16:12.432 "assigned_rate_limits": { 00:16:12.432 "rw_ios_per_sec": 0, 00:16:12.432 "rw_mbytes_per_sec": 0, 00:16:12.432 "r_mbytes_per_sec": 0, 00:16:12.432 "w_mbytes_per_sec": 0 00:16:12.432 }, 00:16:12.432 "claimed": true, 00:16:12.432 "claim_type": "exclusive_write", 00:16:12.432 "zoned": false, 00:16:12.432 "supported_io_types": { 00:16:12.432 "read": true, 00:16:12.432 "write": true, 00:16:12.432 "unmap": true, 00:16:12.432 "write_zeroes": true, 00:16:12.432 "flush": true, 00:16:12.432 "reset": true, 00:16:12.432 "compare": false, 00:16:12.432 "compare_and_write": false, 00:16:12.432 "abort": true, 00:16:12.432 "nvme_admin": false, 00:16:12.432 "nvme_io": false 00:16:12.432 }, 00:16:12.432 "memory_domains": [ 00:16:12.432 { 00:16:12.432 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:12.432 "dma_device_type": 2 00:16:12.432 } 00:16:12.432 ], 00:16:12.432 "driver_specific": {} 00:16:12.432 } 00:16:12.432 ] 00:16:12.432 21:38:32 -- common/autotest_common.sh@905 -- # return 0 00:16:12.432 21:38:32 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:16:12.432 21:38:32 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:16:12.432 21:38:32 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:16:12.432 21:38:32 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:12.432 21:38:32 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:16:12.432 21:38:32 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:16:12.432 21:38:32 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:16:12.432 21:38:32 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:16:12.432 21:38:32 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:12.432 21:38:32 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:12.432 21:38:32 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:12.432 21:38:32 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:12.433 21:38:32 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:12.433 21:38:32 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:12.692 21:38:33 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:12.692 "name": "Existed_Raid", 00:16:12.692 "uuid": "9665b9a4-9718-403d-95c1-4c0663c9cb7c", 00:16:12.692 "strip_size_kb": 64, 00:16:12.692 "state": "configuring", 00:16:12.692 "raid_level": "concat", 00:16:12.692 "superblock": true, 00:16:12.692 "num_base_bdevs": 3, 00:16:12.692 "num_base_bdevs_discovered": 2, 00:16:12.692 "num_base_bdevs_operational": 3, 00:16:12.692 "base_bdevs_list": [ 00:16:12.692 { 00:16:12.692 "name": "BaseBdev1", 00:16:12.692 "uuid": "954999d4-86e0-4c6a-91d9-63415e640a98", 00:16:12.692 "is_configured": true, 00:16:12.692 "data_offset": 2048, 00:16:12.692 "data_size": 63488 00:16:12.692 }, 00:16:12.692 { 00:16:12.692 "name": "BaseBdev2", 00:16:12.692 "uuid": "c8da8541-41f0-42d0-861b-6640c3ac18e6", 00:16:12.692 "is_configured": true, 00:16:12.692 "data_offset": 2048, 00:16:12.692 "data_size": 63488 00:16:12.692 }, 00:16:12.692 { 00:16:12.692 "name": "BaseBdev3", 00:16:12.692 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:12.692 "is_configured": false, 00:16:12.692 "data_offset": 0, 00:16:12.692 "data_size": 0 
00:16:12.692 } 00:16:12.692 ] 00:16:12.692 }' 00:16:12.692 21:38:33 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:12.692 21:38:33 -- common/autotest_common.sh@10 -- # set +x 00:16:12.951 21:38:33 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:16:13.211 [2024-12-06 21:38:33.635754] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:13.211 [2024-12-06 21:38:33.636281] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x516000007580 00:16:13.211 [2024-12-06 21:38:33.636312] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:16:13.211 [2024-12-06 21:38:33.636448] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000005790 00:16:13.211 [2024-12-06 21:38:33.636952] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x516000007580 00:16:13.211 [2024-12-06 21:38:33.636984] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x516000007580 00:16:13.211 [2024-12-06 21:38:33.637147] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:13.211 BaseBdev3 00:16:13.211 21:38:33 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3 00:16:13.211 21:38:33 -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:16:13.211 21:38:33 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:16:13.211 21:38:33 -- common/autotest_common.sh@899 -- # local i 00:16:13.211 21:38:33 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:16:13.211 21:38:33 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:16:13.211 21:38:33 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:13.470 21:38:33 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:16:13.730 [ 00:16:13.730 { 00:16:13.730 "name": "BaseBdev3", 00:16:13.730 "aliases": [ 00:16:13.730 "986b214b-be7e-465e-81ad-d4f66c28052a" 00:16:13.730 ], 00:16:13.730 "product_name": "Malloc disk", 00:16:13.730 "block_size": 512, 00:16:13.730 "num_blocks": 65536, 00:16:13.730 "uuid": "986b214b-be7e-465e-81ad-d4f66c28052a", 00:16:13.730 "assigned_rate_limits": { 00:16:13.730 "rw_ios_per_sec": 0, 00:16:13.730 "rw_mbytes_per_sec": 0, 00:16:13.730 "r_mbytes_per_sec": 0, 00:16:13.730 "w_mbytes_per_sec": 0 00:16:13.730 }, 00:16:13.730 "claimed": true, 00:16:13.730 "claim_type": "exclusive_write", 00:16:13.730 "zoned": false, 00:16:13.730 "supported_io_types": { 00:16:13.730 "read": true, 00:16:13.730 "write": true, 00:16:13.730 "unmap": true, 00:16:13.730 "write_zeroes": true, 00:16:13.730 "flush": true, 00:16:13.730 "reset": true, 00:16:13.730 "compare": false, 00:16:13.730 "compare_and_write": false, 00:16:13.730 "abort": true, 00:16:13.730 "nvme_admin": false, 00:16:13.730 "nvme_io": false 00:16:13.730 }, 00:16:13.730 "memory_domains": [ 00:16:13.730 { 00:16:13.730 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:13.730 "dma_device_type": 2 00:16:13.730 } 00:16:13.730 ], 00:16:13.730 "driver_specific": {} 00:16:13.730 } 00:16:13.730 ] 00:16:13.730 21:38:34 -- common/autotest_common.sh@905 -- # return 0 00:16:13.730 21:38:34 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:16:13.730 21:38:34 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:16:13.730 21:38:34 -- 
bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:16:13.730 21:38:34 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:13.730 21:38:34 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:16:13.730 21:38:34 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:16:13.730 21:38:34 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:16:13.730 21:38:34 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:16:13.730 21:38:34 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:13.730 21:38:34 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:13.730 21:38:34 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:13.730 21:38:34 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:13.730 21:38:34 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:13.730 21:38:34 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:13.988 21:38:34 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:13.988 "name": "Existed_Raid", 00:16:13.988 "uuid": "9665b9a4-9718-403d-95c1-4c0663c9cb7c", 00:16:13.988 "strip_size_kb": 64, 00:16:13.988 "state": "online", 00:16:13.988 "raid_level": "concat", 00:16:13.988 "superblock": true, 00:16:13.988 "num_base_bdevs": 3, 00:16:13.988 "num_base_bdevs_discovered": 3, 00:16:13.988 "num_base_bdevs_operational": 3, 00:16:13.988 "base_bdevs_list": [ 00:16:13.988 { 00:16:13.988 "name": "BaseBdev1", 00:16:13.988 "uuid": "954999d4-86e0-4c6a-91d9-63415e640a98", 00:16:13.988 "is_configured": true, 00:16:13.988 "data_offset": 2048, 00:16:13.988 "data_size": 63488 00:16:13.988 }, 00:16:13.988 { 00:16:13.988 "name": "BaseBdev2", 00:16:13.988 "uuid": "c8da8541-41f0-42d0-861b-6640c3ac18e6", 00:16:13.988 "is_configured": true, 00:16:13.988 "data_offset": 2048, 00:16:13.988 "data_size": 63488 00:16:13.988 }, 00:16:13.988 { 00:16:13.988 "name": "BaseBdev3", 00:16:13.988 "uuid": "986b214b-be7e-465e-81ad-d4f66c28052a", 00:16:13.988 "is_configured": true, 00:16:13.988 "data_offset": 2048, 00:16:13.988 "data_size": 63488 00:16:13.988 } 00:16:13.988 ] 00:16:13.988 }' 00:16:13.988 21:38:34 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:13.988 21:38:34 -- common/autotest_common.sh@10 -- # set +x 00:16:14.247 21:38:34 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:16:14.505 [2024-12-06 21:38:34.852253] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:14.505 [2024-12-06 21:38:34.852297] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:14.505 [2024-12-06 21:38:34.852355] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:14.505 21:38:34 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:16:14.505 21:38:34 -- bdev/bdev_raid.sh@264 -- # has_redundancy concat 00:16:14.505 21:38:34 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:16:14.505 21:38:34 -- bdev/bdev_raid.sh@197 -- # return 1 00:16:14.505 21:38:34 -- bdev/bdev_raid.sh@265 -- # expected_state=offline 00:16:14.505 21:38:34 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid offline concat 64 2 00:16:14.505 21:38:34 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:14.505 21:38:34 -- bdev/bdev_raid.sh@118 -- # local expected_state=offline 00:16:14.505 21:38:34 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:16:14.505 21:38:34 
-- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:16:14.505 21:38:34 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:16:14.505 21:38:34 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:14.505 21:38:34 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:14.505 21:38:34 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:14.505 21:38:34 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:14.505 21:38:34 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:14.505 21:38:34 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:14.763 21:38:35 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:14.763 "name": "Existed_Raid", 00:16:14.763 "uuid": "9665b9a4-9718-403d-95c1-4c0663c9cb7c", 00:16:14.763 "strip_size_kb": 64, 00:16:14.763 "state": "offline", 00:16:14.763 "raid_level": "concat", 00:16:14.763 "superblock": true, 00:16:14.763 "num_base_bdevs": 3, 00:16:14.763 "num_base_bdevs_discovered": 2, 00:16:14.763 "num_base_bdevs_operational": 2, 00:16:14.763 "base_bdevs_list": [ 00:16:14.763 { 00:16:14.763 "name": null, 00:16:14.763 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:14.763 "is_configured": false, 00:16:14.763 "data_offset": 2048, 00:16:14.763 "data_size": 63488 00:16:14.763 }, 00:16:14.763 { 00:16:14.763 "name": "BaseBdev2", 00:16:14.763 "uuid": "c8da8541-41f0-42d0-861b-6640c3ac18e6", 00:16:14.763 "is_configured": true, 00:16:14.763 "data_offset": 2048, 00:16:14.763 "data_size": 63488 00:16:14.763 }, 00:16:14.763 { 00:16:14.763 "name": "BaseBdev3", 00:16:14.763 "uuid": "986b214b-be7e-465e-81ad-d4f66c28052a", 00:16:14.763 "is_configured": true, 00:16:14.763 "data_offset": 2048, 00:16:14.763 "data_size": 63488 00:16:14.763 } 00:16:14.763 ] 00:16:14.763 }' 00:16:14.763 21:38:35 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:14.763 21:38:35 -- common/autotest_common.sh@10 -- # set +x 00:16:15.329 21:38:35 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:16:15.329 21:38:35 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:16:15.329 21:38:35 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:15.329 21:38:35 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:16:15.329 21:38:35 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:16:15.329 21:38:35 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:15.329 21:38:35 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:16:15.587 [2024-12-06 21:38:36.026601] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:16:15.844 21:38:36 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:16:15.844 21:38:36 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:16:15.844 21:38:36 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:16:15.844 21:38:36 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:16.102 21:38:36 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:16:16.102 21:38:36 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:16.102 21:38:36 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:16:16.102 [2024-12-06 21:38:36.551334] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 
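The trace above keeps repeating one verification pattern: dump all raid bdevs over JSON-RPC, pick out the bdev under test with jq, and compare its state field against the expected value. A condensed bash sketch of that pattern follows; it is a re-statement of what the trace shows, not the suite's actual verify_raid_bdev_state helper, and the rpc.py and socket paths are simply the ones this run used.

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  sock=/var/tmp/spdk-raid.sock
  check_raid_state() {
      # Fetch every raid bdev, select the one under test, compare its state.
      local name=$1 expected=$2 state
      state=$("$rpc" -s "$sock" bdev_raid_get_bdevs all |
              jq -r ".[] | select(.name == \"$name\") | .state")
      [[ "$state" == "$expected" ]] || { echo "state=$state, want $expected" >&2; return 1; }
  }
  # Concat has no redundancy, so removing a single base bdev drops the array offline:
  check_raid_state Existed_Raid offline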
00:16:16.102 [2024-12-06 21:38:36.551391] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000007580 name Existed_Raid, state offline 00:16:16.359 21:38:36 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:16:16.359 21:38:36 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:16:16.359 21:38:36 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:16.359 21:38:36 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:16:16.359 21:38:36 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:16:16.359 21:38:36 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:16:16.359 21:38:36 -- bdev/bdev_raid.sh@287 -- # killprocess 72529 00:16:16.359 21:38:36 -- common/autotest_common.sh@936 -- # '[' -z 72529 ']' 00:16:16.359 21:38:36 -- common/autotest_common.sh@940 -- # kill -0 72529 00:16:16.359 21:38:36 -- common/autotest_common.sh@941 -- # uname 00:16:16.359 21:38:36 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:16.617 21:38:36 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 72529 00:16:16.617 killing process with pid 72529 00:16:16.617 21:38:36 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:16:16.617 21:38:36 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:16:16.617 21:38:36 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 72529' 00:16:16.617 21:38:36 -- common/autotest_common.sh@955 -- # kill 72529 00:16:16.617 [2024-12-06 21:38:36.880192] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:16.617 21:38:36 -- common/autotest_common.sh@960 -- # wait 72529 00:16:16.617 [2024-12-06 21:38:36.880309] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:17.550 ************************************ 00:16:17.550 END TEST raid_state_function_test_sb 00:16:17.550 ************************************ 00:16:17.550 21:38:37 -- bdev/bdev_raid.sh@289 -- # return 0 00:16:17.550 00:16:17.550 real 0m11.034s 00:16:17.550 user 0m18.394s 00:16:17.550 sys 0m1.630s 00:16:17.550 21:38:37 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:16:17.550 21:38:37 -- common/autotest_common.sh@10 -- # set +x 00:16:17.550 21:38:37 -- bdev/bdev_raid.sh@729 -- # run_test raid_superblock_test raid_superblock_test concat 3 00:16:17.550 21:38:37 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:16:17.550 21:38:37 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:17.550 21:38:37 -- common/autotest_common.sh@10 -- # set +x 00:16:17.550 ************************************ 00:16:17.550 START TEST raid_superblock_test 00:16:17.550 ************************************ 00:16:17.550 21:38:37 -- common/autotest_common.sh@1114 -- # raid_superblock_test concat 3 00:16:17.550 21:38:37 -- bdev/bdev_raid.sh@338 -- # local raid_level=concat 00:16:17.550 21:38:37 -- bdev/bdev_raid.sh@339 -- # local num_base_bdevs=3 00:16:17.550 21:38:37 -- bdev/bdev_raid.sh@340 -- # base_bdevs_malloc=() 00:16:17.550 21:38:37 -- bdev/bdev_raid.sh@340 -- # local base_bdevs_malloc 00:16:17.550 21:38:37 -- bdev/bdev_raid.sh@341 -- # base_bdevs_pt=() 00:16:17.550 21:38:37 -- bdev/bdev_raid.sh@341 -- # local base_bdevs_pt 00:16:17.550 21:38:37 -- bdev/bdev_raid.sh@342 -- # base_bdevs_pt_uuid=() 00:16:17.550 21:38:37 -- bdev/bdev_raid.sh@342 -- # local base_bdevs_pt_uuid 00:16:17.550 21:38:37 -- bdev/bdev_raid.sh@343 -- # local raid_bdev_name=raid_bdev1 00:16:17.550 21:38:37 -- bdev/bdev_raid.sh@344 -- # local strip_size 00:16:17.550 
21:38:37 -- bdev/bdev_raid.sh@345 -- # local strip_size_create_arg 00:16:17.550 21:38:37 -- bdev/bdev_raid.sh@346 -- # local raid_bdev_uuid 00:16:17.550 21:38:37 -- bdev/bdev_raid.sh@347 -- # local raid_bdev 00:16:17.550 21:38:37 -- bdev/bdev_raid.sh@349 -- # '[' concat '!=' raid1 ']' 00:16:17.550 21:38:37 -- bdev/bdev_raid.sh@350 -- # strip_size=64 00:16:17.550 21:38:37 -- bdev/bdev_raid.sh@351 -- # strip_size_create_arg='-z 64' 00:16:17.550 21:38:37 -- bdev/bdev_raid.sh@357 -- # raid_pid=72882 00:16:17.550 21:38:37 -- bdev/bdev_raid.sh@358 -- # waitforlisten 72882 /var/tmp/spdk-raid.sock 00:16:17.550 21:38:37 -- bdev/bdev_raid.sh@356 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:16:17.550 21:38:37 -- common/autotest_common.sh@829 -- # '[' -z 72882 ']' 00:16:17.550 21:38:37 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:16:17.550 21:38:37 -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:17.550 21:38:37 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:16:17.550 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:16:17.550 21:38:37 -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:17.550 21:38:37 -- common/autotest_common.sh@10 -- # set +x 00:16:17.550 [2024-12-06 21:38:38.013567] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:16:17.550 [2024-12-06 21:38:38.013735] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72882 ] 00:16:17.808 [2024-12-06 21:38:38.182515] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:18.066 [2024-12-06 21:38:38.349451] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:18.066 [2024-12-06 21:38:38.503331] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:18.633 21:38:39 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:18.633 21:38:39 -- common/autotest_common.sh@862 -- # return 0 00:16:18.633 21:38:39 -- bdev/bdev_raid.sh@361 -- # (( i = 1 )) 00:16:18.633 21:38:39 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:16:18.633 21:38:39 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc1 00:16:18.633 21:38:39 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt1 00:16:18.633 21:38:39 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:16:18.633 21:38:39 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:16:18.633 21:38:39 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:16:18.633 21:38:39 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:16:18.633 21:38:39 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:16:18.892 malloc1 00:16:18.892 21:38:39 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:16:19.151 [2024-12-06 21:38:39.439482] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:16:19.151 [2024-12-06 21:38:39.439611] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 
00:16:19.151 [2024-12-06 21:38:39.439665] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000006980 00:16:19.151 [2024-12-06 21:38:39.439679] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:19.151 [2024-12-06 21:38:39.442100] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:19.151 [2024-12-06 21:38:39.442154] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:16:19.151 pt1 00:16:19.151 21:38:39 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:16:19.151 21:38:39 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:16:19.151 21:38:39 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc2 00:16:19.151 21:38:39 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt2 00:16:19.151 21:38:39 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:16:19.151 21:38:39 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:16:19.151 21:38:39 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:16:19.151 21:38:39 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:16:19.151 21:38:39 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:16:19.410 malloc2 00:16:19.410 21:38:39 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:19.669 [2024-12-06 21:38:39.926000] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:19.669 [2024-12-06 21:38:39.926090] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:19.669 [2024-12-06 21:38:39.926119] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000007580 00:16:19.669 [2024-12-06 21:38:39.926133] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:19.669 [2024-12-06 21:38:39.928447] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:19.669 [2024-12-06 21:38:39.928517] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:19.669 pt2 00:16:19.669 21:38:39 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:16:19.669 21:38:39 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:16:19.669 21:38:39 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc3 00:16:19.669 21:38:39 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt3 00:16:19.669 21:38:39 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:16:19.669 21:38:39 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:16:19.669 21:38:39 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:16:19.669 21:38:39 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:16:19.669 21:38:39 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:16:19.669 malloc3 00:16:19.928 21:38:40 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:16:19.928 [2024-12-06 21:38:40.412448] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:16:19.928 [2024-12-06 21:38:40.412579] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 
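At this point raid_superblock_test has built two of its three base-bdev stacks: each is a 32 MiB malloc bdev with 512-byte blocks (hence num_blocks 65536) wrapped by a passthru bdev with a fixed UUID, and the same steps repeat for pt3 just below before the raid is created over the passthrus. Reusing the rpc/sock variables from the earlier sketch, the whole setup condenses to roughly:

  for i in 1 2 3; do
      "$rpc" -s "$sock" bdev_malloc_create 32 512 -b "malloc$i"
      "$rpc" -s "$sock" bdev_passthru_create -b "malloc$i" -p "pt$i" \
          -u "00000000-0000-0000-0000-00000000000$i"
  done
  # -z 64: 64 KiB strip size; -s: write a raid superblock onto each base bdev.
  "$rpc" -s "$sock" bdev_raid_create -z 64 -r concat -b 'pt1 pt2 pt3' -n raid_bdev1 -s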
00:16:19.928 [2024-12-06 21:38:40.412615] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000008180 00:16:19.928 [2024-12-06 21:38:40.412629] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:19.928 [2024-12-06 21:38:40.415123] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:19.928 [2024-12-06 21:38:40.415195] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:16:19.928 pt3 00:16:20.186 21:38:40 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:16:20.186 21:38:40 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:16:20.186 21:38:40 -- bdev/bdev_raid.sh@375 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'pt1 pt2 pt3' -n raid_bdev1 -s 00:16:20.186 [2024-12-06 21:38:40.616593] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:16:20.186 [2024-12-06 21:38:40.618474] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:20.186 [2024-12-06 21:38:40.618569] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:16:20.186 [2024-12-06 21:38:40.618798] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x516000008780 00:16:20.186 [2024-12-06 21:38:40.618816] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:16:20.186 [2024-12-06 21:38:40.618941] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d0000056c0 00:16:20.186 [2024-12-06 21:38:40.619314] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x516000008780 00:16:20.186 [2024-12-06 21:38:40.619333] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x516000008780 00:16:20.187 [2024-12-06 21:38:40.619535] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:20.187 21:38:40 -- bdev/bdev_raid.sh@376 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:16:20.187 21:38:40 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:16:20.187 21:38:40 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:16:20.187 21:38:40 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:16:20.187 21:38:40 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:16:20.187 21:38:40 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:16:20.187 21:38:40 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:20.187 21:38:40 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:20.187 21:38:40 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:20.187 21:38:40 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:20.187 21:38:40 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:20.187 21:38:40 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:20.445 21:38:40 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:20.445 "name": "raid_bdev1", 00:16:20.445 "uuid": "21df789e-eb05-4f34-bdda-4384bf197635", 00:16:20.445 "strip_size_kb": 64, 00:16:20.445 "state": "online", 00:16:20.445 "raid_level": "concat", 00:16:20.445 "superblock": true, 00:16:20.445 "num_base_bdevs": 3, 00:16:20.445 "num_base_bdevs_discovered": 3, 00:16:20.445 "num_base_bdevs_operational": 3, 00:16:20.445 "base_bdevs_list": [ 00:16:20.445 { 00:16:20.445 "name": "pt1", 00:16:20.445 "uuid": 
"9c2def3f-d9a1-5b8b-b653-781c66978455", 00:16:20.445 "is_configured": true, 00:16:20.445 "data_offset": 2048, 00:16:20.445 "data_size": 63488 00:16:20.445 }, 00:16:20.445 { 00:16:20.445 "name": "pt2", 00:16:20.445 "uuid": "09fca662-8e72-579c-8d82-d2ab18eae4cf", 00:16:20.445 "is_configured": true, 00:16:20.445 "data_offset": 2048, 00:16:20.445 "data_size": 63488 00:16:20.445 }, 00:16:20.445 { 00:16:20.445 "name": "pt3", 00:16:20.445 "uuid": "83ceedc3-d45a-5c2e-92dd-c32d6f409e6d", 00:16:20.445 "is_configured": true, 00:16:20.445 "data_offset": 2048, 00:16:20.445 "data_size": 63488 00:16:20.445 } 00:16:20.445 ] 00:16:20.445 }' 00:16:20.445 21:38:40 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:20.445 21:38:40 -- common/autotest_common.sh@10 -- # set +x 00:16:20.703 21:38:41 -- bdev/bdev_raid.sh@379 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:16:20.703 21:38:41 -- bdev/bdev_raid.sh@379 -- # jq -r '.[] | .uuid' 00:16:20.964 [2024-12-06 21:38:41.309015] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:20.964 21:38:41 -- bdev/bdev_raid.sh@379 -- # raid_bdev_uuid=21df789e-eb05-4f34-bdda-4384bf197635 00:16:20.964 21:38:41 -- bdev/bdev_raid.sh@380 -- # '[' -z 21df789e-eb05-4f34-bdda-4384bf197635 ']' 00:16:20.964 21:38:41 -- bdev/bdev_raid.sh@385 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:16:21.226 [2024-12-06 21:38:41.516819] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:21.226 [2024-12-06 21:38:41.517026] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:21.226 [2024-12-06 21:38:41.517212] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:21.226 [2024-12-06 21:38:41.517413] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:21.226 [2024-12-06 21:38:41.517600] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000008780 name raid_bdev1, state offline 00:16:21.226 21:38:41 -- bdev/bdev_raid.sh@386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:21.226 21:38:41 -- bdev/bdev_raid.sh@386 -- # jq -r '.[]' 00:16:21.489 21:38:41 -- bdev/bdev_raid.sh@386 -- # raid_bdev= 00:16:21.489 21:38:41 -- bdev/bdev_raid.sh@387 -- # '[' -n '' ']' 00:16:21.489 21:38:41 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:16:21.489 21:38:41 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:16:21.489 21:38:41 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:16:21.489 21:38:41 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:16:21.752 21:38:42 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:16:21.752 21:38:42 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:16:22.078 21:38:42 -- bdev/bdev_raid.sh@395 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:16:22.078 21:38:42 -- bdev/bdev_raid.sh@395 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:16:22.353 21:38:42 -- bdev/bdev_raid.sh@395 -- # '[' false == true ']' 00:16:22.353 21:38:42 -- bdev/bdev_raid.sh@401 -- # NOT 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:16:22.353 21:38:42 -- common/autotest_common.sh@650 -- # local es=0 00:16:22.353 21:38:42 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:16:22.353 21:38:42 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:22.353 21:38:42 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:22.353 21:38:42 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:22.353 21:38:42 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:22.353 21:38:42 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:22.353 21:38:42 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:22.353 21:38:42 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:22.353 21:38:42 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:16:22.353 21:38:42 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:16:22.615 [2024-12-06 21:38:42.877224] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:16:22.615 [2024-12-06 21:38:42.879318] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:16:22.615 [2024-12-06 21:38:42.879370] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:16:22.615 [2024-12-06 21:38:42.879429] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc1 00:16:22.615 [2024-12-06 21:38:42.879534] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc2 00:16:22.615 [2024-12-06 21:38:42.879567] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc3 00:16:22.615 [2024-12-06 21:38:42.879587] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:22.615 [2024-12-06 21:38:42.879616] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000008d80 name raid_bdev1, state configuring 00:16:22.615 request: 00:16:22.615 { 00:16:22.615 "name": "raid_bdev1", 00:16:22.615 "raid_level": "concat", 00:16:22.615 "base_bdevs": [ 00:16:22.615 "malloc1", 00:16:22.615 "malloc2", 00:16:22.615 "malloc3" 00:16:22.615 ], 00:16:22.615 "superblock": false, 00:16:22.615 "strip_size_kb": 64, 00:16:22.615 "method": "bdev_raid_create", 00:16:22.615 "req_id": 1 00:16:22.615 } 00:16:22.615 Got JSON-RPC error response 00:16:22.615 response: 00:16:22.615 { 00:16:22.615 "code": -17, 00:16:22.615 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:16:22.615 } 00:16:22.615 21:38:42 -- common/autotest_common.sh@653 -- # es=1 00:16:22.615 21:38:42 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:16:22.615 21:38:42 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:16:22.615 21:38:42 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:16:22.615 21:38:42 -- bdev/bdev_raid.sh@403 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:22.615 21:38:42 -- bdev/bdev_raid.sh@403 -- # jq -r '.[]' 00:16:22.615 21:38:43 -- bdev/bdev_raid.sh@403 -- # raid_bdev= 00:16:22.615 21:38:43 -- bdev/bdev_raid.sh@404 -- # '[' -n '' ']' 00:16:22.615 21:38:43 -- bdev/bdev_raid.sh@409 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:16:22.874 [2024-12-06 21:38:43.317291] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:16:22.874 [2024-12-06 21:38:43.317599] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:22.874 [2024-12-06 21:38:43.317687] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000009380 00:16:22.874 [2024-12-06 21:38:43.317874] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:22.874 [2024-12-06 21:38:43.320671] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:22.874 [2024-12-06 21:38:43.320855] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:16:22.874 [2024-12-06 21:38:43.321090] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1 00:16:22.874 pt1 00:16:22.874 [2024-12-06 21:38:43.321296] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:16:22.874 21:38:43 -- bdev/bdev_raid.sh@412 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 3 00:16:22.874 21:38:43 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:16:22.874 21:38:43 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:16:22.874 21:38:43 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:16:22.874 21:38:43 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:16:22.874 21:38:43 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:16:22.874 21:38:43 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:22.874 21:38:43 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:22.874 21:38:43 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:22.874 21:38:43 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:22.874 21:38:43 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:22.874 21:38:43 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:23.133 21:38:43 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:23.133 "name": "raid_bdev1", 00:16:23.133 "uuid": "21df789e-eb05-4f34-bdda-4384bf197635", 00:16:23.133 "strip_size_kb": 64, 00:16:23.133 "state": "configuring", 00:16:23.133 "raid_level": "concat", 00:16:23.133 "superblock": true, 00:16:23.133 "num_base_bdevs": 3, 00:16:23.133 "num_base_bdevs_discovered": 1, 00:16:23.133 "num_base_bdevs_operational": 3, 00:16:23.133 "base_bdevs_list": [ 00:16:23.133 { 00:16:23.133 "name": "pt1", 00:16:23.133 "uuid": "9c2def3f-d9a1-5b8b-b653-781c66978455", 00:16:23.133 "is_configured": true, 00:16:23.133 "data_offset": 2048, 00:16:23.133 "data_size": 63488 00:16:23.133 }, 00:16:23.133 { 00:16:23.133 "name": null, 00:16:23.133 "uuid": "09fca662-8e72-579c-8d82-d2ab18eae4cf", 00:16:23.133 "is_configured": false, 00:16:23.133 "data_offset": 2048, 00:16:23.133 "data_size": 63488 00:16:23.133 }, 00:16:23.133 { 00:16:23.133 "name": null, 00:16:23.133 "uuid": "83ceedc3-d45a-5c2e-92dd-c32d6f409e6d", 00:16:23.133 "is_configured": false, 00:16:23.133 
"data_offset": 2048, 00:16:23.133 "data_size": 63488 00:16:23.133 } 00:16:23.133 ] 00:16:23.133 }' 00:16:23.133 21:38:43 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:23.133 21:38:43 -- common/autotest_common.sh@10 -- # set +x 00:16:23.392 21:38:43 -- bdev/bdev_raid.sh@414 -- # '[' 3 -gt 2 ']' 00:16:23.392 21:38:43 -- bdev/bdev_raid.sh@416 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:23.652 [2024-12-06 21:38:44.097712] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:23.652 [2024-12-06 21:38:44.097797] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:23.652 [2024-12-06 21:38:44.097825] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000009c80 00:16:23.652 [2024-12-06 21:38:44.097840] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:23.652 [2024-12-06 21:38:44.098330] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:23.652 [2024-12-06 21:38:44.098356] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:23.652 [2024-12-06 21:38:44.098443] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:16:23.652 [2024-12-06 21:38:44.098473] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:23.652 pt2 00:16:23.652 21:38:44 -- bdev/bdev_raid.sh@417 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:16:23.911 [2024-12-06 21:38:44.285774] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:16:23.911 21:38:44 -- bdev/bdev_raid.sh@418 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 3 00:16:23.911 21:38:44 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:16:23.911 21:38:44 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:16:23.911 21:38:44 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:16:23.911 21:38:44 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:16:23.911 21:38:44 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:16:23.911 21:38:44 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:23.911 21:38:44 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:23.911 21:38:44 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:23.911 21:38:44 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:23.911 21:38:44 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:23.911 21:38:44 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:24.170 21:38:44 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:24.170 "name": "raid_bdev1", 00:16:24.170 "uuid": "21df789e-eb05-4f34-bdda-4384bf197635", 00:16:24.170 "strip_size_kb": 64, 00:16:24.170 "state": "configuring", 00:16:24.170 "raid_level": "concat", 00:16:24.170 "superblock": true, 00:16:24.170 "num_base_bdevs": 3, 00:16:24.170 "num_base_bdevs_discovered": 1, 00:16:24.170 "num_base_bdevs_operational": 3, 00:16:24.170 "base_bdevs_list": [ 00:16:24.170 { 00:16:24.170 "name": "pt1", 00:16:24.170 "uuid": "9c2def3f-d9a1-5b8b-b653-781c66978455", 00:16:24.170 "is_configured": true, 00:16:24.170 "data_offset": 2048, 00:16:24.170 "data_size": 63488 00:16:24.170 }, 00:16:24.170 { 00:16:24.170 "name": null, 00:16:24.170 "uuid": 
"09fca662-8e72-579c-8d82-d2ab18eae4cf", 00:16:24.170 "is_configured": false, 00:16:24.170 "data_offset": 2048, 00:16:24.170 "data_size": 63488 00:16:24.170 }, 00:16:24.170 { 00:16:24.170 "name": null, 00:16:24.170 "uuid": "83ceedc3-d45a-5c2e-92dd-c32d6f409e6d", 00:16:24.170 "is_configured": false, 00:16:24.170 "data_offset": 2048, 00:16:24.170 "data_size": 63488 00:16:24.170 } 00:16:24.170 ] 00:16:24.170 }' 00:16:24.170 21:38:44 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:24.170 21:38:44 -- common/autotest_common.sh@10 -- # set +x 00:16:24.429 21:38:44 -- bdev/bdev_raid.sh@422 -- # (( i = 1 )) 00:16:24.429 21:38:44 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:16:24.429 21:38:44 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:24.689 [2024-12-06 21:38:45.030069] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:24.689 [2024-12-06 21:38:45.030159] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:24.689 [2024-12-06 21:38:45.030189] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000009f80 00:16:24.689 [2024-12-06 21:38:45.030201] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:24.689 [2024-12-06 21:38:45.030703] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:24.689 [2024-12-06 21:38:45.030737] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:24.689 [2024-12-06 21:38:45.030864] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:16:24.689 [2024-12-06 21:38:45.030905] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:24.689 pt2 00:16:24.689 21:38:45 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:16:24.689 21:38:45 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:16:24.689 21:38:45 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:16:24.947 [2024-12-06 21:38:45.230115] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:16:24.947 [2024-12-06 21:38:45.230196] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:24.947 [2024-12-06 21:38:45.230225] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x51600000a280 00:16:24.947 [2024-12-06 21:38:45.230237] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:24.947 [2024-12-06 21:38:45.230770] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:24.947 [2024-12-06 21:38:45.230793] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:16:24.947 [2024-12-06 21:38:45.230922] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3 00:16:24.947 [2024-12-06 21:38:45.230963] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:16:24.947 [2024-12-06 21:38:45.231105] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x516000009980 00:16:24.947 [2024-12-06 21:38:45.231119] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:16:24.947 [2024-12-06 21:38:45.231221] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x50d000005790 00:16:24.947 [2024-12-06 21:38:45.231602] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x516000009980 00:16:24.947 [2024-12-06 21:38:45.231622] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x516000009980 00:16:24.947 [2024-12-06 21:38:45.231790] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:24.947 pt3 00:16:24.947 21:38:45 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:16:24.947 21:38:45 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:16:24.947 21:38:45 -- bdev/bdev_raid.sh@427 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:16:24.947 21:38:45 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:16:24.947 21:38:45 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:16:24.947 21:38:45 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:16:24.947 21:38:45 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:16:24.947 21:38:45 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:16:24.947 21:38:45 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:24.947 21:38:45 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:24.947 21:38:45 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:24.947 21:38:45 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:24.947 21:38:45 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:24.947 21:38:45 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:25.204 21:38:45 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:25.204 "name": "raid_bdev1", 00:16:25.204 "uuid": "21df789e-eb05-4f34-bdda-4384bf197635", 00:16:25.204 "strip_size_kb": 64, 00:16:25.204 "state": "online", 00:16:25.204 "raid_level": "concat", 00:16:25.204 "superblock": true, 00:16:25.204 "num_base_bdevs": 3, 00:16:25.204 "num_base_bdevs_discovered": 3, 00:16:25.204 "num_base_bdevs_operational": 3, 00:16:25.204 "base_bdevs_list": [ 00:16:25.204 { 00:16:25.204 "name": "pt1", 00:16:25.204 "uuid": "9c2def3f-d9a1-5b8b-b653-781c66978455", 00:16:25.204 "is_configured": true, 00:16:25.204 "data_offset": 2048, 00:16:25.204 "data_size": 63488 00:16:25.204 }, 00:16:25.204 { 00:16:25.204 "name": "pt2", 00:16:25.204 "uuid": "09fca662-8e72-579c-8d82-d2ab18eae4cf", 00:16:25.204 "is_configured": true, 00:16:25.204 "data_offset": 2048, 00:16:25.204 "data_size": 63488 00:16:25.204 }, 00:16:25.204 { 00:16:25.204 "name": "pt3", 00:16:25.204 "uuid": "83ceedc3-d45a-5c2e-92dd-c32d6f409e6d", 00:16:25.204 "is_configured": true, 00:16:25.204 "data_offset": 2048, 00:16:25.204 "data_size": 63488 00:16:25.204 } 00:16:25.204 ] 00:16:25.204 }' 00:16:25.204 21:38:45 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:25.204 21:38:45 -- common/autotest_common.sh@10 -- # set +x 00:16:25.461 21:38:45 -- bdev/bdev_raid.sh@430 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:16:25.461 21:38:45 -- bdev/bdev_raid.sh@430 -- # jq -r '.[] | .uuid' 00:16:25.719 [2024-12-06 21:38:45.982235] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:25.719 21:38:46 -- bdev/bdev_raid.sh@430 -- # '[' 21df789e-eb05-4f34-bdda-4384bf197635 '!=' 21df789e-eb05-4f34-bdda-4384bf197635 ']' 00:16:25.719 21:38:46 -- bdev/bdev_raid.sh@434 -- # has_redundancy concat 00:16:25.719 21:38:46 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:16:25.719 
21:38:46 -- bdev/bdev_raid.sh@197 -- # return 1 00:16:25.719 21:38:46 -- bdev/bdev_raid.sh@511 -- # killprocess 72882 00:16:25.719 21:38:46 -- common/autotest_common.sh@936 -- # '[' -z 72882 ']' 00:16:25.720 21:38:46 -- common/autotest_common.sh@940 -- # kill -0 72882 00:16:25.720 21:38:46 -- common/autotest_common.sh@941 -- # uname 00:16:25.720 21:38:46 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:25.720 21:38:46 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 72882 00:16:25.720 21:38:46 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:16:25.720 killing process with pid 72882 00:16:25.720 21:38:46 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:16:25.720 21:38:46 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 72882' 00:16:25.720 21:38:46 -- common/autotest_common.sh@955 -- # kill 72882 00:16:25.720 21:38:46 -- common/autotest_common.sh@960 -- # wait 72882 00:16:25.720 [2024-12-06 21:38:46.029840] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:25.720 [2024-12-06 21:38:46.030011] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:25.720 [2024-12-06 21:38:46.030130] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:25.720 [2024-12-06 21:38:46.030170] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000009980 name raid_bdev1, state offline 00:16:25.977 [2024-12-06 21:38:46.250782] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:26.913 21:38:47 -- bdev/bdev_raid.sh@513 -- # return 0 00:16:26.913 00:16:26.913 real 0m9.318s 00:16:26.913 user 0m15.310s 00:16:26.913 sys 0m1.302s 00:16:26.913 ************************************ 00:16:26.913 END TEST raid_superblock_test 00:16:26.913 ************************************ 00:16:26.913 21:38:47 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:16:26.913 21:38:47 -- common/autotest_common.sh@10 -- # set +x 00:16:26.913 21:38:47 -- bdev/bdev_raid.sh@726 -- # for level in raid0 concat raid1 00:16:26.913 21:38:47 -- bdev/bdev_raid.sh@727 -- # run_test raid_state_function_test raid_state_function_test raid1 3 false 00:16:26.913 21:38:47 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:16:26.913 21:38:47 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:26.913 21:38:47 -- common/autotest_common.sh@10 -- # set +x 00:16:26.913 ************************************ 00:16:26.913 START TEST raid_state_function_test 00:16:26.913 ************************************ 00:16:26.913 21:38:47 -- common/autotest_common.sh@1114 -- # raid_state_function_test raid1 3 false 00:16:26.913 21:38:47 -- bdev/bdev_raid.sh@202 -- # local raid_level=raid1 00:16:26.913 21:38:47 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=3 00:16:26.913 21:38:47 -- bdev/bdev_raid.sh@204 -- # local superblock=false 00:16:26.913 21:38:47 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:16:26.913 21:38:47 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:16:26.913 21:38:47 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:16:26.913 21:38:47 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev1 00:16:26.913 21:38:47 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:16:26.913 21:38:47 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:16:26.913 21:38:47 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev2 00:16:26.913 21:38:47 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:16:26.913 21:38:47 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs 
)) 00:16:26.913 21:38:47 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev3 00:16:26.913 21:38:47 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:16:26.913 21:38:47 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:16:26.913 21:38:47 -- bdev/bdev_raid.sh@206 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:16:26.913 21:38:47 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:16:26.913 21:38:47 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:16:26.913 21:38:47 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:16:26.913 21:38:47 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:16:26.913 21:38:47 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:16:26.913 21:38:47 -- bdev/bdev_raid.sh@212 -- # '[' raid1 '!=' raid1 ']' 00:16:26.913 21:38:47 -- bdev/bdev_raid.sh@216 -- # strip_size=0 00:16:26.913 21:38:47 -- bdev/bdev_raid.sh@219 -- # '[' false = true ']' 00:16:26.913 21:38:47 -- bdev/bdev_raid.sh@222 -- # superblock_create_arg= 00:16:26.913 21:38:47 -- bdev/bdev_raid.sh@226 -- # raid_pid=73159 00:16:26.913 Process raid pid: 73159 00:16:26.913 21:38:47 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 73159' 00:16:26.913 21:38:47 -- bdev/bdev_raid.sh@228 -- # waitforlisten 73159 /var/tmp/spdk-raid.sock 00:16:26.913 21:38:47 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:16:26.913 21:38:47 -- common/autotest_common.sh@829 -- # '[' -z 73159 ']' 00:16:26.913 21:38:47 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:16:26.913 21:38:47 -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:26.913 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:16:26.913 21:38:47 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:16:26.913 21:38:47 -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:26.913 21:38:47 -- common/autotest_common.sh@10 -- # set +x 00:16:26.913 [2024-12-06 21:38:47.392830] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
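Before the next test's output begins in earnest: the raid_superblock_test that just completed above demonstrated what the -s flag buys. After bdev_raid_delete, the raid superblock stays on each base bdev, so building a fresh raid directly over the underlying mallocs is rejected with JSON-RPC error -17 ("File exists"), while simply re-registering the passthru bdevs lets the raid reassemble itself, moving from configuring to online as pt1..pt3 reappear. Condensed from the trace, under the same rpc/sock assumptions as the earlier sketches:

  "$rpc" -s "$sock" bdev_raid_delete raid_bdev1
  # Refused: each malloc still carries the old raid superblock.
  "$rpc" -s "$sock" bdev_raid_create -z 64 -r concat \
      -b 'malloc1 malloc2 malloc3' -n raid_bdev1 || echo "expected failure (-17)"
  # Recreating the passthrus triggers automatic reassembly from the superblocks.
  for i in 1 2 3; do
      "$rpc" -s "$sock" bdev_passthru_create -b "malloc$i" -p "pt$i" \
          -u "00000000-0000-0000-0000-00000000000$i"
  done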
00:16:26.913 [2024-12-06 21:38:47.392989] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:27.173 [2024-12-06 21:38:47.562039] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:27.432 [2024-12-06 21:38:47.745179] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:27.432 [2024-12-06 21:38:47.906905] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:28.001 21:38:48 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:28.001 21:38:48 -- common/autotest_common.sh@862 -- # return 0 00:16:28.001 21:38:48 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:16:28.261 [2024-12-06 21:38:48.620351] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:28.261 [2024-12-06 21:38:48.620427] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:28.261 [2024-12-06 21:38:48.620442] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:28.261 [2024-12-06 21:38:48.620468] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:28.261 [2024-12-06 21:38:48.620479] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:28.261 [2024-12-06 21:38:48.620492] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:28.261 21:38:48 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:16:28.261 21:38:48 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:28.261 21:38:48 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:16:28.261 21:38:48 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:16:28.261 21:38:48 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:16:28.261 21:38:48 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:16:28.261 21:38:48 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:28.261 21:38:48 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:28.261 21:38:48 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:28.261 21:38:48 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:28.261 21:38:48 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:28.261 21:38:48 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:28.520 21:38:48 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:28.520 "name": "Existed_Raid", 00:16:28.520 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:28.520 "strip_size_kb": 0, 00:16:28.520 "state": "configuring", 00:16:28.520 "raid_level": "raid1", 00:16:28.520 "superblock": false, 00:16:28.520 "num_base_bdevs": 3, 00:16:28.520 "num_base_bdevs_discovered": 0, 00:16:28.520 "num_base_bdevs_operational": 3, 00:16:28.520 "base_bdevs_list": [ 00:16:28.520 { 00:16:28.520 "name": "BaseBdev1", 00:16:28.520 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:28.520 "is_configured": false, 00:16:28.520 "data_offset": 0, 00:16:28.520 "data_size": 0 00:16:28.520 }, 00:16:28.520 { 00:16:28.520 "name": "BaseBdev2", 00:16:28.520 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:16:28.520 "is_configured": false, 00:16:28.520 "data_offset": 0, 00:16:28.520 "data_size": 0 00:16:28.520 }, 00:16:28.520 { 00:16:28.520 "name": "BaseBdev3", 00:16:28.520 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:28.520 "is_configured": false, 00:16:28.520 "data_offset": 0, 00:16:28.520 "data_size": 0 00:16:28.520 } 00:16:28.520 ] 00:16:28.520 }' 00:16:28.520 21:38:48 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:28.520 21:38:48 -- common/autotest_common.sh@10 -- # set +x 00:16:28.779 21:38:49 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:16:29.037 [2024-12-06 21:38:49.408444] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:29.037 [2024-12-06 21:38:49.408527] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000006380 name Existed_Raid, state configuring 00:16:29.037 21:38:49 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:16:29.296 [2024-12-06 21:38:49.640582] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:29.296 [2024-12-06 21:38:49.640667] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:29.296 [2024-12-06 21:38:49.640696] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:29.296 [2024-12-06 21:38:49.640712] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:29.296 [2024-12-06 21:38:49.640720] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:29.296 [2024-12-06 21:38:49.640732] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:29.296 21:38:49 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:16:29.556 [2024-12-06 21:38:49.904616] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:29.556 BaseBdev1 00:16:29.556 21:38:49 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:16:29.556 21:38:49 -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:16:29.556 21:38:49 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:16:29.556 21:38:49 -- common/autotest_common.sh@899 -- # local i 00:16:29.556 21:38:49 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:16:29.556 21:38:49 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:16:29.556 21:38:49 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:29.815 21:38:50 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:16:30.075 [ 00:16:30.075 { 00:16:30.075 "name": "BaseBdev1", 00:16:30.075 "aliases": [ 00:16:30.075 "a67abbc4-9b86-4afa-bc6a-5802a9cf9c79" 00:16:30.075 ], 00:16:30.075 "product_name": "Malloc disk", 00:16:30.075 "block_size": 512, 00:16:30.075 "num_blocks": 65536, 00:16:30.075 "uuid": "a67abbc4-9b86-4afa-bc6a-5802a9cf9c79", 00:16:30.075 "assigned_rate_limits": { 00:16:30.075 "rw_ios_per_sec": 0, 00:16:30.075 "rw_mbytes_per_sec": 0, 00:16:30.075 "r_mbytes_per_sec": 0, 00:16:30.075 "w_mbytes_per_sec": 0 
00:16:30.075 }, 00:16:30.075 "claimed": true, 00:16:30.075 "claim_type": "exclusive_write", 00:16:30.075 "zoned": false, 00:16:30.075 "supported_io_types": { 00:16:30.075 "read": true, 00:16:30.075 "write": true, 00:16:30.075 "unmap": true, 00:16:30.075 "write_zeroes": true, 00:16:30.075 "flush": true, 00:16:30.075 "reset": true, 00:16:30.075 "compare": false, 00:16:30.075 "compare_and_write": false, 00:16:30.075 "abort": true, 00:16:30.075 "nvme_admin": false, 00:16:30.075 "nvme_io": false 00:16:30.075 }, 00:16:30.075 "memory_domains": [ 00:16:30.075 { 00:16:30.075 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:30.075 "dma_device_type": 2 00:16:30.075 } 00:16:30.075 ], 00:16:30.076 "driver_specific": {} 00:16:30.076 } 00:16:30.076 ] 00:16:30.076 21:38:50 -- common/autotest_common.sh@905 -- # return 0 00:16:30.076 21:38:50 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:16:30.076 21:38:50 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:30.076 21:38:50 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:16:30.076 21:38:50 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:16:30.076 21:38:50 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:16:30.076 21:38:50 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:16:30.076 21:38:50 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:30.076 21:38:50 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:30.076 21:38:50 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:30.076 21:38:50 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:30.076 21:38:50 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:30.076 21:38:50 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:30.336 21:38:50 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:30.336 "name": "Existed_Raid", 00:16:30.336 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:30.336 "strip_size_kb": 0, 00:16:30.336 "state": "configuring", 00:16:30.336 "raid_level": "raid1", 00:16:30.336 "superblock": false, 00:16:30.336 "num_base_bdevs": 3, 00:16:30.336 "num_base_bdevs_discovered": 1, 00:16:30.336 "num_base_bdevs_operational": 3, 00:16:30.336 "base_bdevs_list": [ 00:16:30.336 { 00:16:30.336 "name": "BaseBdev1", 00:16:30.336 "uuid": "a67abbc4-9b86-4afa-bc6a-5802a9cf9c79", 00:16:30.336 "is_configured": true, 00:16:30.336 "data_offset": 0, 00:16:30.336 "data_size": 65536 00:16:30.336 }, 00:16:30.336 { 00:16:30.336 "name": "BaseBdev2", 00:16:30.336 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:30.336 "is_configured": false, 00:16:30.336 "data_offset": 0, 00:16:30.336 "data_size": 0 00:16:30.336 }, 00:16:30.336 { 00:16:30.336 "name": "BaseBdev3", 00:16:30.336 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:30.336 "is_configured": false, 00:16:30.336 "data_offset": 0, 00:16:30.336 "data_size": 0 00:16:30.336 } 00:16:30.336 ] 00:16:30.336 }' 00:16:30.336 21:38:50 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:30.336 21:38:50 -- common/autotest_common.sh@10 -- # set +x 00:16:30.595 21:38:50 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:16:30.855 [2024-12-06 21:38:51.137124] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:30.855 [2024-12-06 21:38:51.137211] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000006680 
name Existed_Raid, state configuring 00:16:30.855 21:38:51 -- bdev/bdev_raid.sh@244 -- # '[' false = true ']' 00:16:30.855 21:38:51 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:16:31.113 [2024-12-06 21:38:51.397246] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:31.113 [2024-12-06 21:38:51.399295] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:31.113 [2024-12-06 21:38:51.399357] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:31.113 [2024-12-06 21:38:51.399371] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:31.113 [2024-12-06 21:38:51.399384] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:31.113 21:38:51 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:16:31.113 21:38:51 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:16:31.113 21:38:51 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:16:31.113 21:38:51 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:31.113 21:38:51 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:16:31.113 21:38:51 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:16:31.113 21:38:51 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:16:31.113 21:38:51 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:16:31.113 21:38:51 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:31.113 21:38:51 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:31.113 21:38:51 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:31.113 21:38:51 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:31.113 21:38:51 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:31.113 21:38:51 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:31.371 21:38:51 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:31.371 "name": "Existed_Raid", 00:16:31.371 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:31.371 "strip_size_kb": 0, 00:16:31.371 "state": "configuring", 00:16:31.371 "raid_level": "raid1", 00:16:31.371 "superblock": false, 00:16:31.371 "num_base_bdevs": 3, 00:16:31.371 "num_base_bdevs_discovered": 1, 00:16:31.371 "num_base_bdevs_operational": 3, 00:16:31.371 "base_bdevs_list": [ 00:16:31.371 { 00:16:31.371 "name": "BaseBdev1", 00:16:31.371 "uuid": "a67abbc4-9b86-4afa-bc6a-5802a9cf9c79", 00:16:31.371 "is_configured": true, 00:16:31.371 "data_offset": 0, 00:16:31.371 "data_size": 65536 00:16:31.371 }, 00:16:31.371 { 00:16:31.371 "name": "BaseBdev2", 00:16:31.371 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:31.371 "is_configured": false, 00:16:31.371 "data_offset": 0, 00:16:31.371 "data_size": 0 00:16:31.371 }, 00:16:31.371 { 00:16:31.371 "name": "BaseBdev3", 00:16:31.371 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:31.371 "is_configured": false, 00:16:31.371 "data_offset": 0, 00:16:31.372 "data_size": 0 00:16:31.372 } 00:16:31.372 ] 00:16:31.372 }' 00:16:31.372 21:38:51 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:31.372 21:38:51 -- common/autotest_common.sh@10 -- # set +x 00:16:31.630 21:38:51 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:16:31.889 [2024-12-06 21:38:52.224702] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:31.889 BaseBdev2 00:16:31.889 21:38:52 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:16:31.889 21:38:52 -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:16:31.889 21:38:52 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:16:31.889 21:38:52 -- common/autotest_common.sh@899 -- # local i 00:16:31.889 21:38:52 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:16:31.889 21:38:52 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:16:31.889 21:38:52 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:32.148 21:38:52 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:16:32.406 [ 00:16:32.406 { 00:16:32.406 "name": "BaseBdev2", 00:16:32.406 "aliases": [ 00:16:32.406 "74906f33-1be6-472e-91b9-d4da7d2dd316" 00:16:32.406 ], 00:16:32.406 "product_name": "Malloc disk", 00:16:32.406 "block_size": 512, 00:16:32.406 "num_blocks": 65536, 00:16:32.406 "uuid": "74906f33-1be6-472e-91b9-d4da7d2dd316", 00:16:32.406 "assigned_rate_limits": { 00:16:32.406 "rw_ios_per_sec": 0, 00:16:32.406 "rw_mbytes_per_sec": 0, 00:16:32.406 "r_mbytes_per_sec": 0, 00:16:32.406 "w_mbytes_per_sec": 0 00:16:32.406 }, 00:16:32.406 "claimed": true, 00:16:32.406 "claim_type": "exclusive_write", 00:16:32.406 "zoned": false, 00:16:32.406 "supported_io_types": { 00:16:32.406 "read": true, 00:16:32.406 "write": true, 00:16:32.406 "unmap": true, 00:16:32.406 "write_zeroes": true, 00:16:32.406 "flush": true, 00:16:32.406 "reset": true, 00:16:32.406 "compare": false, 00:16:32.406 "compare_and_write": false, 00:16:32.406 "abort": true, 00:16:32.406 "nvme_admin": false, 00:16:32.406 "nvme_io": false 00:16:32.406 }, 00:16:32.406 "memory_domains": [ 00:16:32.406 { 00:16:32.406 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:32.406 "dma_device_type": 2 00:16:32.406 } 00:16:32.406 ], 00:16:32.406 "driver_specific": {} 00:16:32.406 } 00:16:32.406 ] 00:16:32.406 21:38:52 -- common/autotest_common.sh@905 -- # return 0 00:16:32.406 21:38:52 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:16:32.406 21:38:52 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:16:32.406 21:38:52 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:16:32.406 21:38:52 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:32.406 21:38:52 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:16:32.406 21:38:52 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:16:32.406 21:38:52 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:16:32.406 21:38:52 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:16:32.406 21:38:52 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:32.406 21:38:52 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:32.406 21:38:52 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:32.406 21:38:52 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:32.406 21:38:52 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:32.406 21:38:52 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:32.665 21:38:52 -- bdev/bdev_raid.sh@127 -- # 
raid_bdev_info='{ 00:16:32.665 "name": "Existed_Raid", 00:16:32.665 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:32.665 "strip_size_kb": 0, 00:16:32.665 "state": "configuring", 00:16:32.665 "raid_level": "raid1", 00:16:32.665 "superblock": false, 00:16:32.665 "num_base_bdevs": 3, 00:16:32.665 "num_base_bdevs_discovered": 2, 00:16:32.665 "num_base_bdevs_operational": 3, 00:16:32.665 "base_bdevs_list": [ 00:16:32.665 { 00:16:32.665 "name": "BaseBdev1", 00:16:32.665 "uuid": "a67abbc4-9b86-4afa-bc6a-5802a9cf9c79", 00:16:32.665 "is_configured": true, 00:16:32.665 "data_offset": 0, 00:16:32.665 "data_size": 65536 00:16:32.665 }, 00:16:32.665 { 00:16:32.665 "name": "BaseBdev2", 00:16:32.665 "uuid": "74906f33-1be6-472e-91b9-d4da7d2dd316", 00:16:32.665 "is_configured": true, 00:16:32.665 "data_offset": 0, 00:16:32.665 "data_size": 65536 00:16:32.665 }, 00:16:32.665 { 00:16:32.665 "name": "BaseBdev3", 00:16:32.665 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:32.665 "is_configured": false, 00:16:32.665 "data_offset": 0, 00:16:32.665 "data_size": 0 00:16:32.665 } 00:16:32.665 ] 00:16:32.665 }' 00:16:32.665 21:38:52 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:32.665 21:38:52 -- common/autotest_common.sh@10 -- # set +x 00:16:32.924 21:38:53 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:16:33.183 [2024-12-06 21:38:53.570747] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:33.183 [2024-12-06 21:38:53.570801] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x516000006f80 00:16:33.183 [2024-12-06 21:38:53.570816] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:16:33.183 [2024-12-06 21:38:53.570924] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d0000056c0 00:16:33.183 [2024-12-06 21:38:53.571323] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x516000006f80 00:16:33.183 [2024-12-06 21:38:53.571350] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x516000006f80 00:16:33.183 [2024-12-06 21:38:53.571635] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:33.183 BaseBdev3 00:16:33.183 21:38:53 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3 00:16:33.183 21:38:53 -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:16:33.183 21:38:53 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:16:33.183 21:38:53 -- common/autotest_common.sh@899 -- # local i 00:16:33.183 21:38:53 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:16:33.183 21:38:53 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:16:33.183 21:38:53 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:33.443 21:38:53 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:16:33.701 [ 00:16:33.701 { 00:16:33.701 "name": "BaseBdev3", 00:16:33.701 "aliases": [ 00:16:33.701 "57a8e60c-1702-43d5-b2d9-7effdc14e078" 00:16:33.701 ], 00:16:33.701 "product_name": "Malloc disk", 00:16:33.701 "block_size": 512, 00:16:33.701 "num_blocks": 65536, 00:16:33.701 "uuid": "57a8e60c-1702-43d5-b2d9-7effdc14e078", 00:16:33.701 "assigned_rate_limits": { 00:16:33.701 "rw_ios_per_sec": 0, 00:16:33.701 "rw_mbytes_per_sec": 0, 
00:16:33.701 "r_mbytes_per_sec": 0, 00:16:33.701 "w_mbytes_per_sec": 0 00:16:33.701 }, 00:16:33.701 "claimed": true, 00:16:33.701 "claim_type": "exclusive_write", 00:16:33.701 "zoned": false, 00:16:33.701 "supported_io_types": { 00:16:33.701 "read": true, 00:16:33.701 "write": true, 00:16:33.701 "unmap": true, 00:16:33.701 "write_zeroes": true, 00:16:33.701 "flush": true, 00:16:33.701 "reset": true, 00:16:33.701 "compare": false, 00:16:33.701 "compare_and_write": false, 00:16:33.701 "abort": true, 00:16:33.701 "nvme_admin": false, 00:16:33.701 "nvme_io": false 00:16:33.701 }, 00:16:33.701 "memory_domains": [ 00:16:33.701 { 00:16:33.701 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:33.701 "dma_device_type": 2 00:16:33.701 } 00:16:33.701 ], 00:16:33.701 "driver_specific": {} 00:16:33.701 } 00:16:33.701 ] 00:16:33.701 21:38:53 -- common/autotest_common.sh@905 -- # return 0 00:16:33.701 21:38:53 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:16:33.702 21:38:53 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:16:33.702 21:38:53 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:16:33.702 21:38:53 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:33.702 21:38:53 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:16:33.702 21:38:53 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:16:33.702 21:38:53 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:16:33.702 21:38:53 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:16:33.702 21:38:53 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:33.702 21:38:53 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:33.702 21:38:53 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:33.702 21:38:53 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:33.702 21:38:53 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:33.702 21:38:53 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:33.960 21:38:54 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:33.960 "name": "Existed_Raid", 00:16:33.960 "uuid": "1f56a49e-e8ba-4514-9b43-306a18e7a899", 00:16:33.960 "strip_size_kb": 0, 00:16:33.960 "state": "online", 00:16:33.960 "raid_level": "raid1", 00:16:33.960 "superblock": false, 00:16:33.960 "num_base_bdevs": 3, 00:16:33.960 "num_base_bdevs_discovered": 3, 00:16:33.960 "num_base_bdevs_operational": 3, 00:16:33.960 "base_bdevs_list": [ 00:16:33.960 { 00:16:33.960 "name": "BaseBdev1", 00:16:33.960 "uuid": "a67abbc4-9b86-4afa-bc6a-5802a9cf9c79", 00:16:33.960 "is_configured": true, 00:16:33.960 "data_offset": 0, 00:16:33.960 "data_size": 65536 00:16:33.960 }, 00:16:33.960 { 00:16:33.960 "name": "BaseBdev2", 00:16:33.960 "uuid": "74906f33-1be6-472e-91b9-d4da7d2dd316", 00:16:33.960 "is_configured": true, 00:16:33.960 "data_offset": 0, 00:16:33.960 "data_size": 65536 00:16:33.960 }, 00:16:33.960 { 00:16:33.960 "name": "BaseBdev3", 00:16:33.960 "uuid": "57a8e60c-1702-43d5-b2d9-7effdc14e078", 00:16:33.960 "is_configured": true, 00:16:33.960 "data_offset": 0, 00:16:33.960 "data_size": 65536 00:16:33.960 } 00:16:33.960 ] 00:16:33.960 }' 00:16:33.960 21:38:54 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:33.960 21:38:54 -- common/autotest_common.sh@10 -- # set +x 00:16:34.219 21:38:54 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:16:34.492 [2024-12-06 
21:38:54.747202] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:34.492 21:38:54 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:16:34.492 21:38:54 -- bdev/bdev_raid.sh@264 -- # has_redundancy raid1 00:16:34.492 21:38:54 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:16:34.492 21:38:54 -- bdev/bdev_raid.sh@196 -- # return 0 00:16:34.492 21:38:54 -- bdev/bdev_raid.sh@267 -- # expected_state=online 00:16:34.492 21:38:54 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:16:34.492 21:38:54 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:34.492 21:38:54 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:16:34.492 21:38:54 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:16:34.492 21:38:54 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:16:34.492 21:38:54 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:16:34.492 21:38:54 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:34.492 21:38:54 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:34.492 21:38:54 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:34.492 21:38:54 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:34.492 21:38:54 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:34.492 21:38:54 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:34.762 21:38:55 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:34.762 "name": "Existed_Raid", 00:16:34.762 "uuid": "1f56a49e-e8ba-4514-9b43-306a18e7a899", 00:16:34.762 "strip_size_kb": 0, 00:16:34.762 "state": "online", 00:16:34.762 "raid_level": "raid1", 00:16:34.762 "superblock": false, 00:16:34.762 "num_base_bdevs": 3, 00:16:34.762 "num_base_bdevs_discovered": 2, 00:16:34.762 "num_base_bdevs_operational": 2, 00:16:34.762 "base_bdevs_list": [ 00:16:34.762 { 00:16:34.762 "name": null, 00:16:34.762 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:34.762 "is_configured": false, 00:16:34.762 "data_offset": 0, 00:16:34.762 "data_size": 65536 00:16:34.762 }, 00:16:34.762 { 00:16:34.762 "name": "BaseBdev2", 00:16:34.762 "uuid": "74906f33-1be6-472e-91b9-d4da7d2dd316", 00:16:34.762 "is_configured": true, 00:16:34.762 "data_offset": 0, 00:16:34.762 "data_size": 65536 00:16:34.762 }, 00:16:34.762 { 00:16:34.762 "name": "BaseBdev3", 00:16:34.762 "uuid": "57a8e60c-1702-43d5-b2d9-7effdc14e078", 00:16:34.762 "is_configured": true, 00:16:34.762 "data_offset": 0, 00:16:34.762 "data_size": 65536 00:16:34.762 } 00:16:34.762 ] 00:16:34.762 }' 00:16:34.762 21:38:55 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:34.762 21:38:55 -- common/autotest_common.sh@10 -- # set +x 00:16:35.020 21:38:55 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:16:35.020 21:38:55 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:16:35.020 21:38:55 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:35.020 21:38:55 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:16:35.278 21:38:55 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:16:35.278 21:38:55 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:35.278 21:38:55 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:16:35.537 [2024-12-06 21:38:55.831295] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 
00:16:35.537 21:38:55 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:16:35.537 21:38:55 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:16:35.537 21:38:55 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:35.537 21:38:55 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:16:35.795 21:38:56 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:16:35.795 21:38:56 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:35.795 21:38:56 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:16:36.053 [2024-12-06 21:38:56.310118] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:16:36.053 [2024-12-06 21:38:56.310152] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:36.053 [2024-12-06 21:38:56.310204] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:36.054 [2024-12-06 21:38:56.383140] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:36.054 [2024-12-06 21:38:56.383174] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000006f80 name Existed_Raid, state offline 00:16:36.054 21:38:56 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:16:36.054 21:38:56 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:16:36.054 21:38:56 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:16:36.054 21:38:56 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:36.311 21:38:56 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:16:36.311 21:38:56 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:16:36.311 21:38:56 -- bdev/bdev_raid.sh@287 -- # killprocess 73159 00:16:36.311 21:38:56 -- common/autotest_common.sh@936 -- # '[' -z 73159 ']' 00:16:36.311 21:38:56 -- common/autotest_common.sh@940 -- # kill -0 73159 00:16:36.311 21:38:56 -- common/autotest_common.sh@941 -- # uname 00:16:36.311 21:38:56 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:36.311 21:38:56 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 73159 00:16:36.311 killing process with pid 73159 00:16:36.311 21:38:56 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:16:36.311 21:38:56 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:16:36.312 21:38:56 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 73159' 00:16:36.312 21:38:56 -- common/autotest_common.sh@955 -- # kill 73159 00:16:36.312 [2024-12-06 21:38:56.655994] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:36.312 21:38:56 -- common/autotest_common.sh@960 -- # wait 73159 00:16:36.312 [2024-12-06 21:38:56.656094] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:37.247 21:38:57 -- bdev/bdev_raid.sh@289 -- # return 0 00:16:37.247 00:16:37.247 real 0m10.391s 00:16:37.247 user 0m17.320s 00:16:37.247 sys 0m1.461s 00:16:37.247 21:38:57 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:16:37.247 21:38:57 -- common/autotest_common.sh@10 -- # set +x 00:16:37.247 ************************************ 00:16:37.247 END TEST raid_state_function_test 00:16:37.247 ************************************ 00:16:37.506 21:38:57 -- bdev/bdev_raid.sh@728 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 3 true 00:16:37.506 
21:38:57 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:16:37.506 21:38:57 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:37.506 21:38:57 -- common/autotest_common.sh@10 -- # set +x 00:16:37.506 ************************************ 00:16:37.506 START TEST raid_state_function_test_sb 00:16:37.506 ************************************ 00:16:37.506 21:38:57 -- common/autotest_common.sh@1114 -- # raid_state_function_test raid1 3 true 00:16:37.506 21:38:57 -- bdev/bdev_raid.sh@202 -- # local raid_level=raid1 00:16:37.506 21:38:57 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=3 00:16:37.506 21:38:57 -- bdev/bdev_raid.sh@204 -- # local superblock=true 00:16:37.506 21:38:57 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:16:37.506 21:38:57 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:16:37.506 21:38:57 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:16:37.506 21:38:57 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev1 00:16:37.506 21:38:57 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:16:37.506 21:38:57 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:16:37.506 21:38:57 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev2 00:16:37.506 21:38:57 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:16:37.506 21:38:57 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:16:37.506 21:38:57 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev3 00:16:37.506 21:38:57 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:16:37.506 21:38:57 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:16:37.506 Process raid pid: 73499 00:16:37.506 21:38:57 -- bdev/bdev_raid.sh@206 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:16:37.506 21:38:57 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:16:37.506 21:38:57 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:16:37.506 21:38:57 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:16:37.506 21:38:57 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:16:37.507 21:38:57 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:16:37.507 21:38:57 -- bdev/bdev_raid.sh@212 -- # '[' raid1 '!=' raid1 ']' 00:16:37.507 21:38:57 -- bdev/bdev_raid.sh@216 -- # strip_size=0 00:16:37.507 21:38:57 -- bdev/bdev_raid.sh@219 -- # '[' true = true ']' 00:16:37.507 21:38:57 -- bdev/bdev_raid.sh@220 -- # superblock_create_arg=-s 00:16:37.507 21:38:57 -- bdev/bdev_raid.sh@226 -- # raid_pid=73499 00:16:37.507 21:38:57 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 73499' 00:16:37.507 21:38:57 -- bdev/bdev_raid.sh@228 -- # waitforlisten 73499 /var/tmp/spdk-raid.sock 00:16:37.507 21:38:57 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:16:37.507 21:38:57 -- common/autotest_common.sh@829 -- # '[' -z 73499 ']' 00:16:37.507 21:38:57 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:16:37.507 21:38:57 -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:37.507 21:38:57 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:16:37.507 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:16:37.507 21:38:57 -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:37.507 21:38:57 -- common/autotest_common.sh@10 -- # set +x 00:16:37.507 [2024-12-06 21:38:57.837526] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:16:37.507 [2024-12-06 21:38:57.837910] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:37.787 [2024-12-06 21:38:58.006229] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:37.787 [2024-12-06 21:38:58.174622] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:38.045 [2024-12-06 21:38:58.334846] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:38.303 21:38:58 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:38.303 21:38:58 -- common/autotest_common.sh@862 -- # return 0 00:16:38.303 21:38:58 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:16:38.562 [2024-12-06 21:38:59.013191] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:38.562 [2024-12-06 21:38:59.013464] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:38.562 [2024-12-06 21:38:59.013503] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:38.562 [2024-12-06 21:38:59.013522] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:38.562 [2024-12-06 21:38:59.013532] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:38.562 [2024-12-06 21:38:59.013547] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:38.562 21:38:59 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:16:38.562 21:38:59 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:38.562 21:38:59 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:16:38.562 21:38:59 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:16:38.562 21:38:59 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:16:38.562 21:38:59 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:16:38.562 21:38:59 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:38.562 21:38:59 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:38.562 21:38:59 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:38.562 21:38:59 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:38.562 21:38:59 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:38.562 21:38:59 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:38.820 21:38:59 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:38.820 "name": "Existed_Raid", 00:16:38.820 "uuid": "919beadd-a75c-468a-91a8-ad47379900e8", 00:16:38.820 "strip_size_kb": 0, 00:16:38.820 "state": "configuring", 00:16:38.820 "raid_level": "raid1", 00:16:38.820 "superblock": true, 00:16:38.820 "num_base_bdevs": 3, 00:16:38.820 "num_base_bdevs_discovered": 0, 00:16:38.820 "num_base_bdevs_operational": 3, 00:16:38.820 "base_bdevs_list": [ 00:16:38.820 { 00:16:38.820 "name": "BaseBdev1", 00:16:38.820 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:38.820 "is_configured": false, 00:16:38.820 "data_offset": 0, 00:16:38.820 "data_size": 0 00:16:38.820 }, 00:16:38.820 { 00:16:38.820 "name": "BaseBdev2", 00:16:38.820 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:16:38.820 "is_configured": false, 00:16:38.820 "data_offset": 0, 00:16:38.820 "data_size": 0 00:16:38.820 }, 00:16:38.820 { 00:16:38.820 "name": "BaseBdev3", 00:16:38.820 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:38.820 "is_configured": false, 00:16:38.820 "data_offset": 0, 00:16:38.820 "data_size": 0 00:16:38.820 } 00:16:38.820 ] 00:16:38.820 }' 00:16:38.820 21:38:59 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:38.820 21:38:59 -- common/autotest_common.sh@10 -- # set +x 00:16:39.387 21:38:59 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:16:39.387 [2024-12-06 21:38:59.821261] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:39.387 [2024-12-06 21:38:59.821326] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000006380 name Existed_Raid, state configuring 00:16:39.387 21:38:59 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:16:39.645 [2024-12-06 21:39:00.073370] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:39.645 [2024-12-06 21:39:00.073632] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:39.645 [2024-12-06 21:39:00.073657] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:39.645 [2024-12-06 21:39:00.073677] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:39.645 [2024-12-06 21:39:00.073686] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:39.645 [2024-12-06 21:39:00.073700] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:39.645 21:39:00 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:16:39.904 [2024-12-06 21:39:00.305103] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:39.904 BaseBdev1 00:16:39.904 21:39:00 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:16:39.904 21:39:00 -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:16:39.904 21:39:00 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:16:39.904 21:39:00 -- common/autotest_common.sh@899 -- # local i 00:16:39.904 21:39:00 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:16:39.904 21:39:00 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:16:39.904 21:39:00 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:40.163 21:39:00 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:16:40.422 [ 00:16:40.422 { 00:16:40.422 "name": "BaseBdev1", 00:16:40.422 "aliases": [ 00:16:40.422 "52d991d2-beb7-4acd-9b1a-337b524f1b2c" 00:16:40.422 ], 00:16:40.422 "product_name": "Malloc disk", 00:16:40.422 "block_size": 512, 00:16:40.422 "num_blocks": 65536, 00:16:40.422 "uuid": "52d991d2-beb7-4acd-9b1a-337b524f1b2c", 00:16:40.422 "assigned_rate_limits": { 00:16:40.422 "rw_ios_per_sec": 0, 00:16:40.422 "rw_mbytes_per_sec": 0, 00:16:40.422 "r_mbytes_per_sec": 0, 00:16:40.422 "w_mbytes_per_sec": 0 
00:16:40.422 }, 00:16:40.422 "claimed": true, 00:16:40.422 "claim_type": "exclusive_write", 00:16:40.422 "zoned": false, 00:16:40.422 "supported_io_types": { 00:16:40.422 "read": true, 00:16:40.422 "write": true, 00:16:40.422 "unmap": true, 00:16:40.422 "write_zeroes": true, 00:16:40.422 "flush": true, 00:16:40.422 "reset": true, 00:16:40.422 "compare": false, 00:16:40.422 "compare_and_write": false, 00:16:40.422 "abort": true, 00:16:40.422 "nvme_admin": false, 00:16:40.422 "nvme_io": false 00:16:40.422 }, 00:16:40.422 "memory_domains": [ 00:16:40.422 { 00:16:40.422 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:40.422 "dma_device_type": 2 00:16:40.422 } 00:16:40.422 ], 00:16:40.422 "driver_specific": {} 00:16:40.422 } 00:16:40.422 ] 00:16:40.422 21:39:00 -- common/autotest_common.sh@905 -- # return 0 00:16:40.422 21:39:00 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:16:40.422 21:39:00 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:40.422 21:39:00 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:16:40.422 21:39:00 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:16:40.422 21:39:00 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:16:40.422 21:39:00 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:16:40.422 21:39:00 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:40.422 21:39:00 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:40.422 21:39:00 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:40.422 21:39:00 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:40.422 21:39:00 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:40.422 21:39:00 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:40.682 21:39:00 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:40.682 "name": "Existed_Raid", 00:16:40.682 "uuid": "a6ae355d-d7fa-4ff2-8dfb-8486a6fa3b6e", 00:16:40.682 "strip_size_kb": 0, 00:16:40.682 "state": "configuring", 00:16:40.682 "raid_level": "raid1", 00:16:40.682 "superblock": true, 00:16:40.682 "num_base_bdevs": 3, 00:16:40.682 "num_base_bdevs_discovered": 1, 00:16:40.682 "num_base_bdevs_operational": 3, 00:16:40.682 "base_bdevs_list": [ 00:16:40.682 { 00:16:40.682 "name": "BaseBdev1", 00:16:40.682 "uuid": "52d991d2-beb7-4acd-9b1a-337b524f1b2c", 00:16:40.682 "is_configured": true, 00:16:40.682 "data_offset": 2048, 00:16:40.682 "data_size": 63488 00:16:40.682 }, 00:16:40.682 { 00:16:40.682 "name": "BaseBdev2", 00:16:40.682 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:40.682 "is_configured": false, 00:16:40.682 "data_offset": 0, 00:16:40.682 "data_size": 0 00:16:40.682 }, 00:16:40.682 { 00:16:40.682 "name": "BaseBdev3", 00:16:40.682 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:40.682 "is_configured": false, 00:16:40.682 "data_offset": 0, 00:16:40.682 "data_size": 0 00:16:40.682 } 00:16:40.682 ] 00:16:40.682 }' 00:16:40.682 21:39:00 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:40.682 21:39:00 -- common/autotest_common.sh@10 -- # set +x 00:16:40.941 21:39:01 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:16:41.199 [2024-12-06 21:39:01.473419] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:41.199 [2024-12-06 21:39:01.473504] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x516000006680 name Existed_Raid, state configuring 00:16:41.199 21:39:01 -- bdev/bdev_raid.sh@244 -- # '[' true = true ']' 00:16:41.199 21:39:01 -- bdev/bdev_raid.sh@246 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:16:41.457 21:39:01 -- bdev/bdev_raid.sh@247 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:16:41.715 BaseBdev1 00:16:41.715 21:39:02 -- bdev/bdev_raid.sh@248 -- # waitforbdev BaseBdev1 00:16:41.715 21:39:02 -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:16:41.715 21:39:02 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:16:41.715 21:39:02 -- common/autotest_common.sh@899 -- # local i 00:16:41.715 21:39:02 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:16:41.715 21:39:02 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:16:41.715 21:39:02 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:41.974 21:39:02 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:16:42.233 [ 00:16:42.233 { 00:16:42.233 "name": "BaseBdev1", 00:16:42.233 "aliases": [ 00:16:42.233 "9b5cf949-d4b4-4ba4-90fb-9b4a9b109b61" 00:16:42.233 ], 00:16:42.233 "product_name": "Malloc disk", 00:16:42.233 "block_size": 512, 00:16:42.233 "num_blocks": 65536, 00:16:42.233 "uuid": "9b5cf949-d4b4-4ba4-90fb-9b4a9b109b61", 00:16:42.233 "assigned_rate_limits": { 00:16:42.233 "rw_ios_per_sec": 0, 00:16:42.233 "rw_mbytes_per_sec": 0, 00:16:42.233 "r_mbytes_per_sec": 0, 00:16:42.233 "w_mbytes_per_sec": 0 00:16:42.233 }, 00:16:42.233 "claimed": false, 00:16:42.233 "zoned": false, 00:16:42.233 "supported_io_types": { 00:16:42.233 "read": true, 00:16:42.233 "write": true, 00:16:42.233 "unmap": true, 00:16:42.233 "write_zeroes": true, 00:16:42.233 "flush": true, 00:16:42.233 "reset": true, 00:16:42.233 "compare": false, 00:16:42.233 "compare_and_write": false, 00:16:42.233 "abort": true, 00:16:42.233 "nvme_admin": false, 00:16:42.233 "nvme_io": false 00:16:42.233 }, 00:16:42.233 "memory_domains": [ 00:16:42.233 { 00:16:42.233 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:42.233 "dma_device_type": 2 00:16:42.233 } 00:16:42.233 ], 00:16:42.233 "driver_specific": {} 00:16:42.233 } 00:16:42.233 ] 00:16:42.233 21:39:02 -- common/autotest_common.sh@905 -- # return 0 00:16:42.233 21:39:02 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:16:42.233 [2024-12-06 21:39:02.698872] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:42.233 [2024-12-06 21:39:02.701074] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:42.233 [2024-12-06 21:39:02.701303] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:42.233 [2024-12-06 21:39:02.701346] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:42.234 [2024-12-06 21:39:02.701366] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:42.234 21:39:02 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:16:42.234 21:39:02 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:16:42.234 21:39:02 -- 
bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:16:42.234 21:39:02 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:42.234 21:39:02 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:16:42.234 21:39:02 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:16:42.234 21:39:02 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:16:42.234 21:39:02 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:16:42.234 21:39:02 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:42.234 21:39:02 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:42.234 21:39:02 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:42.234 21:39:02 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:42.234 21:39:02 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:42.234 21:39:02 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:42.493 21:39:02 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:42.493 "name": "Existed_Raid", 00:16:42.493 "uuid": "6815b749-04d9-443a-aacf-a67d4aa37a25", 00:16:42.493 "strip_size_kb": 0, 00:16:42.493 "state": "configuring", 00:16:42.493 "raid_level": "raid1", 00:16:42.493 "superblock": true, 00:16:42.493 "num_base_bdevs": 3, 00:16:42.493 "num_base_bdevs_discovered": 1, 00:16:42.493 "num_base_bdevs_operational": 3, 00:16:42.493 "base_bdevs_list": [ 00:16:42.493 { 00:16:42.493 "name": "BaseBdev1", 00:16:42.493 "uuid": "9b5cf949-d4b4-4ba4-90fb-9b4a9b109b61", 00:16:42.493 "is_configured": true, 00:16:42.493 "data_offset": 2048, 00:16:42.493 "data_size": 63488 00:16:42.493 }, 00:16:42.493 { 00:16:42.493 "name": "BaseBdev2", 00:16:42.493 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:42.493 "is_configured": false, 00:16:42.493 "data_offset": 0, 00:16:42.493 "data_size": 0 00:16:42.493 }, 00:16:42.493 { 00:16:42.493 "name": "BaseBdev3", 00:16:42.493 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:42.493 "is_configured": false, 00:16:42.493 "data_offset": 0, 00:16:42.493 "data_size": 0 00:16:42.493 } 00:16:42.493 ] 00:16:42.493 }' 00:16:42.493 21:39:02 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:42.493 21:39:02 -- common/autotest_common.sh@10 -- # set +x 00:16:43.060 21:39:03 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:16:43.061 [2024-12-06 21:39:03.521070] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:43.061 BaseBdev2 00:16:43.061 21:39:03 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:16:43.061 21:39:03 -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:16:43.061 21:39:03 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:16:43.061 21:39:03 -- common/autotest_common.sh@899 -- # local i 00:16:43.061 21:39:03 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:16:43.061 21:39:03 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:16:43.061 21:39:03 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:43.321 21:39:03 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:16:43.580 [ 00:16:43.580 { 00:16:43.580 "name": "BaseBdev2", 00:16:43.580 "aliases": [ 00:16:43.580 
"e13ab8e0-2786-4d9b-9dd1-e733ec4d2292" 00:16:43.580 ], 00:16:43.580 "product_name": "Malloc disk", 00:16:43.580 "block_size": 512, 00:16:43.580 "num_blocks": 65536, 00:16:43.580 "uuid": "e13ab8e0-2786-4d9b-9dd1-e733ec4d2292", 00:16:43.580 "assigned_rate_limits": { 00:16:43.580 "rw_ios_per_sec": 0, 00:16:43.580 "rw_mbytes_per_sec": 0, 00:16:43.580 "r_mbytes_per_sec": 0, 00:16:43.580 "w_mbytes_per_sec": 0 00:16:43.580 }, 00:16:43.580 "claimed": true, 00:16:43.580 "claim_type": "exclusive_write", 00:16:43.581 "zoned": false, 00:16:43.581 "supported_io_types": { 00:16:43.581 "read": true, 00:16:43.581 "write": true, 00:16:43.581 "unmap": true, 00:16:43.581 "write_zeroes": true, 00:16:43.581 "flush": true, 00:16:43.581 "reset": true, 00:16:43.581 "compare": false, 00:16:43.581 "compare_and_write": false, 00:16:43.581 "abort": true, 00:16:43.581 "nvme_admin": false, 00:16:43.581 "nvme_io": false 00:16:43.581 }, 00:16:43.581 "memory_domains": [ 00:16:43.581 { 00:16:43.581 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:43.581 "dma_device_type": 2 00:16:43.581 } 00:16:43.581 ], 00:16:43.581 "driver_specific": {} 00:16:43.581 } 00:16:43.581 ] 00:16:43.581 21:39:03 -- common/autotest_common.sh@905 -- # return 0 00:16:43.581 21:39:03 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:16:43.581 21:39:03 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:16:43.581 21:39:03 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:16:43.581 21:39:03 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:43.581 21:39:03 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:16:43.581 21:39:03 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:16:43.581 21:39:03 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:16:43.581 21:39:03 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:16:43.581 21:39:03 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:43.581 21:39:03 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:43.581 21:39:03 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:43.581 21:39:03 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:43.581 21:39:03 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:43.581 21:39:03 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:43.840 21:39:04 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:43.840 "name": "Existed_Raid", 00:16:43.840 "uuid": "6815b749-04d9-443a-aacf-a67d4aa37a25", 00:16:43.840 "strip_size_kb": 0, 00:16:43.840 "state": "configuring", 00:16:43.840 "raid_level": "raid1", 00:16:43.840 "superblock": true, 00:16:43.840 "num_base_bdevs": 3, 00:16:43.840 "num_base_bdevs_discovered": 2, 00:16:43.840 "num_base_bdevs_operational": 3, 00:16:43.840 "base_bdevs_list": [ 00:16:43.840 { 00:16:43.840 "name": "BaseBdev1", 00:16:43.840 "uuid": "9b5cf949-d4b4-4ba4-90fb-9b4a9b109b61", 00:16:43.840 "is_configured": true, 00:16:43.840 "data_offset": 2048, 00:16:43.840 "data_size": 63488 00:16:43.840 }, 00:16:43.840 { 00:16:43.840 "name": "BaseBdev2", 00:16:43.840 "uuid": "e13ab8e0-2786-4d9b-9dd1-e733ec4d2292", 00:16:43.840 "is_configured": true, 00:16:43.840 "data_offset": 2048, 00:16:43.840 "data_size": 63488 00:16:43.840 }, 00:16:43.840 { 00:16:43.840 "name": "BaseBdev3", 00:16:43.840 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:43.840 "is_configured": false, 00:16:43.840 "data_offset": 0, 00:16:43.840 "data_size": 0 00:16:43.840 } 
00:16:43.840 ] 00:16:43.840 }' 00:16:43.840 21:39:04 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:43.840 21:39:04 -- common/autotest_common.sh@10 -- # set +x 00:16:44.098 21:39:04 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:16:44.357 [2024-12-06 21:39:04.775794] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:44.357 [2024-12-06 21:39:04.776285] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x516000007580 00:16:44.357 [2024-12-06 21:39:04.776314] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:16:44.357 [2024-12-06 21:39:04.776454] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000005790 00:16:44.357 [2024-12-06 21:39:04.776936] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x516000007580 00:16:44.357 [2024-12-06 21:39:04.776953] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x516000007580 00:16:44.357 [2024-12-06 21:39:04.777124] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:44.357 BaseBdev3 00:16:44.357 21:39:04 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3 00:16:44.357 21:39:04 -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:16:44.357 21:39:04 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:16:44.357 21:39:04 -- common/autotest_common.sh@899 -- # local i 00:16:44.357 21:39:04 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:16:44.357 21:39:04 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:16:44.357 21:39:04 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:44.616 21:39:05 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:16:44.880 [ 00:16:44.880 { 00:16:44.880 "name": "BaseBdev3", 00:16:44.880 "aliases": [ 00:16:44.880 "9c5bd30b-031b-4a76-809c-5e550d21bb94" 00:16:44.880 ], 00:16:44.880 "product_name": "Malloc disk", 00:16:44.880 "block_size": 512, 00:16:44.880 "num_blocks": 65536, 00:16:44.880 "uuid": "9c5bd30b-031b-4a76-809c-5e550d21bb94", 00:16:44.880 "assigned_rate_limits": { 00:16:44.880 "rw_ios_per_sec": 0, 00:16:44.880 "rw_mbytes_per_sec": 0, 00:16:44.880 "r_mbytes_per_sec": 0, 00:16:44.880 "w_mbytes_per_sec": 0 00:16:44.880 }, 00:16:44.880 "claimed": true, 00:16:44.880 "claim_type": "exclusive_write", 00:16:44.880 "zoned": false, 00:16:44.880 "supported_io_types": { 00:16:44.880 "read": true, 00:16:44.880 "write": true, 00:16:44.880 "unmap": true, 00:16:44.880 "write_zeroes": true, 00:16:44.880 "flush": true, 00:16:44.880 "reset": true, 00:16:44.880 "compare": false, 00:16:44.880 "compare_and_write": false, 00:16:44.880 "abort": true, 00:16:44.880 "nvme_admin": false, 00:16:44.880 "nvme_io": false 00:16:44.880 }, 00:16:44.880 "memory_domains": [ 00:16:44.880 { 00:16:44.880 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:44.880 "dma_device_type": 2 00:16:44.880 } 00:16:44.880 ], 00:16:44.880 "driver_specific": {} 00:16:44.880 } 00:16:44.880 ] 00:16:44.880 21:39:05 -- common/autotest_common.sh@905 -- # return 0 00:16:44.880 21:39:05 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:16:44.880 21:39:05 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:16:44.880 21:39:05 -- bdev/bdev_raid.sh@259 -- # 
verify_raid_bdev_state Existed_Raid online raid1 0 3 00:16:44.880 21:39:05 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:44.880 21:39:05 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:16:44.880 21:39:05 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:16:44.880 21:39:05 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:16:44.880 21:39:05 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:16:44.880 21:39:05 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:44.880 21:39:05 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:44.880 21:39:05 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:44.880 21:39:05 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:44.880 21:39:05 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:44.880 21:39:05 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:45.138 21:39:05 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:45.138 "name": "Existed_Raid", 00:16:45.138 "uuid": "6815b749-04d9-443a-aacf-a67d4aa37a25", 00:16:45.138 "strip_size_kb": 0, 00:16:45.138 "state": "online", 00:16:45.138 "raid_level": "raid1", 00:16:45.138 "superblock": true, 00:16:45.138 "num_base_bdevs": 3, 00:16:45.138 "num_base_bdevs_discovered": 3, 00:16:45.138 "num_base_bdevs_operational": 3, 00:16:45.138 "base_bdevs_list": [ 00:16:45.138 { 00:16:45.138 "name": "BaseBdev1", 00:16:45.138 "uuid": "9b5cf949-d4b4-4ba4-90fb-9b4a9b109b61", 00:16:45.138 "is_configured": true, 00:16:45.138 "data_offset": 2048, 00:16:45.138 "data_size": 63488 00:16:45.138 }, 00:16:45.138 { 00:16:45.138 "name": "BaseBdev2", 00:16:45.138 "uuid": "e13ab8e0-2786-4d9b-9dd1-e733ec4d2292", 00:16:45.138 "is_configured": true, 00:16:45.138 "data_offset": 2048, 00:16:45.138 "data_size": 63488 00:16:45.138 }, 00:16:45.138 { 00:16:45.138 "name": "BaseBdev3", 00:16:45.138 "uuid": "9c5bd30b-031b-4a76-809c-5e550d21bb94", 00:16:45.138 "is_configured": true, 00:16:45.138 "data_offset": 2048, 00:16:45.138 "data_size": 63488 00:16:45.138 } 00:16:45.138 ] 00:16:45.138 }' 00:16:45.138 21:39:05 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:45.138 21:39:05 -- common/autotest_common.sh@10 -- # set +x 00:16:45.396 21:39:05 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:16:45.654 [2024-12-06 21:39:06.024326] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:45.654 21:39:06 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:16:45.654 21:39:06 -- bdev/bdev_raid.sh@264 -- # has_redundancy raid1 00:16:45.654 21:39:06 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:16:45.654 21:39:06 -- bdev/bdev_raid.sh@196 -- # return 0 00:16:45.654 21:39:06 -- bdev/bdev_raid.sh@267 -- # expected_state=online 00:16:45.654 21:39:06 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:16:45.654 21:39:06 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:45.654 21:39:06 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:16:45.654 21:39:06 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:16:45.654 21:39:06 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:16:45.654 21:39:06 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:16:45.654 21:39:06 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:45.654 21:39:06 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 
00:16:45.654 21:39:06 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:45.654 21:39:06 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:45.654 21:39:06 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:45.654 21:39:06 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:45.913 21:39:06 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:45.913 "name": "Existed_Raid", 00:16:45.913 "uuid": "6815b749-04d9-443a-aacf-a67d4aa37a25", 00:16:45.913 "strip_size_kb": 0, 00:16:45.913 "state": "online", 00:16:45.913 "raid_level": "raid1", 00:16:45.913 "superblock": true, 00:16:45.913 "num_base_bdevs": 3, 00:16:45.913 "num_base_bdevs_discovered": 2, 00:16:45.913 "num_base_bdevs_operational": 2, 00:16:45.913 "base_bdevs_list": [ 00:16:45.913 { 00:16:45.913 "name": null, 00:16:45.913 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:45.913 "is_configured": false, 00:16:45.913 "data_offset": 2048, 00:16:45.913 "data_size": 63488 00:16:45.913 }, 00:16:45.913 { 00:16:45.913 "name": "BaseBdev2", 00:16:45.913 "uuid": "e13ab8e0-2786-4d9b-9dd1-e733ec4d2292", 00:16:45.913 "is_configured": true, 00:16:45.913 "data_offset": 2048, 00:16:45.913 "data_size": 63488 00:16:45.913 }, 00:16:45.913 { 00:16:45.913 "name": "BaseBdev3", 00:16:45.913 "uuid": "9c5bd30b-031b-4a76-809c-5e550d21bb94", 00:16:45.913 "is_configured": true, 00:16:45.913 "data_offset": 2048, 00:16:45.913 "data_size": 63488 00:16:45.913 } 00:16:45.913 ] 00:16:45.913 }' 00:16:45.913 21:39:06 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:45.913 21:39:06 -- common/autotest_common.sh@10 -- # set +x 00:16:46.171 21:39:06 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:16:46.171 21:39:06 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:16:46.171 21:39:06 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:46.171 21:39:06 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:16:46.430 21:39:06 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:16:46.430 21:39:06 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:46.430 21:39:06 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:16:46.689 [2024-12-06 21:39:07.137340] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:16:46.948 21:39:07 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:16:46.948 21:39:07 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:16:46.948 21:39:07 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:46.948 21:39:07 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:16:47.208 21:39:07 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:16:47.208 21:39:07 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:47.208 21:39:07 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:16:47.208 [2024-12-06 21:39:07.700140] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:16:47.208 [2024-12-06 21:39:07.700195] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:47.208 [2024-12-06 21:39:07.700273] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:47.468 [2024-12-06 21:39:07.771833] 
bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:47.468 [2024-12-06 21:39:07.771873] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000007580 name Existed_Raid, state offline 00:16:47.468 21:39:07 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:16:47.468 21:39:07 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:16:47.468 21:39:07 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:47.468 21:39:07 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:16:47.728 21:39:08 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:16:47.728 21:39:08 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:16:47.728 21:39:08 -- bdev/bdev_raid.sh@287 -- # killprocess 73499 00:16:47.728 21:39:08 -- common/autotest_common.sh@936 -- # '[' -z 73499 ']' 00:16:47.728 21:39:08 -- common/autotest_common.sh@940 -- # kill -0 73499 00:16:47.728 21:39:08 -- common/autotest_common.sh@941 -- # uname 00:16:47.728 21:39:08 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:47.728 21:39:08 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 73499 00:16:47.728 killing process with pid 73499 00:16:47.728 21:39:08 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:16:47.728 21:39:08 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:16:47.728 21:39:08 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 73499' 00:16:47.728 21:39:08 -- common/autotest_common.sh@955 -- # kill 73499 00:16:47.728 [2024-12-06 21:39:08.042577] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:47.728 21:39:08 -- common/autotest_common.sh@960 -- # wait 73499 00:16:47.728 [2024-12-06 21:39:08.042684] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:48.667 ************************************ 00:16:48.667 END TEST raid_state_function_test_sb 00:16:48.667 ************************************ 00:16:48.667 21:39:09 -- bdev/bdev_raid.sh@289 -- # return 0 00:16:48.667 00:16:48.667 real 0m11.296s 00:16:48.667 user 0m18.916s 00:16:48.667 sys 0m1.614s 00:16:48.667 21:39:09 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:16:48.667 21:39:09 -- common/autotest_common.sh@10 -- # set +x 00:16:48.667 21:39:09 -- bdev/bdev_raid.sh@729 -- # run_test raid_superblock_test raid_superblock_test raid1 3 00:16:48.667 21:39:09 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:16:48.667 21:39:09 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:48.667 21:39:09 -- common/autotest_common.sh@10 -- # set +x 00:16:48.667 ************************************ 00:16:48.667 START TEST raid_superblock_test 00:16:48.667 ************************************ 00:16:48.667 21:39:09 -- common/autotest_common.sh@1114 -- # raid_superblock_test raid1 3 00:16:48.667 21:39:09 -- bdev/bdev_raid.sh@338 -- # local raid_level=raid1 00:16:48.667 21:39:09 -- bdev/bdev_raid.sh@339 -- # local num_base_bdevs=3 00:16:48.667 21:39:09 -- bdev/bdev_raid.sh@340 -- # base_bdevs_malloc=() 00:16:48.667 21:39:09 -- bdev/bdev_raid.sh@340 -- # local base_bdevs_malloc 00:16:48.667 21:39:09 -- bdev/bdev_raid.sh@341 -- # base_bdevs_pt=() 00:16:48.667 21:39:09 -- bdev/bdev_raid.sh@341 -- # local base_bdevs_pt 00:16:48.667 21:39:09 -- bdev/bdev_raid.sh@342 -- # base_bdevs_pt_uuid=() 00:16:48.667 21:39:09 -- bdev/bdev_raid.sh@342 -- # local base_bdevs_pt_uuid 00:16:48.667 21:39:09 -- bdev/bdev_raid.sh@343 -- # 
local raid_bdev_name=raid_bdev1 00:16:48.667 21:39:09 -- bdev/bdev_raid.sh@344 -- # local strip_size 00:16:48.667 21:39:09 -- bdev/bdev_raid.sh@345 -- # local strip_size_create_arg 00:16:48.667 21:39:09 -- bdev/bdev_raid.sh@346 -- # local raid_bdev_uuid 00:16:48.667 21:39:09 -- bdev/bdev_raid.sh@347 -- # local raid_bdev 00:16:48.668 21:39:09 -- bdev/bdev_raid.sh@349 -- # '[' raid1 '!=' raid1 ']' 00:16:48.668 21:39:09 -- bdev/bdev_raid.sh@353 -- # strip_size=0 00:16:48.668 21:39:09 -- bdev/bdev_raid.sh@357 -- # raid_pid=73854 00:16:48.668 21:39:09 -- bdev/bdev_raid.sh@356 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:16:48.668 21:39:09 -- bdev/bdev_raid.sh@358 -- # waitforlisten 73854 /var/tmp/spdk-raid.sock 00:16:48.668 21:39:09 -- common/autotest_common.sh@829 -- # '[' -z 73854 ']' 00:16:48.668 21:39:09 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:16:48.668 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:16:48.668 21:39:09 -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:48.668 21:39:09 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:16:48.668 21:39:09 -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:48.668 21:39:09 -- common/autotest_common.sh@10 -- # set +x 00:16:48.927 [2024-12-06 21:39:09.190129] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:16:48.927 [2024-12-06 21:39:09.190275] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73854 ] 00:16:48.927 [2024-12-06 21:39:09.356421] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:49.186 [2024-12-06 21:39:09.522535] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:49.186 [2024-12-06 21:39:09.676332] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:49.755 21:39:10 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:49.755 21:39:10 -- common/autotest_common.sh@862 -- # return 0 00:16:49.755 21:39:10 -- bdev/bdev_raid.sh@361 -- # (( i = 1 )) 00:16:49.755 21:39:10 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:16:49.755 21:39:10 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc1 00:16:49.755 21:39:10 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt1 00:16:49.755 21:39:10 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:16:49.755 21:39:10 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:16:49.755 21:39:10 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:16:49.755 21:39:10 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:16:49.755 21:39:10 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:16:50.014 malloc1 00:16:50.014 21:39:10 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:16:50.014 [2024-12-06 21:39:10.502160] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:16:50.014 [2024-12-06 21:39:10.502250] vbdev_passthru.c: 636:vbdev_passthru_register: 
*NOTICE*: base bdev opened 00:16:50.014 [2024-12-06 21:39:10.502289] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000006980 00:16:50.014 [2024-12-06 21:39:10.502305] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:50.014 [2024-12-06 21:39:10.505048] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:50.014 [2024-12-06 21:39:10.505238] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:16:50.014 pt1 00:16:50.273 21:39:10 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:16:50.273 21:39:10 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:16:50.273 21:39:10 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc2 00:16:50.273 21:39:10 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt2 00:16:50.273 21:39:10 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:16:50.273 21:39:10 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:16:50.273 21:39:10 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:16:50.273 21:39:10 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:16:50.273 21:39:10 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:16:50.273 malloc2 00:16:50.273 21:39:10 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:50.531 [2024-12-06 21:39:10.971228] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:50.531 [2024-12-06 21:39:10.971531] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:50.531 [2024-12-06 21:39:10.971577] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000007580 00:16:50.531 [2024-12-06 21:39:10.971594] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:50.531 [2024-12-06 21:39:10.974026] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:50.531 [2024-12-06 21:39:10.974065] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:50.531 pt2 00:16:50.531 21:39:10 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:16:50.531 21:39:10 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:16:50.531 21:39:10 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc3 00:16:50.531 21:39:10 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt3 00:16:50.531 21:39:10 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:16:50.531 21:39:10 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:16:50.531 21:39:10 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:16:50.531 21:39:10 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:16:50.531 21:39:10 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:16:50.790 malloc3 00:16:50.790 21:39:11 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:16:51.049 [2024-12-06 21:39:11.450223] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:16:51.049 [2024-12-06 21:39:11.450324] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: 
base bdev opened 00:16:51.049 [2024-12-06 21:39:11.450359] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000008180 00:16:51.049 [2024-12-06 21:39:11.450375] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:51.049 [2024-12-06 21:39:11.452898] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:51.049 pt3 00:16:51.049 [2024-12-06 21:39:11.453104] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:16:51.049 21:39:11 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:16:51.049 21:39:11 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:16:51.049 21:39:11 -- bdev/bdev_raid.sh@375 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'pt1 pt2 pt3' -n raid_bdev1 -s 00:16:51.308 [2024-12-06 21:39:11.666332] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:16:51.308 [2024-12-06 21:39:11.668747] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:51.308 [2024-12-06 21:39:11.668844] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:16:51.308 [2024-12-06 21:39:11.669085] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x516000008780 00:16:51.308 [2024-12-06 21:39:11.669113] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:16:51.308 [2024-12-06 21:39:11.669241] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d0000056c0 00:16:51.308 [2024-12-06 21:39:11.669661] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x516000008780 00:16:51.308 [2024-12-06 21:39:11.669687] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x516000008780 00:16:51.308 [2024-12-06 21:39:11.669881] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:51.308 21:39:11 -- bdev/bdev_raid.sh@376 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:16:51.308 21:39:11 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:16:51.308 21:39:11 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:16:51.308 21:39:11 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:16:51.308 21:39:11 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:16:51.308 21:39:11 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:16:51.308 21:39:11 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:51.308 21:39:11 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:51.308 21:39:11 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:51.308 21:39:11 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:51.308 21:39:11 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:51.308 21:39:11 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:51.567 21:39:11 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:51.567 "name": "raid_bdev1", 00:16:51.567 "uuid": "b7195ff4-84db-4113-b6ca-85fb9a6f90bb", 00:16:51.567 "strip_size_kb": 0, 00:16:51.567 "state": "online", 00:16:51.567 "raid_level": "raid1", 00:16:51.567 "superblock": true, 00:16:51.567 "num_base_bdevs": 3, 00:16:51.567 "num_base_bdevs_discovered": 3, 00:16:51.567 "num_base_bdevs_operational": 3, 00:16:51.567 "base_bdevs_list": [ 00:16:51.567 { 00:16:51.567 "name": "pt1", 00:16:51.567 "uuid": 
"f6c456ea-9210-5659-8841-2ddec934a815", 00:16:51.567 "is_configured": true, 00:16:51.567 "data_offset": 2048, 00:16:51.567 "data_size": 63488 00:16:51.567 }, 00:16:51.567 { 00:16:51.567 "name": "pt2", 00:16:51.567 "uuid": "47526e6c-5fe6-5669-bff5-667142377bf2", 00:16:51.567 "is_configured": true, 00:16:51.567 "data_offset": 2048, 00:16:51.567 "data_size": 63488 00:16:51.567 }, 00:16:51.567 { 00:16:51.567 "name": "pt3", 00:16:51.567 "uuid": "a3129928-7f79-515b-9add-cb4aac5e6b3e", 00:16:51.567 "is_configured": true, 00:16:51.567 "data_offset": 2048, 00:16:51.567 "data_size": 63488 00:16:51.567 } 00:16:51.567 ] 00:16:51.567 }' 00:16:51.567 21:39:11 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:51.567 21:39:11 -- common/autotest_common.sh@10 -- # set +x 00:16:51.855 21:39:12 -- bdev/bdev_raid.sh@379 -- # jq -r '.[] | .uuid' 00:16:51.855 21:39:12 -- bdev/bdev_raid.sh@379 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:16:52.141 [2024-12-06 21:39:12.450724] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:52.141 21:39:12 -- bdev/bdev_raid.sh@379 -- # raid_bdev_uuid=b7195ff4-84db-4113-b6ca-85fb9a6f90bb 00:16:52.141 21:39:12 -- bdev/bdev_raid.sh@380 -- # '[' -z b7195ff4-84db-4113-b6ca-85fb9a6f90bb ']' 00:16:52.141 21:39:12 -- bdev/bdev_raid.sh@385 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:16:52.399 [2024-12-06 21:39:12.698544] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:52.399 [2024-12-06 21:39:12.698582] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:52.399 [2024-12-06 21:39:12.698687] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:52.399 [2024-12-06 21:39:12.698772] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:52.399 [2024-12-06 21:39:12.698791] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000008780 name raid_bdev1, state offline 00:16:52.399 21:39:12 -- bdev/bdev_raid.sh@386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:52.399 21:39:12 -- bdev/bdev_raid.sh@386 -- # jq -r '.[]' 00:16:52.657 21:39:12 -- bdev/bdev_raid.sh@386 -- # raid_bdev= 00:16:52.657 21:39:12 -- bdev/bdev_raid.sh@387 -- # '[' -n '' ']' 00:16:52.657 21:39:12 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:16:52.657 21:39:12 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:16:52.657 21:39:13 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:16:52.657 21:39:13 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:16:52.916 21:39:13 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:16:52.916 21:39:13 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:16:53.175 21:39:13 -- bdev/bdev_raid.sh@395 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:16:53.175 21:39:13 -- bdev/bdev_raid.sh@395 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:16:53.433 21:39:13 -- bdev/bdev_raid.sh@395 -- # '[' false == true ']' 00:16:53.433 21:39:13 -- bdev/bdev_raid.sh@401 -- # NOT 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:16:53.433 21:39:13 -- common/autotest_common.sh@650 -- # local es=0 00:16:53.433 21:39:13 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:16:53.433 21:39:13 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:53.433 21:39:13 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:53.433 21:39:13 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:53.433 21:39:13 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:53.433 21:39:13 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:53.433 21:39:13 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:53.433 21:39:13 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:53.433 21:39:13 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:16:53.433 21:39:13 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:16:53.693 [2024-12-06 21:39:14.010834] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:16:53.693 [2024-12-06 21:39:14.012929] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:16:53.693 [2024-12-06 21:39:14.013003] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:16:53.693 [2024-12-06 21:39:14.013061] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc1 00:16:53.693 [2024-12-06 21:39:14.013117] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc2 00:16:53.693 [2024-12-06 21:39:14.013147] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc3 00:16:53.693 [2024-12-06 21:39:14.013166] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:53.693 [2024-12-06 21:39:14.013179] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000008d80 name raid_bdev1, state configuring 00:16:53.693 request: 00:16:53.693 { 00:16:53.693 "name": "raid_bdev1", 00:16:53.693 "raid_level": "raid1", 00:16:53.693 "base_bdevs": [ 00:16:53.693 "malloc1", 00:16:53.693 "malloc2", 00:16:53.693 "malloc3" 00:16:53.693 ], 00:16:53.693 "superblock": false, 00:16:53.693 "method": "bdev_raid_create", 00:16:53.693 "req_id": 1 00:16:53.693 } 00:16:53.693 Got JSON-RPC error response 00:16:53.693 response: 00:16:53.693 { 00:16:53.693 "code": -17, 00:16:53.693 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:16:53.693 } 00:16:53.693 21:39:14 -- common/autotest_common.sh@653 -- # es=1 00:16:53.693 21:39:14 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:16:53.693 21:39:14 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:16:53.693 21:39:14 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:16:53.693 21:39:14 -- bdev/bdev_raid.sh@403 -- # jq -r '.[]' 00:16:53.693 21:39:14 -- bdev/bdev_raid.sh@403 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:53.952 21:39:14 -- bdev/bdev_raid.sh@403 -- # raid_bdev= 00:16:53.952 21:39:14 -- bdev/bdev_raid.sh@404 -- # '[' -n '' ']' 00:16:53.952 21:39:14 -- bdev/bdev_raid.sh@409 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:16:53.952 [2024-12-06 21:39:14.446913] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:16:53.952 [2024-12-06 21:39:14.447014] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:53.952 [2024-12-06 21:39:14.447040] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000009380 00:16:53.952 [2024-12-06 21:39:14.447058] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:53.952 [2024-12-06 21:39:14.449610] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:53.952 [2024-12-06 21:39:14.449652] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:16:54.212 [2024-12-06 21:39:14.449765] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1 00:16:54.212 [2024-12-06 21:39:14.449888] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:16:54.212 pt1 00:16:54.212 21:39:14 -- bdev/bdev_raid.sh@412 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:16:54.212 21:39:14 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:16:54.212 21:39:14 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:16:54.212 21:39:14 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:16:54.212 21:39:14 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:16:54.212 21:39:14 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:16:54.212 21:39:14 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:54.212 21:39:14 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:54.212 21:39:14 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:54.212 21:39:14 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:54.212 21:39:14 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:54.212 21:39:14 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:54.212 21:39:14 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:54.212 "name": "raid_bdev1", 00:16:54.212 "uuid": "b7195ff4-84db-4113-b6ca-85fb9a6f90bb", 00:16:54.212 "strip_size_kb": 0, 00:16:54.212 "state": "configuring", 00:16:54.212 "raid_level": "raid1", 00:16:54.212 "superblock": true, 00:16:54.212 "num_base_bdevs": 3, 00:16:54.212 "num_base_bdevs_discovered": 1, 00:16:54.212 "num_base_bdevs_operational": 3, 00:16:54.212 "base_bdevs_list": [ 00:16:54.212 { 00:16:54.212 "name": "pt1", 00:16:54.212 "uuid": "f6c456ea-9210-5659-8841-2ddec934a815", 00:16:54.212 "is_configured": true, 00:16:54.212 "data_offset": 2048, 00:16:54.212 "data_size": 63488 00:16:54.212 }, 00:16:54.212 { 00:16:54.212 "name": null, 00:16:54.212 "uuid": "47526e6c-5fe6-5669-bff5-667142377bf2", 00:16:54.212 "is_configured": false, 00:16:54.212 "data_offset": 2048, 00:16:54.212 "data_size": 63488 00:16:54.212 }, 00:16:54.212 { 00:16:54.212 "name": null, 00:16:54.212 "uuid": "a3129928-7f79-515b-9add-cb4aac5e6b3e", 00:16:54.212 "is_configured": false, 00:16:54.212 "data_offset": 2048, 00:16:54.212 "data_size": 63488 00:16:54.212 } 
00:16:54.212 ] 00:16:54.212 }' 00:16:54.212 21:39:14 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:54.212 21:39:14 -- common/autotest_common.sh@10 -- # set +x 00:16:54.472 21:39:14 -- bdev/bdev_raid.sh@414 -- # '[' 3 -gt 2 ']' 00:16:54.472 21:39:14 -- bdev/bdev_raid.sh@416 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:54.731 [2024-12-06 21:39:15.155137] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:54.731 [2024-12-06 21:39:15.155205] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:54.731 [2024-12-06 21:39:15.155233] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000009c80 00:16:54.731 [2024-12-06 21:39:15.155251] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:54.731 [2024-12-06 21:39:15.155776] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:54.731 [2024-12-06 21:39:15.155805] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:54.731 [2024-12-06 21:39:15.155928] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:16:54.731 [2024-12-06 21:39:15.155966] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:54.731 pt2 00:16:54.731 21:39:15 -- bdev/bdev_raid.sh@417 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:16:54.989 [2024-12-06 21:39:15.407206] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:16:54.989 21:39:15 -- bdev/bdev_raid.sh@418 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:16:54.989 21:39:15 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:16:54.989 21:39:15 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:16:54.989 21:39:15 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:16:54.989 21:39:15 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:16:54.989 21:39:15 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:16:54.989 21:39:15 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:54.989 21:39:15 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:54.989 21:39:15 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:54.989 21:39:15 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:54.989 21:39:15 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:54.989 21:39:15 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:55.246 21:39:15 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:55.246 "name": "raid_bdev1", 00:16:55.246 "uuid": "b7195ff4-84db-4113-b6ca-85fb9a6f90bb", 00:16:55.246 "strip_size_kb": 0, 00:16:55.246 "state": "configuring", 00:16:55.246 "raid_level": "raid1", 00:16:55.246 "superblock": true, 00:16:55.246 "num_base_bdevs": 3, 00:16:55.246 "num_base_bdevs_discovered": 1, 00:16:55.246 "num_base_bdevs_operational": 3, 00:16:55.246 "base_bdevs_list": [ 00:16:55.246 { 00:16:55.246 "name": "pt1", 00:16:55.246 "uuid": "f6c456ea-9210-5659-8841-2ddec934a815", 00:16:55.246 "is_configured": true, 00:16:55.246 "data_offset": 2048, 00:16:55.246 "data_size": 63488 00:16:55.246 }, 00:16:55.246 { 00:16:55.246 "name": null, 00:16:55.246 "uuid": "47526e6c-5fe6-5669-bff5-667142377bf2", 00:16:55.246 "is_configured": false, 
00:16:55.246 "data_offset": 2048, 00:16:55.246 "data_size": 63488 00:16:55.246 }, 00:16:55.246 { 00:16:55.246 "name": null, 00:16:55.246 "uuid": "a3129928-7f79-515b-9add-cb4aac5e6b3e", 00:16:55.246 "is_configured": false, 00:16:55.246 "data_offset": 2048, 00:16:55.246 "data_size": 63488 00:16:55.246 } 00:16:55.246 ] 00:16:55.246 }' 00:16:55.246 21:39:15 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:55.246 21:39:15 -- common/autotest_common.sh@10 -- # set +x 00:16:55.504 21:39:15 -- bdev/bdev_raid.sh@422 -- # (( i = 1 )) 00:16:55.504 21:39:15 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:16:55.504 21:39:15 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:55.762 [2024-12-06 21:39:16.083372] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:55.762 [2024-12-06 21:39:16.083468] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:55.762 [2024-12-06 21:39:16.083500] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000009f80 00:16:55.762 [2024-12-06 21:39:16.083514] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:55.762 [2024-12-06 21:39:16.083956] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:55.762 [2024-12-06 21:39:16.083979] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:55.762 [2024-12-06 21:39:16.084072] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:16:55.762 [2024-12-06 21:39:16.084098] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:55.762 pt2 00:16:55.762 21:39:16 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:16:55.762 21:39:16 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:16:55.762 21:39:16 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:16:56.020 [2024-12-06 21:39:16.331448] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:16:56.020 [2024-12-06 21:39:16.331533] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:56.020 [2024-12-06 21:39:16.331562] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x51600000a280 00:16:56.020 [2024-12-06 21:39:16.331576] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:56.020 [2024-12-06 21:39:16.332071] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:56.020 [2024-12-06 21:39:16.332101] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:16:56.020 [2024-12-06 21:39:16.332211] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3 00:16:56.020 [2024-12-06 21:39:16.332237] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:16:56.020 [2024-12-06 21:39:16.332426] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x516000009980 00:16:56.020 [2024-12-06 21:39:16.332443] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:16:56.020 [2024-12-06 21:39:16.332586] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000005790 00:16:56.020 [2024-12-06 21:39:16.332970] 
bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x516000009980 00:16:56.020 [2024-12-06 21:39:16.332988] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x516000009980 00:16:56.020 [2024-12-06 21:39:16.333166] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:56.020 pt3 00:16:56.020 21:39:16 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:16:56.020 21:39:16 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:16:56.020 21:39:16 -- bdev/bdev_raid.sh@427 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:16:56.020 21:39:16 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:16:56.020 21:39:16 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:16:56.020 21:39:16 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:16:56.020 21:39:16 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:16:56.020 21:39:16 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:16:56.020 21:39:16 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:56.020 21:39:16 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:56.020 21:39:16 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:56.020 21:39:16 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:56.020 21:39:16 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:56.020 21:39:16 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:56.279 21:39:16 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:56.279 "name": "raid_bdev1", 00:16:56.279 "uuid": "b7195ff4-84db-4113-b6ca-85fb9a6f90bb", 00:16:56.279 "strip_size_kb": 0, 00:16:56.279 "state": "online", 00:16:56.279 "raid_level": "raid1", 00:16:56.279 "superblock": true, 00:16:56.279 "num_base_bdevs": 3, 00:16:56.279 "num_base_bdevs_discovered": 3, 00:16:56.279 "num_base_bdevs_operational": 3, 00:16:56.279 "base_bdevs_list": [ 00:16:56.279 { 00:16:56.279 "name": "pt1", 00:16:56.279 "uuid": "f6c456ea-9210-5659-8841-2ddec934a815", 00:16:56.279 "is_configured": true, 00:16:56.279 "data_offset": 2048, 00:16:56.279 "data_size": 63488 00:16:56.279 }, 00:16:56.279 { 00:16:56.279 "name": "pt2", 00:16:56.279 "uuid": "47526e6c-5fe6-5669-bff5-667142377bf2", 00:16:56.279 "is_configured": true, 00:16:56.279 "data_offset": 2048, 00:16:56.279 "data_size": 63488 00:16:56.279 }, 00:16:56.279 { 00:16:56.279 "name": "pt3", 00:16:56.279 "uuid": "a3129928-7f79-515b-9add-cb4aac5e6b3e", 00:16:56.279 "is_configured": true, 00:16:56.279 "data_offset": 2048, 00:16:56.279 "data_size": 63488 00:16:56.279 } 00:16:56.279 ] 00:16:56.279 }' 00:16:56.279 21:39:16 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:56.279 21:39:16 -- common/autotest_common.sh@10 -- # set +x 00:16:56.538 21:39:16 -- bdev/bdev_raid.sh@430 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:16:56.538 21:39:16 -- bdev/bdev_raid.sh@430 -- # jq -r '.[] | .uuid' 00:16:56.798 [2024-12-06 21:39:17.079921] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:56.798 21:39:17 -- bdev/bdev_raid.sh@430 -- # '[' b7195ff4-84db-4113-b6ca-85fb9a6f90bb '!=' b7195ff4-84db-4113-b6ca-85fb9a6f90bb ']' 00:16:56.798 21:39:17 -- bdev/bdev_raid.sh@434 -- # has_redundancy raid1 00:16:56.798 21:39:17 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:16:56.798 21:39:17 -- bdev/bdev_raid.sh@196 -- # return 0 00:16:56.798 21:39:17 -- 
bdev/bdev_raid.sh@436 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:16:56.798 [2024-12-06 21:39:17.287758] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:16:57.057 21:39:17 -- bdev/bdev_raid.sh@439 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:16:57.057 21:39:17 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:16:57.057 21:39:17 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:16:57.057 21:39:17 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:16:57.057 21:39:17 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:16:57.057 21:39:17 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:16:57.057 21:39:17 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:57.057 21:39:17 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:57.057 21:39:17 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:57.057 21:39:17 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:57.057 21:39:17 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:57.057 21:39:17 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:57.057 21:39:17 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:57.057 "name": "raid_bdev1", 00:16:57.057 "uuid": "b7195ff4-84db-4113-b6ca-85fb9a6f90bb", 00:16:57.057 "strip_size_kb": 0, 00:16:57.057 "state": "online", 00:16:57.057 "raid_level": "raid1", 00:16:57.058 "superblock": true, 00:16:57.058 "num_base_bdevs": 3, 00:16:57.058 "num_base_bdevs_discovered": 2, 00:16:57.058 "num_base_bdevs_operational": 2, 00:16:57.058 "base_bdevs_list": [ 00:16:57.058 { 00:16:57.058 "name": null, 00:16:57.058 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:57.058 "is_configured": false, 00:16:57.058 "data_offset": 2048, 00:16:57.058 "data_size": 63488 00:16:57.058 }, 00:16:57.058 { 00:16:57.058 "name": "pt2", 00:16:57.058 "uuid": "47526e6c-5fe6-5669-bff5-667142377bf2", 00:16:57.058 "is_configured": true, 00:16:57.058 "data_offset": 2048, 00:16:57.058 "data_size": 63488 00:16:57.058 }, 00:16:57.058 { 00:16:57.058 "name": "pt3", 00:16:57.058 "uuid": "a3129928-7f79-515b-9add-cb4aac5e6b3e", 00:16:57.058 "is_configured": true, 00:16:57.058 "data_offset": 2048, 00:16:57.058 "data_size": 63488 00:16:57.058 } 00:16:57.058 ] 00:16:57.058 }' 00:16:57.058 21:39:17 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:57.058 21:39:17 -- common/autotest_common.sh@10 -- # set +x 00:16:57.317 21:39:17 -- bdev/bdev_raid.sh@442 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:16:57.576 [2024-12-06 21:39:18.019914] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:57.576 [2024-12-06 21:39:18.019952] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:57.576 [2024-12-06 21:39:18.020046] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:57.576 [2024-12-06 21:39:18.020122] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:57.576 [2024-12-06 21:39:18.020155] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000009980 name raid_bdev1, state offline 00:16:57.576 21:39:18 -- bdev/bdev_raid.sh@443 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:57.576 21:39:18 -- 
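#
# (Note on the sequence just above: because the array is raid1, deleting
# base bdev pt1 only degrades it — raid_bdev1 stays "online" while
# num_base_bdevs_discovered drops from 3 to 2 — and it goes offline only
# once bdev_raid_delete removes the raid bdev itself. The same check, as a
# sketch with the rpc() shorthand:)
#
#   rpc bdev_passthru_delete pt1
#   rpc bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1") | .state'   # still "online"
#   rpc bdev_raid_delete raid_bdev1
#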
bdev/bdev_raid.sh@443 -- # jq -r '.[]' 00:16:57.835 21:39:18 -- bdev/bdev_raid.sh@443 -- # raid_bdev= 00:16:57.836 21:39:18 -- bdev/bdev_raid.sh@444 -- # '[' -n '' ']' 00:16:57.836 21:39:18 -- bdev/bdev_raid.sh@449 -- # (( i = 1 )) 00:16:57.836 21:39:18 -- bdev/bdev_raid.sh@449 -- # (( i < num_base_bdevs )) 00:16:57.836 21:39:18 -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:16:58.095 21:39:18 -- bdev/bdev_raid.sh@449 -- # (( i++ )) 00:16:58.095 21:39:18 -- bdev/bdev_raid.sh@449 -- # (( i < num_base_bdevs )) 00:16:58.095 21:39:18 -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:16:58.354 21:39:18 -- bdev/bdev_raid.sh@449 -- # (( i++ )) 00:16:58.354 21:39:18 -- bdev/bdev_raid.sh@449 -- # (( i < num_base_bdevs )) 00:16:58.354 21:39:18 -- bdev/bdev_raid.sh@454 -- # (( i = 1 )) 00:16:58.354 21:39:18 -- bdev/bdev_raid.sh@454 -- # (( i < num_base_bdevs - 1 )) 00:16:58.354 21:39:18 -- bdev/bdev_raid.sh@455 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:58.614 [2024-12-06 21:39:18.860156] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:58.614 [2024-12-06 21:39:18.860303] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:58.614 [2024-12-06 21:39:18.860334] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x51600000a580 00:16:58.614 [2024-12-06 21:39:18.860355] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:58.614 [2024-12-06 21:39:18.862995] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:58.614 [2024-12-06 21:39:18.863069] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:58.614 [2024-12-06 21:39:18.863168] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:16:58.614 [2024-12-06 21:39:18.863240] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:58.614 pt2 00:16:58.614 21:39:18 -- bdev/bdev_raid.sh@458 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:16:58.614 21:39:18 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:16:58.614 21:39:18 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:16:58.614 21:39:18 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:16:58.614 21:39:18 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:16:58.614 21:39:18 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:16:58.614 21:39:18 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:58.614 21:39:18 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:58.614 21:39:18 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:58.614 21:39:18 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:58.614 21:39:18 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:58.614 21:39:18 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:58.873 21:39:19 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:58.873 "name": "raid_bdev1", 00:16:58.873 "uuid": "b7195ff4-84db-4113-b6ca-85fb9a6f90bb", 00:16:58.873 "strip_size_kb": 0, 00:16:58.873 "state": "configuring", 00:16:58.873 "raid_level": "raid1", 00:16:58.873 
"superblock": true, 00:16:58.873 "num_base_bdevs": 3, 00:16:58.873 "num_base_bdevs_discovered": 1, 00:16:58.873 "num_base_bdevs_operational": 2, 00:16:58.873 "base_bdevs_list": [ 00:16:58.873 { 00:16:58.873 "name": null, 00:16:58.873 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:58.873 "is_configured": false, 00:16:58.873 "data_offset": 2048, 00:16:58.873 "data_size": 63488 00:16:58.873 }, 00:16:58.873 { 00:16:58.873 "name": "pt2", 00:16:58.873 "uuid": "47526e6c-5fe6-5669-bff5-667142377bf2", 00:16:58.873 "is_configured": true, 00:16:58.873 "data_offset": 2048, 00:16:58.873 "data_size": 63488 00:16:58.873 }, 00:16:58.873 { 00:16:58.873 "name": null, 00:16:58.873 "uuid": "a3129928-7f79-515b-9add-cb4aac5e6b3e", 00:16:58.873 "is_configured": false, 00:16:58.873 "data_offset": 2048, 00:16:58.873 "data_size": 63488 00:16:58.873 } 00:16:58.873 ] 00:16:58.873 }' 00:16:58.874 21:39:19 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:58.874 21:39:19 -- common/autotest_common.sh@10 -- # set +x 00:16:59.133 21:39:19 -- bdev/bdev_raid.sh@454 -- # (( i++ )) 00:16:59.133 21:39:19 -- bdev/bdev_raid.sh@454 -- # (( i < num_base_bdevs - 1 )) 00:16:59.133 21:39:19 -- bdev/bdev_raid.sh@462 -- # i=2 00:16:59.133 21:39:19 -- bdev/bdev_raid.sh@463 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:16:59.133 [2024-12-06 21:39:19.596404] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:16:59.133 [2024-12-06 21:39:19.596498] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:59.133 [2024-12-06 21:39:19.596530] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x51600000ae80 00:16:59.133 [2024-12-06 21:39:19.596548] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:59.133 [2024-12-06 21:39:19.597018] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:59.133 [2024-12-06 21:39:19.597046] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:16:59.133 [2024-12-06 21:39:19.597156] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3 00:16:59.133 [2024-12-06 21:39:19.597188] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:16:59.133 [2024-12-06 21:39:19.597308] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x51600000ab80 00:16:59.133 [2024-12-06 21:39:19.597359] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:16:59.133 [2024-12-06 21:39:19.597460] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000005860 00:16:59.133 [2024-12-06 21:39:19.597859] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x51600000ab80 00:16:59.133 [2024-12-06 21:39:19.597889] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x51600000ab80 00:16:59.133 [2024-12-06 21:39:19.598044] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:59.133 pt3 00:16:59.133 21:39:19 -- bdev/bdev_raid.sh@466 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:16:59.133 21:39:19 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:16:59.133 21:39:19 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:16:59.133 21:39:19 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:16:59.133 21:39:19 -- 
bdev/bdev_raid.sh@120 -- # local strip_size=0 00:16:59.133 21:39:19 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:16:59.133 21:39:19 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:59.133 21:39:19 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:59.133 21:39:19 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:59.133 21:39:19 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:59.133 21:39:19 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:59.133 21:39:19 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:59.392 21:39:19 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:59.392 "name": "raid_bdev1", 00:16:59.392 "uuid": "b7195ff4-84db-4113-b6ca-85fb9a6f90bb", 00:16:59.392 "strip_size_kb": 0, 00:16:59.392 "state": "online", 00:16:59.392 "raid_level": "raid1", 00:16:59.392 "superblock": true, 00:16:59.392 "num_base_bdevs": 3, 00:16:59.392 "num_base_bdevs_discovered": 2, 00:16:59.392 "num_base_bdevs_operational": 2, 00:16:59.392 "base_bdevs_list": [ 00:16:59.392 { 00:16:59.392 "name": null, 00:16:59.393 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:59.393 "is_configured": false, 00:16:59.393 "data_offset": 2048, 00:16:59.393 "data_size": 63488 00:16:59.393 }, 00:16:59.393 { 00:16:59.393 "name": "pt2", 00:16:59.393 "uuid": "47526e6c-5fe6-5669-bff5-667142377bf2", 00:16:59.393 "is_configured": true, 00:16:59.393 "data_offset": 2048, 00:16:59.393 "data_size": 63488 00:16:59.393 }, 00:16:59.393 { 00:16:59.393 "name": "pt3", 00:16:59.393 "uuid": "a3129928-7f79-515b-9add-cb4aac5e6b3e", 00:16:59.393 "is_configured": true, 00:16:59.393 "data_offset": 2048, 00:16:59.393 "data_size": 63488 00:16:59.393 } 00:16:59.393 ] 00:16:59.393 }' 00:16:59.393 21:39:19 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:59.393 21:39:19 -- common/autotest_common.sh@10 -- # set +x 00:16:59.651 21:39:20 -- bdev/bdev_raid.sh@468 -- # '[' 3 -gt 2 ']' 00:16:59.651 21:39:20 -- bdev/bdev_raid.sh@470 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:16:59.909 [2024-12-06 21:39:20.364582] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:59.909 [2024-12-06 21:39:20.364830] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:59.909 [2024-12-06 21:39:20.364918] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:59.909 [2024-12-06 21:39:20.364990] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:59.909 [2024-12-06 21:39:20.365005] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x51600000ab80 name raid_bdev1, state offline 00:16:59.909 21:39:20 -- bdev/bdev_raid.sh@471 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:59.910 21:39:20 -- bdev/bdev_raid.sh@471 -- # jq -r '.[]' 00:17:00.168 21:39:20 -- bdev/bdev_raid.sh@471 -- # raid_bdev= 00:17:00.168 21:39:20 -- bdev/bdev_raid.sh@472 -- # '[' -n '' ']' 00:17:00.168 21:39:20 -- bdev/bdev_raid.sh@478 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:17:00.425 [2024-12-06 21:39:20.768707] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:17:00.425 [2024-12-06 21:39:20.768793] 
vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:00.425 [2024-12-06 21:39:20.768855] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x51600000b180 00:17:00.425 [2024-12-06 21:39:20.768869] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:00.425 [2024-12-06 21:39:20.771648] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:00.425 [2024-12-06 21:39:20.771862] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:17:00.425 [2024-12-06 21:39:20.772130] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1 00:17:00.425 [2024-12-06 21:39:20.772318] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:17:00.425 pt1 00:17:00.425 21:39:20 -- bdev/bdev_raid.sh@481 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:17:00.425 21:39:20 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:17:00.425 21:39:20 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:17:00.425 21:39:20 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:17:00.425 21:39:20 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:17:00.425 21:39:20 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:17:00.425 21:39:20 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:00.425 21:39:20 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:00.425 21:39:20 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:00.425 21:39:20 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:00.425 21:39:20 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:00.425 21:39:20 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:00.683 21:39:20 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:00.683 "name": "raid_bdev1", 00:17:00.683 "uuid": "b7195ff4-84db-4113-b6ca-85fb9a6f90bb", 00:17:00.683 "strip_size_kb": 0, 00:17:00.683 "state": "configuring", 00:17:00.683 "raid_level": "raid1", 00:17:00.683 "superblock": true, 00:17:00.683 "num_base_bdevs": 3, 00:17:00.683 "num_base_bdevs_discovered": 1, 00:17:00.683 "num_base_bdevs_operational": 3, 00:17:00.683 "base_bdevs_list": [ 00:17:00.683 { 00:17:00.683 "name": "pt1", 00:17:00.683 "uuid": "f6c456ea-9210-5659-8841-2ddec934a815", 00:17:00.683 "is_configured": true, 00:17:00.683 "data_offset": 2048, 00:17:00.683 "data_size": 63488 00:17:00.683 }, 00:17:00.683 { 00:17:00.683 "name": null, 00:17:00.683 "uuid": "47526e6c-5fe6-5669-bff5-667142377bf2", 00:17:00.683 "is_configured": false, 00:17:00.683 "data_offset": 2048, 00:17:00.683 "data_size": 63488 00:17:00.683 }, 00:17:00.683 { 00:17:00.683 "name": null, 00:17:00.683 "uuid": "a3129928-7f79-515b-9add-cb4aac5e6b3e", 00:17:00.683 "is_configured": false, 00:17:00.683 "data_offset": 2048, 00:17:00.683 "data_size": 63488 00:17:00.683 } 00:17:00.683 ] 00:17:00.683 }' 00:17:00.683 21:39:20 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:00.683 21:39:20 -- common/autotest_common.sh@10 -- # set +x 00:17:00.941 21:39:21 -- bdev/bdev_raid.sh@484 -- # (( i = 1 )) 00:17:00.941 21:39:21 -- bdev/bdev_raid.sh@484 -- # (( i < num_base_bdevs )) 00:17:00.941 21:39:21 -- bdev/bdev_raid.sh@485 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:17:01.198 21:39:21 -- bdev/bdev_raid.sh@484 -- # (( i++ )) 00:17:01.198 21:39:21 -- 
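#
# (raid_bdev1 reappeared above without any bdev_raid_create call:
# re-creating pt1 over malloc1 let the bdev_raid examine path — the
# raid_bdev_examine_load_sb_cb line in the trace — read the on-disk
# superblock and claim the bdev, leaving a half-assembled array in
# "configuring" state with 1 of 3 members discovered. Sketch:)
#
#   rpc bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
#   rpc bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1") | .state'   # "configuring"
#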
bdev/bdev_raid.sh@484 -- # (( i < num_base_bdevs )) 00:17:01.198 21:39:21 -- bdev/bdev_raid.sh@485 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:17:01.457 21:39:21 -- bdev/bdev_raid.sh@484 -- # (( i++ )) 00:17:01.457 21:39:21 -- bdev/bdev_raid.sh@484 -- # (( i < num_base_bdevs )) 00:17:01.457 21:39:21 -- bdev/bdev_raid.sh@489 -- # i=2 00:17:01.457 21:39:21 -- bdev/bdev_raid.sh@490 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:17:01.457 [2024-12-06 21:39:21.881131] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:17:01.457 [2024-12-06 21:39:21.881222] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:01.457 [2024-12-06 21:39:21.881252] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x51600000ba80 00:17:01.457 [2024-12-06 21:39:21.881267] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:01.457 [2024-12-06 21:39:21.881826] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:01.457 [2024-12-06 21:39:21.881867] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:17:01.457 [2024-12-06 21:39:21.882014] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3 00:17:01.457 [2024-12-06 21:39:21.882039] bdev_raid.c:3237:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt3 (4) greater than existing raid bdev raid_bdev1 (2) 00:17:01.457 [2024-12-06 21:39:21.882066] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:01.457 [2024-12-06 21:39:21.882092] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x51600000b780 name raid_bdev1, state configuring 00:17:01.457 [2024-12-06 21:39:21.882164] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:17:01.457 pt3 00:17:01.457 21:39:21 -- bdev/bdev_raid.sh@494 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:17:01.457 21:39:21 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:17:01.457 21:39:21 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:17:01.457 21:39:21 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:17:01.457 21:39:21 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:17:01.457 21:39:21 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:17:01.457 21:39:21 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:01.457 21:39:21 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:01.457 21:39:21 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:01.457 21:39:21 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:01.457 21:39:21 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:01.457 21:39:21 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:01.763 21:39:22 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:01.763 "name": "raid_bdev1", 00:17:01.763 "uuid": "b7195ff4-84db-4113-b6ca-85fb9a6f90bb", 00:17:01.763 "strip_size_kb": 0, 00:17:01.763 "state": "configuring", 00:17:01.763 "raid_level": "raid1", 00:17:01.763 "superblock": true, 00:17:01.763 "num_base_bdevs": 3, 00:17:01.763 "num_base_bdevs_discovered": 1, 00:17:01.763 "num_base_bdevs_operational": 2, 00:17:01.763 "base_bdevs_list": [ 
00:17:01.763 { 00:17:01.763 "name": null, 00:17:01.763 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:01.763 "is_configured": false, 00:17:01.763 "data_offset": 2048, 00:17:01.763 "data_size": 63488 00:17:01.763 }, 00:17:01.763 { 00:17:01.763 "name": null, 00:17:01.763 "uuid": "47526e6c-5fe6-5669-bff5-667142377bf2", 00:17:01.763 "is_configured": false, 00:17:01.763 "data_offset": 2048, 00:17:01.763 "data_size": 63488 00:17:01.763 }, 00:17:01.763 { 00:17:01.763 "name": "pt3", 00:17:01.763 "uuid": "a3129928-7f79-515b-9add-cb4aac5e6b3e", 00:17:01.763 "is_configured": true, 00:17:01.763 "data_offset": 2048, 00:17:01.763 "data_size": 63488 00:17:01.763 } 00:17:01.763 ] 00:17:01.763 }' 00:17:01.763 21:39:22 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:01.763 21:39:22 -- common/autotest_common.sh@10 -- # set +x 00:17:02.021 21:39:22 -- bdev/bdev_raid.sh@497 -- # (( i = 1 )) 00:17:02.021 21:39:22 -- bdev/bdev_raid.sh@497 -- # (( i < num_base_bdevs - 1 )) 00:17:02.021 21:39:22 -- bdev/bdev_raid.sh@498 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:02.280 [2024-12-06 21:39:22.665312] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:02.280 [2024-12-06 21:39:22.665639] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:02.280 [2024-12-06 21:39:22.665680] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x51600000c080 00:17:02.280 [2024-12-06 21:39:22.665700] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:02.280 [2024-12-06 21:39:22.666224] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:02.280 [2024-12-06 21:39:22.666258] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:02.280 [2024-12-06 21:39:22.666397] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:17:02.280 [2024-12-06 21:39:22.666428] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:02.280 [2024-12-06 21:39:22.666567] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x51600000bd80 00:17:02.280 [2024-12-06 21:39:22.666587] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:17:02.280 [2024-12-06 21:39:22.666713] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000005930 00:17:02.280 [2024-12-06 21:39:22.667074] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x51600000bd80 00:17:02.280 [2024-12-06 21:39:22.667089] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x51600000bd80 00:17:02.280 [2024-12-06 21:39:22.667273] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:02.280 pt2 00:17:02.280 21:39:22 -- bdev/bdev_raid.sh@497 -- # (( i++ )) 00:17:02.280 21:39:22 -- bdev/bdev_raid.sh@497 -- # (( i < num_base_bdevs - 1 )) 00:17:02.280 21:39:22 -- bdev/bdev_raid.sh@502 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:02.280 21:39:22 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:17:02.280 21:39:22 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:17:02.280 21:39:22 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:17:02.280 21:39:22 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:17:02.280 21:39:22 -- bdev/bdev_raid.sh@121 -- # 
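#
# (The seq_number message above is the stale-metadata tie-breaker: pt3
# carried superblock generation 4 while the raid_bdev1 assembled earlier
# was at generation 2, so the stale array was dropped and re-assembled
# around pt3's newer superblock. With pt2 added as well, two surviving
# members are enough for raid1 to come back "online", which the verify in
# progress here confirms. Sketch:)
#
#   rpc bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1") | .state, .num_base_bdevs_discovered'   # online / 2
#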
local num_base_bdevs_operational=2 00:17:02.280 21:39:22 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:02.280 21:39:22 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:02.280 21:39:22 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:02.280 21:39:22 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:02.280 21:39:22 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:02.280 21:39:22 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:02.539 21:39:22 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:02.539 "name": "raid_bdev1", 00:17:02.539 "uuid": "b7195ff4-84db-4113-b6ca-85fb9a6f90bb", 00:17:02.539 "strip_size_kb": 0, 00:17:02.539 "state": "online", 00:17:02.539 "raid_level": "raid1", 00:17:02.539 "superblock": true, 00:17:02.539 "num_base_bdevs": 3, 00:17:02.539 "num_base_bdevs_discovered": 2, 00:17:02.539 "num_base_bdevs_operational": 2, 00:17:02.539 "base_bdevs_list": [ 00:17:02.539 { 00:17:02.539 "name": null, 00:17:02.539 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:02.539 "is_configured": false, 00:17:02.539 "data_offset": 2048, 00:17:02.539 "data_size": 63488 00:17:02.539 }, 00:17:02.539 { 00:17:02.539 "name": "pt2", 00:17:02.539 "uuid": "47526e6c-5fe6-5669-bff5-667142377bf2", 00:17:02.539 "is_configured": true, 00:17:02.539 "data_offset": 2048, 00:17:02.539 "data_size": 63488 00:17:02.539 }, 00:17:02.539 { 00:17:02.539 "name": "pt3", 00:17:02.539 "uuid": "a3129928-7f79-515b-9add-cb4aac5e6b3e", 00:17:02.539 "is_configured": true, 00:17:02.539 "data_offset": 2048, 00:17:02.539 "data_size": 63488 00:17:02.539 } 00:17:02.539 ] 00:17:02.539 }' 00:17:02.539 21:39:22 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:02.539 21:39:22 -- common/autotest_common.sh@10 -- # set +x 00:17:02.797 21:39:23 -- bdev/bdev_raid.sh@506 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:17:02.797 21:39:23 -- bdev/bdev_raid.sh@506 -- # jq -r '.[] | .uuid' 00:17:03.055 [2024-12-06 21:39:23.433734] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:03.055 21:39:23 -- bdev/bdev_raid.sh@506 -- # '[' b7195ff4-84db-4113-b6ca-85fb9a6f90bb '!=' b7195ff4-84db-4113-b6ca-85fb9a6f90bb ']' 00:17:03.055 21:39:23 -- bdev/bdev_raid.sh@511 -- # killprocess 73854 00:17:03.055 21:39:23 -- common/autotest_common.sh@936 -- # '[' -z 73854 ']' 00:17:03.055 21:39:23 -- common/autotest_common.sh@940 -- # kill -0 73854 00:17:03.055 21:39:23 -- common/autotest_common.sh@941 -- # uname 00:17:03.055 21:39:23 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:03.055 21:39:23 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 73854 00:17:03.055 21:39:23 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:17:03.055 21:39:23 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:17:03.055 killing process with pid 73854 00:17:03.055 21:39:23 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 73854' 00:17:03.055 21:39:23 -- common/autotest_common.sh@955 -- # kill 73854 00:17:03.055 [2024-12-06 21:39:23.485722] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:03.055 [2024-12-06 21:39:23.485793] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:03.055 [2024-12-06 21:39:23.485891] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in 
destruct 00:17:03.055 [2024-12-06 21:39:23.485911] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x51600000bd80 name raid_bdev1, state offline 00:17:03.055 21:39:23 -- common/autotest_common.sh@960 -- # wait 73854 00:17:03.313 [2024-12-06 21:39:23.703347] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:04.248 21:39:24 -- bdev/bdev_raid.sh@513 -- # return 0 00:17:04.248 00:17:04.248 real 0m15.594s 00:17:04.248 user 0m26.909s 00:17:04.248 sys 0m2.296s 00:17:04.248 21:39:24 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:17:04.248 21:39:24 -- common/autotest_common.sh@10 -- # set +x 00:17:04.248 ************************************ 00:17:04.248 END TEST raid_superblock_test 00:17:04.248 ************************************ 00:17:04.507 21:39:24 -- bdev/bdev_raid.sh@725 -- # for n in {2..4} 00:17:04.507 21:39:24 -- bdev/bdev_raid.sh@726 -- # for level in raid0 concat raid1 00:17:04.507 21:39:24 -- bdev/bdev_raid.sh@727 -- # run_test raid_state_function_test raid_state_function_test raid0 4 false 00:17:04.507 21:39:24 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:17:04.507 21:39:24 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:17:04.507 21:39:24 -- common/autotest_common.sh@10 -- # set +x 00:17:04.507 ************************************ 00:17:04.507 START TEST raid_state_function_test 00:17:04.507 ************************************ 00:17:04.507 21:39:24 -- common/autotest_common.sh@1114 -- # raid_state_function_test raid0 4 false 00:17:04.507 21:39:24 -- bdev/bdev_raid.sh@202 -- # local raid_level=raid0 00:17:04.507 21:39:24 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=4 00:17:04.507 21:39:24 -- bdev/bdev_raid.sh@204 -- # local superblock=false 00:17:04.507 21:39:24 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:17:04.507 21:39:24 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:17:04.507 21:39:24 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:17:04.507 21:39:24 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev1 00:17:04.507 21:39:24 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:17:04.507 21:39:24 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:17:04.507 21:39:24 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev2 00:17:04.507 21:39:24 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:17:04.507 21:39:24 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:17:04.507 21:39:24 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev3 00:17:04.507 21:39:24 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:17:04.507 21:39:24 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:17:04.508 21:39:24 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev4 00:17:04.508 21:39:24 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:17:04.508 21:39:24 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:17:04.508 21:39:24 -- bdev/bdev_raid.sh@206 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:17:04.508 21:39:24 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:17:04.508 21:39:24 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:17:04.508 21:39:24 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:17:04.508 21:39:24 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:17:04.508 21:39:24 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:17:04.508 21:39:24 -- bdev/bdev_raid.sh@212 -- # '[' raid0 '!=' raid1 ']' 00:17:04.508 21:39:24 -- bdev/bdev_raid.sh@213 -- # strip_size=64 00:17:04.508 21:39:24 -- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64' 00:17:04.508 21:39:24 -- bdev/bdev_raid.sh@219 -- # 
'[' false = true ']' 00:17:04.508 21:39:24 -- bdev/bdev_raid.sh@222 -- # superblock_create_arg= 00:17:04.508 21:39:24 -- bdev/bdev_raid.sh@226 -- # raid_pid=74397 00:17:04.508 Process raid pid: 74397 00:17:04.508 21:39:24 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 74397' 00:17:04.508 21:39:24 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:17:04.508 21:39:24 -- bdev/bdev_raid.sh@228 -- # waitforlisten 74397 /var/tmp/spdk-raid.sock 00:17:04.508 21:39:24 -- common/autotest_common.sh@829 -- # '[' -z 74397 ']' 00:17:04.508 21:39:24 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:17:04.508 21:39:24 -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:04.508 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:17:04.508 21:39:24 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:17:04.508 21:39:24 -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:04.508 21:39:24 -- common/autotest_common.sh@10 -- # set +x 00:17:04.508 [2024-12-06 21:39:24.843795] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:17:04.508 [2024-12-06 21:39:24.843961] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:04.767 [2024-12-06 21:39:25.014660] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:04.767 [2024-12-06 21:39:25.186032] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:05.025 [2024-12-06 21:39:25.355803] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:05.592 21:39:25 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:05.592 21:39:25 -- common/autotest_common.sh@862 -- # return 0 00:17:05.592 21:39:25 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:17:05.592 [2024-12-06 21:39:25.964868] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:05.592 [2024-12-06 21:39:25.964948] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:05.592 [2024-12-06 21:39:25.964979] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:05.592 [2024-12-06 21:39:25.964993] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:05.592 [2024-12-06 21:39:25.965002] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:17:05.592 [2024-12-06 21:39:25.965014] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:17:05.592 [2024-12-06 21:39:25.965023] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:17:05.592 [2024-12-06 21:39:25.965035] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:17:05.592 21:39:25 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:17:05.592 21:39:25 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:05.592 21:39:25 -- 
bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:17:05.592 21:39:25 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:17:05.592 21:39:25 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:17:05.592 21:39:25 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:17:05.592 21:39:25 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:05.592 21:39:25 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:05.592 21:39:25 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:05.592 21:39:25 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:05.593 21:39:25 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:05.593 21:39:25 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:05.851 21:39:26 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:05.851 "name": "Existed_Raid", 00:17:05.851 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:05.851 "strip_size_kb": 64, 00:17:05.851 "state": "configuring", 00:17:05.851 "raid_level": "raid0", 00:17:05.851 "superblock": false, 00:17:05.851 "num_base_bdevs": 4, 00:17:05.851 "num_base_bdevs_discovered": 0, 00:17:05.851 "num_base_bdevs_operational": 4, 00:17:05.851 "base_bdevs_list": [ 00:17:05.851 { 00:17:05.851 "name": "BaseBdev1", 00:17:05.851 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:05.851 "is_configured": false, 00:17:05.851 "data_offset": 0, 00:17:05.851 "data_size": 0 00:17:05.851 }, 00:17:05.851 { 00:17:05.851 "name": "BaseBdev2", 00:17:05.851 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:05.851 "is_configured": false, 00:17:05.851 "data_offset": 0, 00:17:05.851 "data_size": 0 00:17:05.851 }, 00:17:05.851 { 00:17:05.851 "name": "BaseBdev3", 00:17:05.851 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:05.851 "is_configured": false, 00:17:05.851 "data_offset": 0, 00:17:05.851 "data_size": 0 00:17:05.851 }, 00:17:05.851 { 00:17:05.851 "name": "BaseBdev4", 00:17:05.851 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:05.851 "is_configured": false, 00:17:05.851 "data_offset": 0, 00:17:05.851 "data_size": 0 00:17:05.851 } 00:17:05.851 ] 00:17:05.851 }' 00:17:05.851 21:39:26 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:05.852 21:39:26 -- common/autotest_common.sh@10 -- # set +x 00:17:06.110 21:39:26 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:17:06.369 [2024-12-06 21:39:26.749021] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:06.369 [2024-12-06 21:39:26.749085] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000006380 name Existed_Raid, state configuring 00:17:06.369 21:39:26 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:17:06.627 [2024-12-06 21:39:27.001138] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:06.627 [2024-12-06 21:39:27.001227] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:06.627 [2024-12-06 21:39:27.001240] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:06.627 [2024-12-06 21:39:27.001253] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:06.627 [2024-12-06 
21:39:27.001261] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:17:06.627 [2024-12-06 21:39:27.001273] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:17:06.627 [2024-12-06 21:39:27.001281] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:17:06.627 [2024-12-06 21:39:27.001293] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:17:06.627 21:39:27 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:17:06.885 [2024-12-06 21:39:27.231219] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:06.885 BaseBdev1 00:17:06.885 21:39:27 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:17:06.885 21:39:27 -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:17:06.885 21:39:27 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:17:06.885 21:39:27 -- common/autotest_common.sh@899 -- # local i 00:17:06.885 21:39:27 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:17:06.885 21:39:27 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:17:06.885 21:39:27 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:07.144 21:39:27 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:17:07.144 [ 00:17:07.144 { 00:17:07.144 "name": "BaseBdev1", 00:17:07.144 "aliases": [ 00:17:07.144 "3fb0ab2a-7aba-438b-adb1-bbf686515677" 00:17:07.144 ], 00:17:07.144 "product_name": "Malloc disk", 00:17:07.144 "block_size": 512, 00:17:07.144 "num_blocks": 65536, 00:17:07.144 "uuid": "3fb0ab2a-7aba-438b-adb1-bbf686515677", 00:17:07.144 "assigned_rate_limits": { 00:17:07.144 "rw_ios_per_sec": 0, 00:17:07.144 "rw_mbytes_per_sec": 0, 00:17:07.144 "r_mbytes_per_sec": 0, 00:17:07.144 "w_mbytes_per_sec": 0 00:17:07.144 }, 00:17:07.144 "claimed": true, 00:17:07.144 "claim_type": "exclusive_write", 00:17:07.144 "zoned": false, 00:17:07.144 "supported_io_types": { 00:17:07.144 "read": true, 00:17:07.144 "write": true, 00:17:07.144 "unmap": true, 00:17:07.144 "write_zeroes": true, 00:17:07.144 "flush": true, 00:17:07.144 "reset": true, 00:17:07.144 "compare": false, 00:17:07.144 "compare_and_write": false, 00:17:07.144 "abort": true, 00:17:07.144 "nvme_admin": false, 00:17:07.144 "nvme_io": false 00:17:07.144 }, 00:17:07.144 "memory_domains": [ 00:17:07.144 { 00:17:07.144 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:07.144 "dma_device_type": 2 00:17:07.144 } 00:17:07.144 ], 00:17:07.144 "driver_specific": {} 00:17:07.144 } 00:17:07.144 ] 00:17:07.144 21:39:27 -- common/autotest_common.sh@905 -- # return 0 00:17:07.144 21:39:27 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:17:07.144 21:39:27 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:07.144 21:39:27 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:17:07.144 21:39:27 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:17:07.144 21:39:27 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:17:07.144 21:39:27 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:17:07.144 21:39:27 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:07.144 21:39:27 -- bdev/bdev_raid.sh@123 -- # 
local num_base_bdevs 00:17:07.144 21:39:27 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:07.144 21:39:27 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:07.144 21:39:27 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:07.403 21:39:27 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:07.403 21:39:27 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:07.403 "name": "Existed_Raid", 00:17:07.403 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:07.403 "strip_size_kb": 64, 00:17:07.403 "state": "configuring", 00:17:07.403 "raid_level": "raid0", 00:17:07.403 "superblock": false, 00:17:07.403 "num_base_bdevs": 4, 00:17:07.403 "num_base_bdevs_discovered": 1, 00:17:07.403 "num_base_bdevs_operational": 4, 00:17:07.403 "base_bdevs_list": [ 00:17:07.403 { 00:17:07.403 "name": "BaseBdev1", 00:17:07.403 "uuid": "3fb0ab2a-7aba-438b-adb1-bbf686515677", 00:17:07.403 "is_configured": true, 00:17:07.403 "data_offset": 0, 00:17:07.403 "data_size": 65536 00:17:07.403 }, 00:17:07.403 { 00:17:07.403 "name": "BaseBdev2", 00:17:07.403 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:07.403 "is_configured": false, 00:17:07.403 "data_offset": 0, 00:17:07.403 "data_size": 0 00:17:07.403 }, 00:17:07.403 { 00:17:07.403 "name": "BaseBdev3", 00:17:07.403 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:07.403 "is_configured": false, 00:17:07.403 "data_offset": 0, 00:17:07.403 "data_size": 0 00:17:07.403 }, 00:17:07.403 { 00:17:07.403 "name": "BaseBdev4", 00:17:07.403 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:07.404 "is_configured": false, 00:17:07.404 "data_offset": 0, 00:17:07.404 "data_size": 0 00:17:07.404 } 00:17:07.404 ] 00:17:07.404 }' 00:17:07.404 21:39:27 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:07.404 21:39:27 -- common/autotest_common.sh@10 -- # set +x 00:17:07.662 21:39:28 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:17:07.921 [2024-12-06 21:39:28.347603] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:07.921 [2024-12-06 21:39:28.347663] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000006680 name Existed_Raid, state configuring 00:17:07.921 21:39:28 -- bdev/bdev_raid.sh@244 -- # '[' false = true ']' 00:17:07.921 21:39:28 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:17:08.180 [2024-12-06 21:39:28.591763] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:08.180 [2024-12-06 21:39:28.593664] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:08.180 [2024-12-06 21:39:28.593730] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:08.180 [2024-12-06 21:39:28.593759] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:17:08.180 [2024-12-06 21:39:28.593773] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:17:08.180 [2024-12-06 21:39:28.593782] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:17:08.180 [2024-12-06 21:39:28.593797] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't 
exist now 00:17:08.180 21:39:28 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:17:08.180 21:39:28 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:17:08.180 21:39:28 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:17:08.180 21:39:28 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:08.180 21:39:28 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:17:08.180 21:39:28 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:17:08.180 21:39:28 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:17:08.180 21:39:28 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:17:08.180 21:39:28 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:08.180 21:39:28 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:08.180 21:39:28 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:08.180 21:39:28 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:08.180 21:39:28 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:08.180 21:39:28 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:08.439 21:39:28 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:08.439 "name": "Existed_Raid", 00:17:08.439 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:08.439 "strip_size_kb": 64, 00:17:08.439 "state": "configuring", 00:17:08.439 "raid_level": "raid0", 00:17:08.439 "superblock": false, 00:17:08.439 "num_base_bdevs": 4, 00:17:08.439 "num_base_bdevs_discovered": 1, 00:17:08.439 "num_base_bdevs_operational": 4, 00:17:08.439 "base_bdevs_list": [ 00:17:08.439 { 00:17:08.439 "name": "BaseBdev1", 00:17:08.439 "uuid": "3fb0ab2a-7aba-438b-adb1-bbf686515677", 00:17:08.439 "is_configured": true, 00:17:08.439 "data_offset": 0, 00:17:08.439 "data_size": 65536 00:17:08.439 }, 00:17:08.439 { 00:17:08.439 "name": "BaseBdev2", 00:17:08.439 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:08.439 "is_configured": false, 00:17:08.439 "data_offset": 0, 00:17:08.439 "data_size": 0 00:17:08.439 }, 00:17:08.439 { 00:17:08.439 "name": "BaseBdev3", 00:17:08.439 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:08.439 "is_configured": false, 00:17:08.439 "data_offset": 0, 00:17:08.439 "data_size": 0 00:17:08.439 }, 00:17:08.439 { 00:17:08.439 "name": "BaseBdev4", 00:17:08.439 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:08.439 "is_configured": false, 00:17:08.439 "data_offset": 0, 00:17:08.439 "data_size": 0 00:17:08.439 } 00:17:08.439 ] 00:17:08.439 }' 00:17:08.439 21:39:28 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:08.439 21:39:28 -- common/autotest_common.sh@10 -- # set +x 00:17:08.698 21:39:29 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:17:08.958 [2024-12-06 21:39:29.377610] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:08.958 BaseBdev2 00:17:08.958 21:39:29 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:17:08.958 21:39:29 -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:17:08.958 21:39:29 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:17:08.958 21:39:29 -- common/autotest_common.sh@899 -- # local i 00:17:08.958 21:39:29 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:17:08.958 21:39:29 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:17:08.958 21:39:29 -- common/autotest_common.sh@902 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:09.229 21:39:29 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:17:09.510 [ 00:17:09.510 { 00:17:09.510 "name": "BaseBdev2", 00:17:09.510 "aliases": [ 00:17:09.510 "be5f05ec-5c79-4ae1-8459-988afdac996d" 00:17:09.510 ], 00:17:09.510 "product_name": "Malloc disk", 00:17:09.510 "block_size": 512, 00:17:09.510 "num_blocks": 65536, 00:17:09.510 "uuid": "be5f05ec-5c79-4ae1-8459-988afdac996d", 00:17:09.510 "assigned_rate_limits": { 00:17:09.510 "rw_ios_per_sec": 0, 00:17:09.510 "rw_mbytes_per_sec": 0, 00:17:09.510 "r_mbytes_per_sec": 0, 00:17:09.510 "w_mbytes_per_sec": 0 00:17:09.510 }, 00:17:09.510 "claimed": true, 00:17:09.510 "claim_type": "exclusive_write", 00:17:09.510 "zoned": false, 00:17:09.510 "supported_io_types": { 00:17:09.510 "read": true, 00:17:09.510 "write": true, 00:17:09.510 "unmap": true, 00:17:09.510 "write_zeroes": true, 00:17:09.510 "flush": true, 00:17:09.510 "reset": true, 00:17:09.510 "compare": false, 00:17:09.510 "compare_and_write": false, 00:17:09.510 "abort": true, 00:17:09.510 "nvme_admin": false, 00:17:09.510 "nvme_io": false 00:17:09.510 }, 00:17:09.510 "memory_domains": [ 00:17:09.510 { 00:17:09.510 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:09.510 "dma_device_type": 2 00:17:09.510 } 00:17:09.510 ], 00:17:09.510 "driver_specific": {} 00:17:09.510 } 00:17:09.510 ] 00:17:09.510 21:39:29 -- common/autotest_common.sh@905 -- # return 0 00:17:09.510 21:39:29 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:17:09.510 21:39:29 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:17:09.510 21:39:29 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:17:09.510 21:39:29 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:09.510 21:39:29 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:17:09.510 21:39:29 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:17:09.510 21:39:29 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:17:09.510 21:39:29 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:17:09.510 21:39:29 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:09.510 21:39:29 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:09.510 21:39:29 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:09.510 21:39:29 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:09.510 21:39:29 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:09.510 21:39:29 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:09.778 21:39:30 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:09.778 "name": "Existed_Raid", 00:17:09.778 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:09.778 "strip_size_kb": 64, 00:17:09.778 "state": "configuring", 00:17:09.779 "raid_level": "raid0", 00:17:09.779 "superblock": false, 00:17:09.779 "num_base_bdevs": 4, 00:17:09.779 "num_base_bdevs_discovered": 2, 00:17:09.779 "num_base_bdevs_operational": 4, 00:17:09.779 "base_bdevs_list": [ 00:17:09.779 { 00:17:09.779 "name": "BaseBdev1", 00:17:09.779 "uuid": "3fb0ab2a-7aba-438b-adb1-bbf686515677", 00:17:09.779 "is_configured": true, 00:17:09.779 "data_offset": 0, 00:17:09.779 "data_size": 65536 00:17:09.779 }, 00:17:09.779 { 00:17:09.779 "name": "BaseBdev2", 00:17:09.779 "uuid": 
"be5f05ec-5c79-4ae1-8459-988afdac996d", 00:17:09.779 "is_configured": true, 00:17:09.779 "data_offset": 0, 00:17:09.779 "data_size": 65536 00:17:09.779 }, 00:17:09.779 { 00:17:09.779 "name": "BaseBdev3", 00:17:09.779 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:09.779 "is_configured": false, 00:17:09.779 "data_offset": 0, 00:17:09.779 "data_size": 0 00:17:09.779 }, 00:17:09.779 { 00:17:09.779 "name": "BaseBdev4", 00:17:09.779 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:09.779 "is_configured": false, 00:17:09.779 "data_offset": 0, 00:17:09.779 "data_size": 0 00:17:09.779 } 00:17:09.779 ] 00:17:09.779 }' 00:17:09.779 21:39:30 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:09.779 21:39:30 -- common/autotest_common.sh@10 -- # set +x 00:17:10.037 21:39:30 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:17:10.296 [2024-12-06 21:39:30.583906] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:10.296 BaseBdev3 00:17:10.296 21:39:30 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3 00:17:10.296 21:39:30 -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:17:10.296 21:39:30 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:17:10.296 21:39:30 -- common/autotest_common.sh@899 -- # local i 00:17:10.296 21:39:30 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:17:10.296 21:39:30 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:17:10.296 21:39:30 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:10.556 21:39:30 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:17:10.815 [ 00:17:10.815 { 00:17:10.815 "name": "BaseBdev3", 00:17:10.815 "aliases": [ 00:17:10.815 "43694d45-7f8f-4eb3-80b9-4f4e55bc2a19" 00:17:10.815 ], 00:17:10.815 "product_name": "Malloc disk", 00:17:10.815 "block_size": 512, 00:17:10.815 "num_blocks": 65536, 00:17:10.815 "uuid": "43694d45-7f8f-4eb3-80b9-4f4e55bc2a19", 00:17:10.815 "assigned_rate_limits": { 00:17:10.815 "rw_ios_per_sec": 0, 00:17:10.815 "rw_mbytes_per_sec": 0, 00:17:10.815 "r_mbytes_per_sec": 0, 00:17:10.815 "w_mbytes_per_sec": 0 00:17:10.815 }, 00:17:10.815 "claimed": true, 00:17:10.815 "claim_type": "exclusive_write", 00:17:10.815 "zoned": false, 00:17:10.815 "supported_io_types": { 00:17:10.815 "read": true, 00:17:10.815 "write": true, 00:17:10.815 "unmap": true, 00:17:10.815 "write_zeroes": true, 00:17:10.815 "flush": true, 00:17:10.815 "reset": true, 00:17:10.815 "compare": false, 00:17:10.815 "compare_and_write": false, 00:17:10.815 "abort": true, 00:17:10.815 "nvme_admin": false, 00:17:10.815 "nvme_io": false 00:17:10.815 }, 00:17:10.815 "memory_domains": [ 00:17:10.815 { 00:17:10.815 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:10.815 "dma_device_type": 2 00:17:10.815 } 00:17:10.815 ], 00:17:10.815 "driver_specific": {} 00:17:10.815 } 00:17:10.815 ] 00:17:10.815 21:39:31 -- common/autotest_common.sh@905 -- # return 0 00:17:10.815 21:39:31 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:17:10.815 21:39:31 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:17:10.815 21:39:31 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:17:10.815 21:39:31 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:10.815 21:39:31 -- 
bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:17:10.815 21:39:31 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:17:10.815 21:39:31 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:17:10.815 21:39:31 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:17:10.815 21:39:31 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:10.815 21:39:31 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:10.815 21:39:31 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:10.815 21:39:31 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:10.815 21:39:31 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:10.815 21:39:31 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:11.075 21:39:31 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:11.075 "name": "Existed_Raid", 00:17:11.075 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:11.075 "strip_size_kb": 64, 00:17:11.075 "state": "configuring", 00:17:11.075 "raid_level": "raid0", 00:17:11.075 "superblock": false, 00:17:11.075 "num_base_bdevs": 4, 00:17:11.075 "num_base_bdevs_discovered": 3, 00:17:11.075 "num_base_bdevs_operational": 4, 00:17:11.075 "base_bdevs_list": [ 00:17:11.075 { 00:17:11.075 "name": "BaseBdev1", 00:17:11.075 "uuid": "3fb0ab2a-7aba-438b-adb1-bbf686515677", 00:17:11.075 "is_configured": true, 00:17:11.075 "data_offset": 0, 00:17:11.075 "data_size": 65536 00:17:11.075 }, 00:17:11.075 { 00:17:11.075 "name": "BaseBdev2", 00:17:11.075 "uuid": "be5f05ec-5c79-4ae1-8459-988afdac996d", 00:17:11.075 "is_configured": true, 00:17:11.075 "data_offset": 0, 00:17:11.075 "data_size": 65536 00:17:11.075 }, 00:17:11.075 { 00:17:11.075 "name": "BaseBdev3", 00:17:11.075 "uuid": "43694d45-7f8f-4eb3-80b9-4f4e55bc2a19", 00:17:11.075 "is_configured": true, 00:17:11.075 "data_offset": 0, 00:17:11.075 "data_size": 65536 00:17:11.075 }, 00:17:11.075 { 00:17:11.075 "name": "BaseBdev4", 00:17:11.075 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:11.075 "is_configured": false, 00:17:11.075 "data_offset": 0, 00:17:11.075 "data_size": 0 00:17:11.075 } 00:17:11.075 ] 00:17:11.075 }' 00:17:11.075 21:39:31 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:11.075 21:39:31 -- common/autotest_common.sh@10 -- # set +x 00:17:11.334 21:39:31 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:17:11.594 [2024-12-06 21:39:31.842909] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:17:11.594 [2024-12-06 21:39:31.842978] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x516000006f80 00:17:11.594 [2024-12-06 21:39:31.842999] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:17:11.594 [2024-12-06 21:39:31.843125] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000005790 00:17:11.594 [2024-12-06 21:39:31.843523] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x516000006f80 00:17:11.594 [2024-12-06 21:39:31.843561] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x516000006f80 00:17:11.594 [2024-12-06 21:39:31.843869] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:11.594 BaseBdev4 00:17:11.594 21:39:31 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev4 00:17:11.594 21:39:31 -- 
common/autotest_common.sh@897 -- # local bdev_name=BaseBdev4 00:17:11.594 21:39:31 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:17:11.594 21:39:31 -- common/autotest_common.sh@899 -- # local i 00:17:11.594 21:39:31 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:17:11.594 21:39:31 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:17:11.594 21:39:31 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:11.594 21:39:32 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:17:11.853 [ 00:17:11.853 { 00:17:11.853 "name": "BaseBdev4", 00:17:11.853 "aliases": [ 00:17:11.853 "a7fe07ec-0f1d-451b-a930-922db8e84555" 00:17:11.853 ], 00:17:11.853 "product_name": "Malloc disk", 00:17:11.853 "block_size": 512, 00:17:11.853 "num_blocks": 65536, 00:17:11.853 "uuid": "a7fe07ec-0f1d-451b-a930-922db8e84555", 00:17:11.853 "assigned_rate_limits": { 00:17:11.853 "rw_ios_per_sec": 0, 00:17:11.853 "rw_mbytes_per_sec": 0, 00:17:11.853 "r_mbytes_per_sec": 0, 00:17:11.853 "w_mbytes_per_sec": 0 00:17:11.853 }, 00:17:11.853 "claimed": true, 00:17:11.853 "claim_type": "exclusive_write", 00:17:11.853 "zoned": false, 00:17:11.853 "supported_io_types": { 00:17:11.853 "read": true, 00:17:11.853 "write": true, 00:17:11.853 "unmap": true, 00:17:11.853 "write_zeroes": true, 00:17:11.853 "flush": true, 00:17:11.853 "reset": true, 00:17:11.853 "compare": false, 00:17:11.853 "compare_and_write": false, 00:17:11.853 "abort": true, 00:17:11.853 "nvme_admin": false, 00:17:11.853 "nvme_io": false 00:17:11.853 }, 00:17:11.853 "memory_domains": [ 00:17:11.853 { 00:17:11.853 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:11.853 "dma_device_type": 2 00:17:11.853 } 00:17:11.853 ], 00:17:11.853 "driver_specific": {} 00:17:11.853 } 00:17:11.853 ] 00:17:11.853 21:39:32 -- common/autotest_common.sh@905 -- # return 0 00:17:11.853 21:39:32 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:17:11.853 21:39:32 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:17:11.853 21:39:32 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:17:11.853 21:39:32 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:11.853 21:39:32 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:17:11.853 21:39:32 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:17:11.853 21:39:32 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:17:11.853 21:39:32 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:17:11.853 21:39:32 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:11.853 21:39:32 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:11.853 21:39:32 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:11.853 21:39:32 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:11.853 21:39:32 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:11.853 21:39:32 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:12.112 21:39:32 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:12.112 "name": "Existed_Raid", 00:17:12.112 "uuid": "8df7902a-9dbf-4afb-8a34-8eb7528a01d6", 00:17:12.112 "strip_size_kb": 64, 00:17:12.112 "state": "online", 00:17:12.112 "raid_level": "raid0", 00:17:12.112 "superblock": false, 00:17:12.112 "num_base_bdevs": 4, 00:17:12.112 
"num_base_bdevs_discovered": 4, 00:17:12.112 "num_base_bdevs_operational": 4, 00:17:12.112 "base_bdevs_list": [ 00:17:12.112 { 00:17:12.112 "name": "BaseBdev1", 00:17:12.112 "uuid": "3fb0ab2a-7aba-438b-adb1-bbf686515677", 00:17:12.112 "is_configured": true, 00:17:12.112 "data_offset": 0, 00:17:12.112 "data_size": 65536 00:17:12.112 }, 00:17:12.112 { 00:17:12.112 "name": "BaseBdev2", 00:17:12.112 "uuid": "be5f05ec-5c79-4ae1-8459-988afdac996d", 00:17:12.112 "is_configured": true, 00:17:12.112 "data_offset": 0, 00:17:12.112 "data_size": 65536 00:17:12.112 }, 00:17:12.112 { 00:17:12.112 "name": "BaseBdev3", 00:17:12.112 "uuid": "43694d45-7f8f-4eb3-80b9-4f4e55bc2a19", 00:17:12.112 "is_configured": true, 00:17:12.112 "data_offset": 0, 00:17:12.112 "data_size": 65536 00:17:12.112 }, 00:17:12.112 { 00:17:12.112 "name": "BaseBdev4", 00:17:12.112 "uuid": "a7fe07ec-0f1d-451b-a930-922db8e84555", 00:17:12.112 "is_configured": true, 00:17:12.112 "data_offset": 0, 00:17:12.112 "data_size": 65536 00:17:12.112 } 00:17:12.112 ] 00:17:12.112 }' 00:17:12.112 21:39:32 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:12.112 21:39:32 -- common/autotest_common.sh@10 -- # set +x 00:17:12.370 21:39:32 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:17:12.628 [2024-12-06 21:39:32.995319] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:12.628 [2024-12-06 21:39:32.995355] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:12.628 [2024-12-06 21:39:32.995426] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:12.628 21:39:33 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:17:12.628 21:39:33 -- bdev/bdev_raid.sh@264 -- # has_redundancy raid0 00:17:12.628 21:39:33 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:17:12.628 21:39:33 -- bdev/bdev_raid.sh@197 -- # return 1 00:17:12.628 21:39:33 -- bdev/bdev_raid.sh@265 -- # expected_state=offline 00:17:12.628 21:39:33 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 3 00:17:12.628 21:39:33 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:12.628 21:39:33 -- bdev/bdev_raid.sh@118 -- # local expected_state=offline 00:17:12.628 21:39:33 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:17:12.628 21:39:33 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:17:12.628 21:39:33 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:17:12.628 21:39:33 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:12.628 21:39:33 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:12.628 21:39:33 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:12.628 21:39:33 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:12.628 21:39:33 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:12.628 21:39:33 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:12.886 21:39:33 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:12.886 "name": "Existed_Raid", 00:17:12.886 "uuid": "8df7902a-9dbf-4afb-8a34-8eb7528a01d6", 00:17:12.886 "strip_size_kb": 64, 00:17:12.886 "state": "offline", 00:17:12.886 "raid_level": "raid0", 00:17:12.886 "superblock": false, 00:17:12.886 "num_base_bdevs": 4, 00:17:12.886 "num_base_bdevs_discovered": 3, 00:17:12.886 "num_base_bdevs_operational": 3, 00:17:12.886 "base_bdevs_list": [ 00:17:12.886 { 
00:17:12.886 "name": null, 00:17:12.886 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:12.886 "is_configured": false, 00:17:12.886 "data_offset": 0, 00:17:12.886 "data_size": 65536 00:17:12.886 }, 00:17:12.886 { 00:17:12.886 "name": "BaseBdev2", 00:17:12.886 "uuid": "be5f05ec-5c79-4ae1-8459-988afdac996d", 00:17:12.886 "is_configured": true, 00:17:12.886 "data_offset": 0, 00:17:12.886 "data_size": 65536 00:17:12.886 }, 00:17:12.886 { 00:17:12.886 "name": "BaseBdev3", 00:17:12.886 "uuid": "43694d45-7f8f-4eb3-80b9-4f4e55bc2a19", 00:17:12.886 "is_configured": true, 00:17:12.886 "data_offset": 0, 00:17:12.886 "data_size": 65536 00:17:12.886 }, 00:17:12.886 { 00:17:12.886 "name": "BaseBdev4", 00:17:12.886 "uuid": "a7fe07ec-0f1d-451b-a930-922db8e84555", 00:17:12.886 "is_configured": true, 00:17:12.886 "data_offset": 0, 00:17:12.886 "data_size": 65536 00:17:12.886 } 00:17:12.886 ] 00:17:12.886 }' 00:17:12.886 21:39:33 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:12.886 21:39:33 -- common/autotest_common.sh@10 -- # set +x 00:17:13.144 21:39:33 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:17:13.144 21:39:33 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:17:13.144 21:39:33 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:13.144 21:39:33 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:17:13.403 21:39:33 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:17:13.403 21:39:33 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:17:13.403 21:39:33 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:17:13.661 [2024-12-06 21:39:34.012951] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:17:13.661 21:39:34 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:17:13.661 21:39:34 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:17:13.661 21:39:34 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:13.661 21:39:34 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:17:13.920 21:39:34 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:17:13.920 21:39:34 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:17:13.920 21:39:34 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:17:14.179 [2024-12-06 21:39:34.548169] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:17:14.179 21:39:34 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:17:14.179 21:39:34 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:17:14.179 21:39:34 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:14.179 21:39:34 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:17:14.438 21:39:34 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:17:14.438 21:39:34 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:17:14.438 21:39:34 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:17:14.696 [2024-12-06 21:39:35.042436] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:17:14.697 [2024-12-06 21:39:35.042517] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000006f80 name Existed_Raid, state offline 
00:17:14.697 21:39:35 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:17:14.697 21:39:35 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:17:14.697 21:39:35 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:17:14.697 21:39:35 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:14.971 21:39:35 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:17:14.971 21:39:35 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:17:14.971 21:39:35 -- bdev/bdev_raid.sh@287 -- # killprocess 74397 00:17:14.971 21:39:35 -- common/autotest_common.sh@936 -- # '[' -z 74397 ']' 00:17:14.971 21:39:35 -- common/autotest_common.sh@940 -- # kill -0 74397 00:17:14.971 21:39:35 -- common/autotest_common.sh@941 -- # uname 00:17:14.971 21:39:35 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:14.971 21:39:35 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 74397 00:17:14.971 killing process with pid 74397 00:17:14.971 21:39:35 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:17:14.971 21:39:35 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:17:14.971 21:39:35 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 74397' 00:17:14.971 21:39:35 -- common/autotest_common.sh@955 -- # kill 74397 00:17:14.971 [2024-12-06 21:39:35.400033] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:14.971 21:39:35 -- common/autotest_common.sh@960 -- # wait 74397 00:17:14.971 [2024-12-06 21:39:35.400138] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:16.344 ************************************ 00:17:16.344 END TEST raid_state_function_test 00:17:16.344 ************************************ 00:17:16.344 21:39:36 -- bdev/bdev_raid.sh@289 -- # return 0 00:17:16.344 00:17:16.344 real 0m11.667s 00:17:16.344 user 0m19.535s 00:17:16.344 sys 0m1.738s 00:17:16.344 21:39:36 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:17:16.344 21:39:36 -- common/autotest_common.sh@10 -- # set +x 00:17:16.344 21:39:36 -- bdev/bdev_raid.sh@728 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 4 true 00:17:16.344 21:39:36 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:17:16.344 21:39:36 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:17:16.344 21:39:36 -- common/autotest_common.sh@10 -- # set +x 00:17:16.344 ************************************ 00:17:16.344 START TEST raid_state_function_test_sb 00:17:16.344 ************************************ 00:17:16.344 21:39:36 -- common/autotest_common.sh@1114 -- # raid_state_function_test raid0 4 true 00:17:16.344 21:39:36 -- bdev/bdev_raid.sh@202 -- # local raid_level=raid0 00:17:16.344 21:39:36 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=4 00:17:16.344 21:39:36 -- bdev/bdev_raid.sh@204 -- # local superblock=true 00:17:16.344 21:39:36 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:17:16.344 21:39:36 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:17:16.344 21:39:36 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:17:16.344 21:39:36 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev1 00:17:16.344 21:39:36 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:17:16.344 21:39:36 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:17:16.344 21:39:36 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev2 00:17:16.344 21:39:36 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:17:16.344 21:39:36 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:17:16.344 21:39:36 -- 
bdev/bdev_raid.sh@208 -- # echo BaseBdev3 00:17:16.344 21:39:36 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:17:16.344 21:39:36 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:17:16.344 21:39:36 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev4 00:17:16.344 21:39:36 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:17:16.344 21:39:36 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:17:16.344 21:39:36 -- bdev/bdev_raid.sh@206 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:17:16.344 21:39:36 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:17:16.344 21:39:36 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:17:16.344 21:39:36 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:17:16.344 21:39:36 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:17:16.344 21:39:36 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:17:16.344 21:39:36 -- bdev/bdev_raid.sh@212 -- # '[' raid0 '!=' raid1 ']' 00:17:16.344 21:39:36 -- bdev/bdev_raid.sh@213 -- # strip_size=64 00:17:16.344 21:39:36 -- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64' 00:17:16.344 21:39:36 -- bdev/bdev_raid.sh@219 -- # '[' true = true ']' 00:17:16.344 21:39:36 -- bdev/bdev_raid.sh@220 -- # superblock_create_arg=-s 00:17:16.344 21:39:36 -- bdev/bdev_raid.sh@226 -- # raid_pid=74785 00:17:16.344 Process raid pid: 74785 00:17:16.344 21:39:36 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 74785' 00:17:16.344 21:39:36 -- bdev/bdev_raid.sh@228 -- # waitforlisten 74785 /var/tmp/spdk-raid.sock 00:17:16.344 21:39:36 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:17:16.344 21:39:36 -- common/autotest_common.sh@829 -- # '[' -z 74785 ']' 00:17:16.344 21:39:36 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:17:16.344 21:39:36 -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:16.344 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:17:16.344 21:39:36 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:17:16.344 21:39:36 -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:16.344 21:39:36 -- common/autotest_common.sh@10 -- # set +x 00:17:16.344 [2024-12-06 21:39:36.569383] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
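The raid_state_function_test_sb run that starts here differs from the plain raid_state_function_test above only in superblock=true, which the suite turns into the -s flag on bdev_raid_create so each base bdev is stamped with an on-disk raid superblock. A sketch of that flag plumbing, assuming the four malloc base bdevs already exist on the target (in the actual run they are created step by step, so the first create call is expected to leave the array configuring):

# Illustrative only: how superblock=true becomes -s on the create call.
superblock=true
superblock_create_arg=
[[ "$superblock" == true ]] && superblock_create_arg=-s

# superblock_create_arg is left unquoted on purpose so an empty value expands to nothing.
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock \
    bdev_raid_create -z 64 $superblock_create_arg -r raid0 \
    -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid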
00:17:16.344 [2024-12-06 21:39:36.570152] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:16.344 [2024-12-06 21:39:36.742679] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:16.602 [2024-12-06 21:39:36.910770] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:16.602 [2024-12-06 21:39:37.077198] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:17.168 21:39:37 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:17.168 21:39:37 -- common/autotest_common.sh@862 -- # return 0 00:17:17.168 21:39:37 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:17:17.427 [2024-12-06 21:39:37.730269] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:17.427 [2024-12-06 21:39:37.730335] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:17.427 [2024-12-06 21:39:37.730349] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:17.427 [2024-12-06 21:39:37.730363] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:17.427 [2024-12-06 21:39:37.730372] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:17:17.427 [2024-12-06 21:39:37.730385] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:17:17.427 [2024-12-06 21:39:37.730393] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:17:17.427 [2024-12-06 21:39:37.730406] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:17:17.427 21:39:37 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:17:17.427 21:39:37 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:17.427 21:39:37 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:17:17.427 21:39:37 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:17:17.427 21:39:37 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:17:17.427 21:39:37 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:17:17.427 21:39:37 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:17.427 21:39:37 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:17.427 21:39:37 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:17.427 21:39:37 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:17.427 21:39:37 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:17.427 21:39:37 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:17.686 21:39:37 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:17.686 "name": "Existed_Raid", 00:17:17.686 "uuid": "9136cf11-7aba-49f5-8533-2fb2f3f8e8f5", 00:17:17.686 "strip_size_kb": 64, 00:17:17.686 "state": "configuring", 00:17:17.686 "raid_level": "raid0", 00:17:17.686 "superblock": true, 00:17:17.686 "num_base_bdevs": 4, 00:17:17.686 "num_base_bdevs_discovered": 0, 00:17:17.686 "num_base_bdevs_operational": 4, 00:17:17.686 "base_bdevs_list": [ 00:17:17.686 { 00:17:17.686 
"name": "BaseBdev1", 00:17:17.686 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:17.686 "is_configured": false, 00:17:17.686 "data_offset": 0, 00:17:17.686 "data_size": 0 00:17:17.686 }, 00:17:17.686 { 00:17:17.686 "name": "BaseBdev2", 00:17:17.686 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:17.686 "is_configured": false, 00:17:17.686 "data_offset": 0, 00:17:17.686 "data_size": 0 00:17:17.686 }, 00:17:17.686 { 00:17:17.686 "name": "BaseBdev3", 00:17:17.686 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:17.686 "is_configured": false, 00:17:17.686 "data_offset": 0, 00:17:17.686 "data_size": 0 00:17:17.686 }, 00:17:17.686 { 00:17:17.686 "name": "BaseBdev4", 00:17:17.686 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:17.686 "is_configured": false, 00:17:17.686 "data_offset": 0, 00:17:17.686 "data_size": 0 00:17:17.686 } 00:17:17.686 ] 00:17:17.686 }' 00:17:17.686 21:39:37 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:17.686 21:39:37 -- common/autotest_common.sh@10 -- # set +x 00:17:17.946 21:39:38 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:17:18.206 [2024-12-06 21:39:38.530316] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:18.206 [2024-12-06 21:39:38.530380] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000006380 name Existed_Raid, state configuring 00:17:18.206 21:39:38 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:17:18.465 [2024-12-06 21:39:38.774450] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:18.465 [2024-12-06 21:39:38.774529] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:18.465 [2024-12-06 21:39:38.774542] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:18.465 [2024-12-06 21:39:38.774557] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:18.465 [2024-12-06 21:39:38.774565] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:17:18.465 [2024-12-06 21:39:38.774579] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:17:18.465 [2024-12-06 21:39:38.774587] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:17:18.465 [2024-12-06 21:39:38.774600] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:17:18.465 21:39:38 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:17:18.724 [2024-12-06 21:39:39.006824] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:18.724 BaseBdev1 00:17:18.724 21:39:39 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:17:18.724 21:39:39 -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:17:18.724 21:39:39 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:17:18.724 21:39:39 -- common/autotest_common.sh@899 -- # local i 00:17:18.724 21:39:39 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:17:18.724 21:39:39 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:17:18.724 21:39:39 -- common/autotest_common.sh@902 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:18.724 21:39:39 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:17:18.983 [ 00:17:18.983 { 00:17:18.983 "name": "BaseBdev1", 00:17:18.983 "aliases": [ 00:17:18.983 "eedd1839-6831-46e1-984a-6314b08f4aac" 00:17:18.983 ], 00:17:18.983 "product_name": "Malloc disk", 00:17:18.983 "block_size": 512, 00:17:18.983 "num_blocks": 65536, 00:17:18.983 "uuid": "eedd1839-6831-46e1-984a-6314b08f4aac", 00:17:18.983 "assigned_rate_limits": { 00:17:18.983 "rw_ios_per_sec": 0, 00:17:18.983 "rw_mbytes_per_sec": 0, 00:17:18.983 "r_mbytes_per_sec": 0, 00:17:18.983 "w_mbytes_per_sec": 0 00:17:18.983 }, 00:17:18.984 "claimed": true, 00:17:18.984 "claim_type": "exclusive_write", 00:17:18.984 "zoned": false, 00:17:18.984 "supported_io_types": { 00:17:18.984 "read": true, 00:17:18.984 "write": true, 00:17:18.984 "unmap": true, 00:17:18.984 "write_zeroes": true, 00:17:18.984 "flush": true, 00:17:18.984 "reset": true, 00:17:18.984 "compare": false, 00:17:18.984 "compare_and_write": false, 00:17:18.984 "abort": true, 00:17:18.984 "nvme_admin": false, 00:17:18.984 "nvme_io": false 00:17:18.984 }, 00:17:18.984 "memory_domains": [ 00:17:18.984 { 00:17:18.984 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:18.984 "dma_device_type": 2 00:17:18.984 } 00:17:18.984 ], 00:17:18.984 "driver_specific": {} 00:17:18.984 } 00:17:18.984 ] 00:17:18.984 21:39:39 -- common/autotest_common.sh@905 -- # return 0 00:17:18.984 21:39:39 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:17:18.984 21:39:39 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:18.984 21:39:39 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:17:18.984 21:39:39 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:17:18.984 21:39:39 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:17:18.984 21:39:39 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:17:18.984 21:39:39 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:18.984 21:39:39 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:18.984 21:39:39 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:18.984 21:39:39 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:18.984 21:39:39 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:18.984 21:39:39 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:19.243 21:39:39 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:19.243 "name": "Existed_Raid", 00:17:19.243 "uuid": "04037238-ee55-4249-b7b9-61facd1d9f66", 00:17:19.243 "strip_size_kb": 64, 00:17:19.243 "state": "configuring", 00:17:19.243 "raid_level": "raid0", 00:17:19.243 "superblock": true, 00:17:19.243 "num_base_bdevs": 4, 00:17:19.243 "num_base_bdevs_discovered": 1, 00:17:19.243 "num_base_bdevs_operational": 4, 00:17:19.243 "base_bdevs_list": [ 00:17:19.243 { 00:17:19.243 "name": "BaseBdev1", 00:17:19.243 "uuid": "eedd1839-6831-46e1-984a-6314b08f4aac", 00:17:19.243 "is_configured": true, 00:17:19.243 "data_offset": 2048, 00:17:19.243 "data_size": 63488 00:17:19.243 }, 00:17:19.243 { 00:17:19.243 "name": "BaseBdev2", 00:17:19.243 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:19.243 "is_configured": false, 00:17:19.243 "data_offset": 0, 00:17:19.243 "data_size": 0 00:17:19.243 }, 
00:17:19.243 { 00:17:19.243 "name": "BaseBdev3", 00:17:19.243 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:19.243 "is_configured": false, 00:17:19.243 "data_offset": 0, 00:17:19.243 "data_size": 0 00:17:19.243 }, 00:17:19.243 { 00:17:19.243 "name": "BaseBdev4", 00:17:19.243 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:19.243 "is_configured": false, 00:17:19.243 "data_offset": 0, 00:17:19.243 "data_size": 0 00:17:19.243 } 00:17:19.243 ] 00:17:19.243 }' 00:17:19.243 21:39:39 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:19.243 21:39:39 -- common/autotest_common.sh@10 -- # set +x 00:17:19.503 21:39:39 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:17:19.762 [2024-12-06 21:39:40.091290] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:19.762 [2024-12-06 21:39:40.091351] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000006680 name Existed_Raid, state configuring 00:17:19.762 21:39:40 -- bdev/bdev_raid.sh@244 -- # '[' true = true ']' 00:17:19.762 21:39:40 -- bdev/bdev_raid.sh@246 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:17:20.021 21:39:40 -- bdev/bdev_raid.sh@247 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:17:20.285 BaseBdev1 00:17:20.285 21:39:40 -- bdev/bdev_raid.sh@248 -- # waitforbdev BaseBdev1 00:17:20.285 21:39:40 -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:17:20.285 21:39:40 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:17:20.285 21:39:40 -- common/autotest_common.sh@899 -- # local i 00:17:20.285 21:39:40 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:17:20.285 21:39:40 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:17:20.285 21:39:40 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:20.543 21:39:40 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:17:20.543 [ 00:17:20.543 { 00:17:20.543 "name": "BaseBdev1", 00:17:20.543 "aliases": [ 00:17:20.543 "d343951a-de78-4e92-ada8-4b5025fc3fe5" 00:17:20.543 ], 00:17:20.543 "product_name": "Malloc disk", 00:17:20.543 "block_size": 512, 00:17:20.543 "num_blocks": 65536, 00:17:20.544 "uuid": "d343951a-de78-4e92-ada8-4b5025fc3fe5", 00:17:20.544 "assigned_rate_limits": { 00:17:20.544 "rw_ios_per_sec": 0, 00:17:20.544 "rw_mbytes_per_sec": 0, 00:17:20.544 "r_mbytes_per_sec": 0, 00:17:20.544 "w_mbytes_per_sec": 0 00:17:20.544 }, 00:17:20.544 "claimed": false, 00:17:20.544 "zoned": false, 00:17:20.544 "supported_io_types": { 00:17:20.544 "read": true, 00:17:20.544 "write": true, 00:17:20.544 "unmap": true, 00:17:20.544 "write_zeroes": true, 00:17:20.544 "flush": true, 00:17:20.544 "reset": true, 00:17:20.544 "compare": false, 00:17:20.544 "compare_and_write": false, 00:17:20.544 "abort": true, 00:17:20.544 "nvme_admin": false, 00:17:20.544 "nvme_io": false 00:17:20.544 }, 00:17:20.544 "memory_domains": [ 00:17:20.544 { 00:17:20.544 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:20.544 "dma_device_type": 2 00:17:20.544 } 00:17:20.544 ], 00:17:20.544 "driver_specific": {} 00:17:20.544 } 00:17:20.544 ] 00:17:20.544 21:39:41 -- common/autotest_common.sh@905 -- # return 0 00:17:20.544 21:39:41 -- bdev/bdev_raid.sh@253 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:17:20.802 [2024-12-06 21:39:41.207352] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:20.802 [2024-12-06 21:39:41.209425] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:20.802 [2024-12-06 21:39:41.209691] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:20.802 [2024-12-06 21:39:41.209719] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:17:20.802 [2024-12-06 21:39:41.209737] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:17:20.802 [2024-12-06 21:39:41.209748] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:17:20.802 [2024-12-06 21:39:41.209765] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:17:20.802 21:39:41 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:17:20.802 21:39:41 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:17:20.802 21:39:41 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:17:20.802 21:39:41 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:20.802 21:39:41 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:17:20.803 21:39:41 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:17:20.803 21:39:41 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:17:20.803 21:39:41 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:17:20.803 21:39:41 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:20.803 21:39:41 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:20.803 21:39:41 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:20.803 21:39:41 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:20.803 21:39:41 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:20.803 21:39:41 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:21.061 21:39:41 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:21.061 "name": "Existed_Raid", 00:17:21.061 "uuid": "47451d27-5be4-4de4-b01e-bc8d9ff47be1", 00:17:21.061 "strip_size_kb": 64, 00:17:21.061 "state": "configuring", 00:17:21.061 "raid_level": "raid0", 00:17:21.061 "superblock": true, 00:17:21.061 "num_base_bdevs": 4, 00:17:21.061 "num_base_bdevs_discovered": 1, 00:17:21.061 "num_base_bdevs_operational": 4, 00:17:21.061 "base_bdevs_list": [ 00:17:21.061 { 00:17:21.061 "name": "BaseBdev1", 00:17:21.061 "uuid": "d343951a-de78-4e92-ada8-4b5025fc3fe5", 00:17:21.061 "is_configured": true, 00:17:21.061 "data_offset": 2048, 00:17:21.061 "data_size": 63488 00:17:21.061 }, 00:17:21.061 { 00:17:21.061 "name": "BaseBdev2", 00:17:21.061 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:21.061 "is_configured": false, 00:17:21.061 "data_offset": 0, 00:17:21.061 "data_size": 0 00:17:21.061 }, 00:17:21.061 { 00:17:21.061 "name": "BaseBdev3", 00:17:21.061 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:21.061 "is_configured": false, 00:17:21.061 "data_offset": 0, 00:17:21.061 "data_size": 0 00:17:21.061 }, 00:17:21.061 { 00:17:21.061 "name": "BaseBdev4", 00:17:21.061 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:21.061 "is_configured": 
false, 00:17:21.061 "data_offset": 0, 00:17:21.061 "data_size": 0 00:17:21.061 } 00:17:21.061 ] 00:17:21.061 }' 00:17:21.061 21:39:41 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:21.061 21:39:41 -- common/autotest_common.sh@10 -- # set +x 00:17:21.320 21:39:41 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:17:21.579 [2024-12-06 21:39:42.018350] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:21.579 BaseBdev2 00:17:21.579 21:39:42 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:17:21.579 21:39:42 -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:17:21.579 21:39:42 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:17:21.579 21:39:42 -- common/autotest_common.sh@899 -- # local i 00:17:21.579 21:39:42 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:17:21.579 21:39:42 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:17:21.579 21:39:42 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:21.837 21:39:42 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:17:22.096 [ 00:17:22.096 { 00:17:22.096 "name": "BaseBdev2", 00:17:22.096 "aliases": [ 00:17:22.096 "cb9760b0-62bc-4a9c-a745-8afc6a6e2940" 00:17:22.096 ], 00:17:22.096 "product_name": "Malloc disk", 00:17:22.096 "block_size": 512, 00:17:22.096 "num_blocks": 65536, 00:17:22.096 "uuid": "cb9760b0-62bc-4a9c-a745-8afc6a6e2940", 00:17:22.096 "assigned_rate_limits": { 00:17:22.096 "rw_ios_per_sec": 0, 00:17:22.096 "rw_mbytes_per_sec": 0, 00:17:22.096 "r_mbytes_per_sec": 0, 00:17:22.096 "w_mbytes_per_sec": 0 00:17:22.096 }, 00:17:22.096 "claimed": true, 00:17:22.096 "claim_type": "exclusive_write", 00:17:22.096 "zoned": false, 00:17:22.096 "supported_io_types": { 00:17:22.096 "read": true, 00:17:22.096 "write": true, 00:17:22.096 "unmap": true, 00:17:22.096 "write_zeroes": true, 00:17:22.096 "flush": true, 00:17:22.096 "reset": true, 00:17:22.096 "compare": false, 00:17:22.096 "compare_and_write": false, 00:17:22.096 "abort": true, 00:17:22.096 "nvme_admin": false, 00:17:22.096 "nvme_io": false 00:17:22.096 }, 00:17:22.096 "memory_domains": [ 00:17:22.096 { 00:17:22.096 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:22.096 "dma_device_type": 2 00:17:22.096 } 00:17:22.096 ], 00:17:22.096 "driver_specific": {} 00:17:22.096 } 00:17:22.096 ] 00:17:22.096 21:39:42 -- common/autotest_common.sh@905 -- # return 0 00:17:22.096 21:39:42 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:17:22.096 21:39:42 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:17:22.096 21:39:42 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:17:22.096 21:39:42 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:22.096 21:39:42 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:17:22.096 21:39:42 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:17:22.096 21:39:42 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:17:22.096 21:39:42 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:17:22.096 21:39:42 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:22.096 21:39:42 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:22.096 21:39:42 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:22.096 
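
The verify_raid_bdev_state helper traced here reduces to one RPC plus a jq filter: fetch every raid bdev with bdev_raid_get_bdevs all, select the entry by name, and compare its fields against the expected state, level, strip size, and member counts. A minimal standalone sketch of that check, assuming the same rpc.py path and RPC socket used throughout this run:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  sock=/var/tmp/spdk-raid.sock
  # Pull the JSON for the raid bdev under test, exactly as the @127 trace does.
  info=$("$rpc" -s "$sock" bdev_raid_get_bdevs all | jq '.[] | select(.name == "Existed_Raid")')
  # Assert the fields the helper cares about; "configuring" means base bdevs are still missing.
  state=$(jq -r '.state' <<< "$info")
  discovered=$(jq -r '.num_base_bdevs_discovered' <<< "$info")
  [[ "$state" == "configuring" ]] || { echo "unexpected state: $state" >&2; exit 1; }
  echo "Existed_Raid: $state, $discovered base bdev(s) discovered"
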
21:39:42 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:22.096 21:39:42 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:22.096 21:39:42 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:22.356 21:39:42 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:22.356 "name": "Existed_Raid", 00:17:22.356 "uuid": "47451d27-5be4-4de4-b01e-bc8d9ff47be1", 00:17:22.356 "strip_size_kb": 64, 00:17:22.356 "state": "configuring", 00:17:22.356 "raid_level": "raid0", 00:17:22.356 "superblock": true, 00:17:22.356 "num_base_bdevs": 4, 00:17:22.356 "num_base_bdevs_discovered": 2, 00:17:22.356 "num_base_bdevs_operational": 4, 00:17:22.356 "base_bdevs_list": [ 00:17:22.356 { 00:17:22.356 "name": "BaseBdev1", 00:17:22.356 "uuid": "d343951a-de78-4e92-ada8-4b5025fc3fe5", 00:17:22.356 "is_configured": true, 00:17:22.356 "data_offset": 2048, 00:17:22.356 "data_size": 63488 00:17:22.356 }, 00:17:22.356 { 00:17:22.356 "name": "BaseBdev2", 00:17:22.356 "uuid": "cb9760b0-62bc-4a9c-a745-8afc6a6e2940", 00:17:22.356 "is_configured": true, 00:17:22.356 "data_offset": 2048, 00:17:22.356 "data_size": 63488 00:17:22.356 }, 00:17:22.356 { 00:17:22.356 "name": "BaseBdev3", 00:17:22.356 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:22.356 "is_configured": false, 00:17:22.356 "data_offset": 0, 00:17:22.356 "data_size": 0 00:17:22.356 }, 00:17:22.356 { 00:17:22.356 "name": "BaseBdev4", 00:17:22.356 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:22.356 "is_configured": false, 00:17:22.356 "data_offset": 0, 00:17:22.356 "data_size": 0 00:17:22.356 } 00:17:22.356 ] 00:17:22.356 }' 00:17:22.356 21:39:42 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:22.356 21:39:42 -- common/autotest_common.sh@10 -- # set +x 00:17:22.614 21:39:42 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:17:22.871 [2024-12-06 21:39:43.173686] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:22.871 BaseBdev3 00:17:22.871 21:39:43 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3 00:17:22.871 21:39:43 -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:17:22.871 21:39:43 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:17:22.871 21:39:43 -- common/autotest_common.sh@899 -- # local i 00:17:22.871 21:39:43 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:17:22.871 21:39:43 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:17:22.871 21:39:43 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:23.129 21:39:43 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:17:23.129 [ 00:17:23.129 { 00:17:23.129 "name": "BaseBdev3", 00:17:23.129 "aliases": [ 00:17:23.129 "8efe555a-bcea-4526-a128-2ae6466d0215" 00:17:23.129 ], 00:17:23.129 "product_name": "Malloc disk", 00:17:23.129 "block_size": 512, 00:17:23.129 "num_blocks": 65536, 00:17:23.129 "uuid": "8efe555a-bcea-4526-a128-2ae6466d0215", 00:17:23.129 "assigned_rate_limits": { 00:17:23.129 "rw_ios_per_sec": 0, 00:17:23.129 "rw_mbytes_per_sec": 0, 00:17:23.129 "r_mbytes_per_sec": 0, 00:17:23.129 "w_mbytes_per_sec": 0 00:17:23.129 }, 00:17:23.129 "claimed": true, 00:17:23.129 "claim_type": "exclusive_write", 00:17:23.129 "zoned": false, 
00:17:23.129 "supported_io_types": { 00:17:23.129 "read": true, 00:17:23.129 "write": true, 00:17:23.129 "unmap": true, 00:17:23.129 "write_zeroes": true, 00:17:23.129 "flush": true, 00:17:23.129 "reset": true, 00:17:23.129 "compare": false, 00:17:23.129 "compare_and_write": false, 00:17:23.129 "abort": true, 00:17:23.129 "nvme_admin": false, 00:17:23.129 "nvme_io": false 00:17:23.129 }, 00:17:23.129 "memory_domains": [ 00:17:23.129 { 00:17:23.129 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:23.129 "dma_device_type": 2 00:17:23.129 } 00:17:23.129 ], 00:17:23.129 "driver_specific": {} 00:17:23.129 } 00:17:23.129 ] 00:17:23.129 21:39:43 -- common/autotest_common.sh@905 -- # return 0 00:17:23.129 21:39:43 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:17:23.129 21:39:43 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:17:23.129 21:39:43 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:17:23.129 21:39:43 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:23.129 21:39:43 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:17:23.129 21:39:43 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:17:23.129 21:39:43 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:17:23.129 21:39:43 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:17:23.129 21:39:43 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:23.129 21:39:43 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:23.129 21:39:43 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:23.129 21:39:43 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:23.129 21:39:43 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:23.129 21:39:43 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:23.387 21:39:43 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:23.387 "name": "Existed_Raid", 00:17:23.387 "uuid": "47451d27-5be4-4de4-b01e-bc8d9ff47be1", 00:17:23.387 "strip_size_kb": 64, 00:17:23.387 "state": "configuring", 00:17:23.387 "raid_level": "raid0", 00:17:23.387 "superblock": true, 00:17:23.387 "num_base_bdevs": 4, 00:17:23.387 "num_base_bdevs_discovered": 3, 00:17:23.387 "num_base_bdevs_operational": 4, 00:17:23.387 "base_bdevs_list": [ 00:17:23.387 { 00:17:23.387 "name": "BaseBdev1", 00:17:23.387 "uuid": "d343951a-de78-4e92-ada8-4b5025fc3fe5", 00:17:23.387 "is_configured": true, 00:17:23.387 "data_offset": 2048, 00:17:23.387 "data_size": 63488 00:17:23.387 }, 00:17:23.387 { 00:17:23.387 "name": "BaseBdev2", 00:17:23.387 "uuid": "cb9760b0-62bc-4a9c-a745-8afc6a6e2940", 00:17:23.387 "is_configured": true, 00:17:23.387 "data_offset": 2048, 00:17:23.387 "data_size": 63488 00:17:23.387 }, 00:17:23.387 { 00:17:23.387 "name": "BaseBdev3", 00:17:23.387 "uuid": "8efe555a-bcea-4526-a128-2ae6466d0215", 00:17:23.387 "is_configured": true, 00:17:23.387 "data_offset": 2048, 00:17:23.387 "data_size": 63488 00:17:23.387 }, 00:17:23.387 { 00:17:23.387 "name": "BaseBdev4", 00:17:23.387 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:23.387 "is_configured": false, 00:17:23.387 "data_offset": 0, 00:17:23.387 "data_size": 0 00:17:23.387 } 00:17:23.387 ] 00:17:23.387 }' 00:17:23.387 21:39:43 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:23.387 21:39:43 -- common/autotest_common.sh@10 -- # set +x 00:17:23.645 21:39:44 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_malloc_create 32 512 -b BaseBdev4 00:17:23.904 [2024-12-06 21:39:44.341229] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:17:23.904 [2024-12-06 21:39:44.341688] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x516000007580 00:17:23.904 [2024-12-06 21:39:44.341872] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:17:23.904 [2024-12-06 21:39:44.342038] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000005860 00:17:23.904 [2024-12-06 21:39:44.342419] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x516000007580 00:17:23.904 BaseBdev4 00:17:23.904 [2024-12-06 21:39:44.342631] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x516000007580 00:17:23.904 [2024-12-06 21:39:44.342961] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:23.904 21:39:44 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev4 00:17:23.904 21:39:44 -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev4 00:17:23.904 21:39:44 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:17:23.904 21:39:44 -- common/autotest_common.sh@899 -- # local i 00:17:23.904 21:39:44 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:17:23.904 21:39:44 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:17:23.904 21:39:44 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:24.162 21:39:44 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:17:24.418 [ 00:17:24.418 { 00:17:24.418 "name": "BaseBdev4", 00:17:24.418 "aliases": [ 00:17:24.418 "dffb6c21-1104-41a1-bfdb-453d7aaf5f7b" 00:17:24.418 ], 00:17:24.418 "product_name": "Malloc disk", 00:17:24.418 "block_size": 512, 00:17:24.418 "num_blocks": 65536, 00:17:24.418 "uuid": "dffb6c21-1104-41a1-bfdb-453d7aaf5f7b", 00:17:24.418 "assigned_rate_limits": { 00:17:24.418 "rw_ios_per_sec": 0, 00:17:24.418 "rw_mbytes_per_sec": 0, 00:17:24.419 "r_mbytes_per_sec": 0, 00:17:24.419 "w_mbytes_per_sec": 0 00:17:24.419 }, 00:17:24.419 "claimed": true, 00:17:24.419 "claim_type": "exclusive_write", 00:17:24.419 "zoned": false, 00:17:24.419 "supported_io_types": { 00:17:24.419 "read": true, 00:17:24.419 "write": true, 00:17:24.419 "unmap": true, 00:17:24.419 "write_zeroes": true, 00:17:24.419 "flush": true, 00:17:24.419 "reset": true, 00:17:24.419 "compare": false, 00:17:24.419 "compare_and_write": false, 00:17:24.419 "abort": true, 00:17:24.419 "nvme_admin": false, 00:17:24.419 "nvme_io": false 00:17:24.419 }, 00:17:24.419 "memory_domains": [ 00:17:24.419 { 00:17:24.419 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:24.419 "dma_device_type": 2 00:17:24.419 } 00:17:24.419 ], 00:17:24.419 "driver_specific": {} 00:17:24.419 } 00:17:24.419 ] 00:17:24.419 21:39:44 -- common/autotest_common.sh@905 -- # return 0 00:17:24.419 21:39:44 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:17:24.419 21:39:44 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:17:24.419 21:39:44 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:17:24.419 21:39:44 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:24.419 21:39:44 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:17:24.419 21:39:44 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 
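
Creating BaseBdev4 is the step that completes the set: the DEBUG lines above show the raid module claiming the new bdev and bringing Existed_Raid online on its own, with no RPC beyond bdev_malloc_create. A hedged sketch of driving and observing that transition, using only RPCs that appear in this log (the polling loop is an illustrative stand-in for the helper's wait):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  sock=/var/tmp/spdk-raid.sock
  # 32 MiB malloc bdev with 512-byte blocks, i.e. the 65536 blocks reported above.
  "$rpc" -s "$sock" bdev_malloc_create 32 512 -b BaseBdev4
  # Poll until the raid bdev reports the configuring -> online transition.
  for _ in $(seq 1 50); do
      state=$("$rpc" -s "$sock" bdev_raid_get_bdevs all |
              jq -r '.[] | select(.name == "Existed_Raid") | .state')
      [[ "$state" == "online" ]] && break
      sleep 0.1
  done
  echo "Existed_Raid state: $state"
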
00:17:24.419 21:39:44 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:17:24.419 21:39:44 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:17:24.419 21:39:44 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:24.419 21:39:44 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:24.419 21:39:44 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:24.419 21:39:44 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:24.419 21:39:44 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:24.419 21:39:44 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:24.676 21:39:45 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:24.676 "name": "Existed_Raid", 00:17:24.676 "uuid": "47451d27-5be4-4de4-b01e-bc8d9ff47be1", 00:17:24.676 "strip_size_kb": 64, 00:17:24.676 "state": "online", 00:17:24.676 "raid_level": "raid0", 00:17:24.676 "superblock": true, 00:17:24.676 "num_base_bdevs": 4, 00:17:24.676 "num_base_bdevs_discovered": 4, 00:17:24.676 "num_base_bdevs_operational": 4, 00:17:24.676 "base_bdevs_list": [ 00:17:24.676 { 00:17:24.676 "name": "BaseBdev1", 00:17:24.676 "uuid": "d343951a-de78-4e92-ada8-4b5025fc3fe5", 00:17:24.676 "is_configured": true, 00:17:24.676 "data_offset": 2048, 00:17:24.676 "data_size": 63488 00:17:24.676 }, 00:17:24.676 { 00:17:24.676 "name": "BaseBdev2", 00:17:24.676 "uuid": "cb9760b0-62bc-4a9c-a745-8afc6a6e2940", 00:17:24.676 "is_configured": true, 00:17:24.676 "data_offset": 2048, 00:17:24.676 "data_size": 63488 00:17:24.676 }, 00:17:24.676 { 00:17:24.676 "name": "BaseBdev3", 00:17:24.676 "uuid": "8efe555a-bcea-4526-a128-2ae6466d0215", 00:17:24.676 "is_configured": true, 00:17:24.676 "data_offset": 2048, 00:17:24.676 "data_size": 63488 00:17:24.676 }, 00:17:24.676 { 00:17:24.676 "name": "BaseBdev4", 00:17:24.676 "uuid": "dffb6c21-1104-41a1-bfdb-453d7aaf5f7b", 00:17:24.676 "is_configured": true, 00:17:24.676 "data_offset": 2048, 00:17:24.676 "data_size": 63488 00:17:24.676 } 00:17:24.676 ] 00:17:24.676 }' 00:17:24.676 21:39:45 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:24.676 21:39:45 -- common/autotest_common.sh@10 -- # set +x 00:17:24.934 21:39:45 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:17:25.192 [2024-12-06 21:39:45.497635] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:25.192 [2024-12-06 21:39:45.497840] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:25.192 [2024-12-06 21:39:45.498038] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:25.192 21:39:45 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:17:25.192 21:39:45 -- bdev/bdev_raid.sh@264 -- # has_redundancy raid0 00:17:25.192 21:39:45 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:17:25.192 21:39:45 -- bdev/bdev_raid.sh@197 -- # return 1 00:17:25.192 21:39:45 -- bdev/bdev_raid.sh@265 -- # expected_state=offline 00:17:25.192 21:39:45 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 3 00:17:25.192 21:39:45 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:25.192 21:39:45 -- bdev/bdev_raid.sh@118 -- # local expected_state=offline 00:17:25.192 21:39:45 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:17:25.192 21:39:45 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:17:25.192 21:39:45 -- bdev/bdev_raid.sh@121 -- 
# local num_base_bdevs_operational=3 00:17:25.192 21:39:45 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:25.192 21:39:45 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:25.192 21:39:45 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:25.192 21:39:45 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:25.192 21:39:45 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:25.192 21:39:45 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:25.450 21:39:45 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:25.450 "name": "Existed_Raid", 00:17:25.450 "uuid": "47451d27-5be4-4de4-b01e-bc8d9ff47be1", 00:17:25.450 "strip_size_kb": 64, 00:17:25.450 "state": "offline", 00:17:25.450 "raid_level": "raid0", 00:17:25.450 "superblock": true, 00:17:25.450 "num_base_bdevs": 4, 00:17:25.450 "num_base_bdevs_discovered": 3, 00:17:25.450 "num_base_bdevs_operational": 3, 00:17:25.450 "base_bdevs_list": [ 00:17:25.450 { 00:17:25.450 "name": null, 00:17:25.450 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:25.450 "is_configured": false, 00:17:25.450 "data_offset": 2048, 00:17:25.450 "data_size": 63488 00:17:25.450 }, 00:17:25.450 { 00:17:25.450 "name": "BaseBdev2", 00:17:25.450 "uuid": "cb9760b0-62bc-4a9c-a745-8afc6a6e2940", 00:17:25.450 "is_configured": true, 00:17:25.450 "data_offset": 2048, 00:17:25.450 "data_size": 63488 00:17:25.450 }, 00:17:25.450 { 00:17:25.450 "name": "BaseBdev3", 00:17:25.450 "uuid": "8efe555a-bcea-4526-a128-2ae6466d0215", 00:17:25.450 "is_configured": true, 00:17:25.450 "data_offset": 2048, 00:17:25.450 "data_size": 63488 00:17:25.450 }, 00:17:25.450 { 00:17:25.450 "name": "BaseBdev4", 00:17:25.450 "uuid": "dffb6c21-1104-41a1-bfdb-453d7aaf5f7b", 00:17:25.450 "is_configured": true, 00:17:25.450 "data_offset": 2048, 00:17:25.450 "data_size": 63488 00:17:25.450 } 00:17:25.450 ] 00:17:25.450 }' 00:17:25.450 21:39:45 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:25.450 21:39:45 -- common/autotest_common.sh@10 -- # set +x 00:17:25.708 21:39:46 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:17:25.708 21:39:46 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:17:25.708 21:39:46 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:25.708 21:39:46 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:17:25.966 21:39:46 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:17:25.966 21:39:46 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:17:25.967 21:39:46 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:17:26.225 [2024-12-06 21:39:46.595616] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:17:26.225 21:39:46 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:17:26.225 21:39:46 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:17:26.225 21:39:46 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:26.225 21:39:46 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:17:26.501 21:39:46 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:17:26.501 21:39:46 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:17:26.501 21:39:46 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_malloc_delete BaseBdev3 00:17:26.761 [2024-12-06 21:39:47.170227] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:17:27.019 21:39:47 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:17:27.019 21:39:47 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:17:27.019 21:39:47 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:17:27.019 21:39:47 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:27.278 21:39:47 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:17:27.278 21:39:47 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:17:27.278 21:39:47 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:17:27.278 [2024-12-06 21:39:47.707995] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:17:27.278 [2024-12-06 21:39:47.708059] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000007580 name Existed_Raid, state offline 00:17:27.550 21:39:47 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:17:27.550 21:39:47 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:17:27.550 21:39:47 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:27.550 21:39:47 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:17:27.827 21:39:48 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:17:27.827 21:39:48 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:17:27.827 21:39:48 -- bdev/bdev_raid.sh@287 -- # killprocess 74785 00:17:27.827 21:39:48 -- common/autotest_common.sh@936 -- # '[' -z 74785 ']' 00:17:27.827 21:39:48 -- common/autotest_common.sh@940 -- # kill -0 74785 00:17:27.827 21:39:48 -- common/autotest_common.sh@941 -- # uname 00:17:27.827 21:39:48 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:27.827 21:39:48 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 74785 00:17:27.827 killing process with pid 74785 00:17:27.827 21:39:48 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:17:27.827 21:39:48 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:17:27.827 21:39:48 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 74785' 00:17:27.827 21:39:48 -- common/autotest_common.sh@955 -- # kill 74785 00:17:27.827 21:39:48 -- common/autotest_common.sh@960 -- # wait 74785 00:17:27.827 [2024-12-06 21:39:48.085063] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:27.827 [2024-12-06 21:39:48.085196] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:28.765 ************************************ 00:17:28.765 END TEST raid_state_function_test_sb 00:17:28.765 ************************************ 00:17:28.765 21:39:49 -- bdev/bdev_raid.sh@289 -- # return 0 00:17:28.765 00:17:28.765 real 0m12.626s 00:17:28.765 user 0m21.285s 00:17:28.765 sys 0m1.764s 00:17:28.765 21:39:49 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:17:28.765 21:39:49 -- common/autotest_common.sh@10 -- # set +x 00:17:28.765 21:39:49 -- bdev/bdev_raid.sh@729 -- # run_test raid_superblock_test raid_superblock_test raid0 4 00:17:28.765 21:39:49 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:17:28.765 21:39:49 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:17:28.765 21:39:49 -- common/autotest_common.sh@10 -- # set +x 00:17:28.765 ************************************ 00:17:28.765 START TEST 
raid_superblock_test 00:17:28.765 ************************************ 00:17:28.765 21:39:49 -- common/autotest_common.sh@1114 -- # raid_superblock_test raid0 4 00:17:28.765 21:39:49 -- bdev/bdev_raid.sh@338 -- # local raid_level=raid0 00:17:28.765 21:39:49 -- bdev/bdev_raid.sh@339 -- # local num_base_bdevs=4 00:17:28.765 21:39:49 -- bdev/bdev_raid.sh@340 -- # base_bdevs_malloc=() 00:17:28.765 21:39:49 -- bdev/bdev_raid.sh@340 -- # local base_bdevs_malloc 00:17:28.765 21:39:49 -- bdev/bdev_raid.sh@341 -- # base_bdevs_pt=() 00:17:28.765 21:39:49 -- bdev/bdev_raid.sh@341 -- # local base_bdevs_pt 00:17:28.765 21:39:49 -- bdev/bdev_raid.sh@342 -- # base_bdevs_pt_uuid=() 00:17:28.765 21:39:49 -- bdev/bdev_raid.sh@342 -- # local base_bdevs_pt_uuid 00:17:28.765 21:39:49 -- bdev/bdev_raid.sh@343 -- # local raid_bdev_name=raid_bdev1 00:17:28.765 21:39:49 -- bdev/bdev_raid.sh@344 -- # local strip_size 00:17:28.765 21:39:49 -- bdev/bdev_raid.sh@345 -- # local strip_size_create_arg 00:17:28.765 21:39:49 -- bdev/bdev_raid.sh@346 -- # local raid_bdev_uuid 00:17:28.765 21:39:49 -- bdev/bdev_raid.sh@347 -- # local raid_bdev 00:17:28.765 21:39:49 -- bdev/bdev_raid.sh@349 -- # '[' raid0 '!=' raid1 ']' 00:17:28.765 21:39:49 -- bdev/bdev_raid.sh@350 -- # strip_size=64 00:17:28.765 21:39:49 -- bdev/bdev_raid.sh@351 -- # strip_size_create_arg='-z 64' 00:17:28.765 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:17:28.765 21:39:49 -- bdev/bdev_raid.sh@357 -- # raid_pid=75193 00:17:28.765 21:39:49 -- bdev/bdev_raid.sh@358 -- # waitforlisten 75193 /var/tmp/spdk-raid.sock 00:17:28.765 21:39:49 -- bdev/bdev_raid.sh@356 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:17:28.765 21:39:49 -- common/autotest_common.sh@829 -- # '[' -z 75193 ']' 00:17:28.765 21:39:49 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:17:28.765 21:39:49 -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:28.765 21:39:49 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:17:28.765 21:39:49 -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:28.765 21:39:49 -- common/autotest_common.sh@10 -- # set +x 00:17:29.025 [2024-12-06 21:39:49.274663] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:17:29.025 [2024-12-06 21:39:49.274914] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75193 ] 00:17:29.025 [2024-12-06 21:39:49.467196] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:29.284 [2024-12-06 21:39:49.633124] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:29.543 [2024-12-06 21:39:49.799194] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:29.801 21:39:50 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:29.801 21:39:50 -- common/autotest_common.sh@862 -- # return 0 00:17:29.801 21:39:50 -- bdev/bdev_raid.sh@361 -- # (( i = 1 )) 00:17:29.801 21:39:50 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:17:29.801 21:39:50 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc1 00:17:29.801 21:39:50 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt1 00:17:29.801 21:39:50 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:17:29.801 21:39:50 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:17:29.801 21:39:50 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:17:29.801 21:39:50 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:17:29.801 21:39:50 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:17:30.061 malloc1 00:17:30.061 21:39:50 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:17:30.322 [2024-12-06 21:39:50.698267] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:17:30.322 [2024-12-06 21:39:50.698360] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:30.322 [2024-12-06 21:39:50.698405] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000006980 00:17:30.322 [2024-12-06 21:39:50.698436] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:30.322 [2024-12-06 21:39:50.700922] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:30.322 [2024-12-06 21:39:50.701135] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:17:30.322 pt1 00:17:30.322 21:39:50 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:17:30.322 21:39:50 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:17:30.322 21:39:50 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc2 00:17:30.322 21:39:50 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt2 00:17:30.322 21:39:50 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:17:30.322 21:39:50 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:17:30.322 21:39:50 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:17:30.322 21:39:50 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:17:30.322 21:39:50 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:17:30.581 malloc2 00:17:30.581 21:39:50 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 
00000000-0000-0000-0000-000000000002 00:17:30.841 [2024-12-06 21:39:51.182199] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:30.841 [2024-12-06 21:39:51.182286] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:30.841 [2024-12-06 21:39:51.182320] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000007580 00:17:30.841 [2024-12-06 21:39:51.182335] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:30.841 [2024-12-06 21:39:51.184955] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:30.841 [2024-12-06 21:39:51.185152] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:30.841 pt2 00:17:30.841 21:39:51 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:17:30.841 21:39:51 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:17:30.841 21:39:51 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc3 00:17:30.841 21:39:51 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt3 00:17:30.841 21:39:51 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:17:30.841 21:39:51 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:17:30.841 21:39:51 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:17:30.841 21:39:51 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:17:30.841 21:39:51 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:17:31.100 malloc3 00:17:31.100 21:39:51 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:17:31.360 [2024-12-06 21:39:51.656968] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:17:31.360 [2024-12-06 21:39:51.657236] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:31.360 [2024-12-06 21:39:51.657283] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000008180 00:17:31.360 [2024-12-06 21:39:51.657299] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:31.360 [2024-12-06 21:39:51.659547] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:31.360 [2024-12-06 21:39:51.659587] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:17:31.360 pt3 00:17:31.360 21:39:51 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:17:31.360 21:39:51 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:17:31.360 21:39:51 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc4 00:17:31.360 21:39:51 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt4 00:17:31.360 21:39:51 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:17:31.360 21:39:51 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:17:31.360 21:39:51 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:17:31.360 21:39:51 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:17:31.360 21:39:51 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc4 00:17:31.619 malloc4 00:17:31.619 21:39:51 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 
00000000-0000-0000-0000-000000000004 00:17:31.619 [2024-12-06 21:39:52.073873] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:17:31.619 [2024-12-06 21:39:52.073971] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:31.619 [2024-12-06 21:39:52.074008] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000008d80 00:17:31.619 [2024-12-06 21:39:52.074021] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:31.619 [2024-12-06 21:39:52.076278] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:31.619 [2024-12-06 21:39:52.076318] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:17:31.619 pt4 00:17:31.619 21:39:52 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:17:31.619 21:39:52 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:17:31.619 21:39:52 -- bdev/bdev_raid.sh@375 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'pt1 pt2 pt3 pt4' -n raid_bdev1 -s 00:17:31.878 [2024-12-06 21:39:52.318087] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:17:31.878 [2024-12-06 21:39:52.320302] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:31.878 [2024-12-06 21:39:52.320476] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:17:31.878 [2024-12-06 21:39:52.320554] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:17:31.878 [2024-12-06 21:39:52.320812] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x516000009380 00:17:31.878 [2024-12-06 21:39:52.320979] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:17:31.879 [2024-12-06 21:39:52.321129] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000005790 00:17:31.879 [2024-12-06 21:39:52.321577] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x516000009380 00:17:31.879 [2024-12-06 21:39:52.321609] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x516000009380 00:17:31.879 [2024-12-06 21:39:52.321796] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:31.879 21:39:52 -- bdev/bdev_raid.sh@376 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:17:31.879 21:39:52 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:17:31.879 21:39:52 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:17:31.879 21:39:52 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:17:31.879 21:39:52 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:17:31.879 21:39:52 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:17:31.879 21:39:52 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:31.879 21:39:52 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:31.879 21:39:52 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:31.879 21:39:52 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:31.879 21:39:52 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:31.879 21:39:52 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:32.138 21:39:52 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:32.138 "name": "raid_bdev1", 00:17:32.138 "uuid": 
"6c5e1f09-72ef-41bf-a9b0-04228bc9cad9", 00:17:32.138 "strip_size_kb": 64, 00:17:32.138 "state": "online", 00:17:32.138 "raid_level": "raid0", 00:17:32.138 "superblock": true, 00:17:32.138 "num_base_bdevs": 4, 00:17:32.138 "num_base_bdevs_discovered": 4, 00:17:32.138 "num_base_bdevs_operational": 4, 00:17:32.138 "base_bdevs_list": [ 00:17:32.138 { 00:17:32.138 "name": "pt1", 00:17:32.138 "uuid": "a177a61a-075f-56d8-b250-3f573afa54a9", 00:17:32.138 "is_configured": true, 00:17:32.138 "data_offset": 2048, 00:17:32.138 "data_size": 63488 00:17:32.138 }, 00:17:32.138 { 00:17:32.138 "name": "pt2", 00:17:32.138 "uuid": "b75d2fdf-21c5-5f64-89e8-c14e48a7f6b2", 00:17:32.138 "is_configured": true, 00:17:32.138 "data_offset": 2048, 00:17:32.138 "data_size": 63488 00:17:32.138 }, 00:17:32.138 { 00:17:32.138 "name": "pt3", 00:17:32.138 "uuid": "4b2ec517-eaa7-58c9-bef4-22a151f7ef03", 00:17:32.138 "is_configured": true, 00:17:32.138 "data_offset": 2048, 00:17:32.138 "data_size": 63488 00:17:32.138 }, 00:17:32.138 { 00:17:32.138 "name": "pt4", 00:17:32.138 "uuid": "c357908f-1b6a-531d-b15d-d310082cb74d", 00:17:32.138 "is_configured": true, 00:17:32.138 "data_offset": 2048, 00:17:32.138 "data_size": 63488 00:17:32.138 } 00:17:32.138 ] 00:17:32.138 }' 00:17:32.138 21:39:52 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:32.138 21:39:52 -- common/autotest_common.sh@10 -- # set +x 00:17:32.397 21:39:52 -- bdev/bdev_raid.sh@379 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:17:32.397 21:39:52 -- bdev/bdev_raid.sh@379 -- # jq -r '.[] | .uuid' 00:17:32.655 [2024-12-06 21:39:53.094498] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:32.655 21:39:53 -- bdev/bdev_raid.sh@379 -- # raid_bdev_uuid=6c5e1f09-72ef-41bf-a9b0-04228bc9cad9 00:17:32.655 21:39:53 -- bdev/bdev_raid.sh@380 -- # '[' -z 6c5e1f09-72ef-41bf-a9b0-04228bc9cad9 ']' 00:17:32.655 21:39:53 -- bdev/bdev_raid.sh@385 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:17:32.914 [2024-12-06 21:39:53.338262] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:32.914 [2024-12-06 21:39:53.338302] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:32.914 [2024-12-06 21:39:53.338386] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:32.914 [2024-12-06 21:39:53.338526] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:32.914 [2024-12-06 21:39:53.338541] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000009380 name raid_bdev1, state offline 00:17:32.915 21:39:53 -- bdev/bdev_raid.sh@386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:32.915 21:39:53 -- bdev/bdev_raid.sh@386 -- # jq -r '.[]' 00:17:33.173 21:39:53 -- bdev/bdev_raid.sh@386 -- # raid_bdev= 00:17:33.173 21:39:53 -- bdev/bdev_raid.sh@387 -- # '[' -n '' ']' 00:17:33.173 21:39:53 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:17:33.174 21:39:53 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:17:33.432 21:39:53 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:17:33.432 21:39:53 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_passthru_delete pt2 00:17:33.691 21:39:54 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:17:33.691 21:39:54 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:17:33.949 21:39:54 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:17:33.949 21:39:54 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt4 00:17:34.208 21:39:54 -- bdev/bdev_raid.sh@395 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:17:34.208 21:39:54 -- bdev/bdev_raid.sh@395 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:17:34.467 21:39:54 -- bdev/bdev_raid.sh@395 -- # '[' false == true ']' 00:17:34.467 21:39:54 -- bdev/bdev_raid.sh@401 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:17:34.467 21:39:54 -- common/autotest_common.sh@650 -- # local es=0 00:17:34.467 21:39:54 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:17:34.467 21:39:54 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:34.467 21:39:54 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:34.467 21:39:54 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:34.467 21:39:54 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:34.467 21:39:54 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:34.467 21:39:54 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:34.467 21:39:54 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:34.467 21:39:54 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:17:34.467 21:39:54 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:17:34.467 [2024-12-06 21:39:54.910633] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:17:34.467 [2024-12-06 21:39:54.912631] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:17:34.467 [2024-12-06 21:39:54.912895] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:17:34.467 [2024-12-06 21:39:54.912952] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:17:34.467 [2024-12-06 21:39:54.913030] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc1 00:17:34.467 [2024-12-06 21:39:54.913090] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc2 00:17:34.467 [2024-12-06 21:39:54.913121] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc3 00:17:34.467 [2024-12-06 21:39:54.913147] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc4 00:17:34.467 [2024-12-06 21:39:54.913181] 
bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:34.467 [2024-12-06 21:39:54.913196] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000009980 name raid_bdev1, state configuring 00:17:34.467 request: 00:17:34.467 { 00:17:34.467 "name": "raid_bdev1", 00:17:34.467 "raid_level": "raid0", 00:17:34.467 "base_bdevs": [ 00:17:34.467 "malloc1", 00:17:34.467 "malloc2", 00:17:34.467 "malloc3", 00:17:34.467 "malloc4" 00:17:34.467 ], 00:17:34.467 "superblock": false, 00:17:34.467 "strip_size_kb": 64, 00:17:34.467 "method": "bdev_raid_create", 00:17:34.467 "req_id": 1 00:17:34.467 } 00:17:34.467 Got JSON-RPC error response 00:17:34.467 response: 00:17:34.467 { 00:17:34.467 "code": -17, 00:17:34.467 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:17:34.467 } 00:17:34.467 21:39:54 -- common/autotest_common.sh@653 -- # es=1 00:17:34.467 21:39:54 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:34.467 21:39:54 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:34.467 21:39:54 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:34.467 21:39:54 -- bdev/bdev_raid.sh@403 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:34.467 21:39:54 -- bdev/bdev_raid.sh@403 -- # jq -r '.[]' 00:17:34.726 21:39:55 -- bdev/bdev_raid.sh@403 -- # raid_bdev= 00:17:34.726 21:39:55 -- bdev/bdev_raid.sh@404 -- # '[' -n '' ']' 00:17:34.726 21:39:55 -- bdev/bdev_raid.sh@409 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:17:34.983 [2024-12-06 21:39:55.358668] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:17:34.983 [2024-12-06 21:39:55.358916] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:34.983 [2024-12-06 21:39:55.359063] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000009f80 00:17:34.983 [2024-12-06 21:39:55.359192] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:34.983 [2024-12-06 21:39:55.361640] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:34.983 [2024-12-06 21:39:55.361836] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:17:34.983 [2024-12-06 21:39:55.362067] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1 00:17:34.983 pt1 00:17:34.983 [2024-12-06 21:39:55.362243] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:17:34.983 21:39:55 -- bdev/bdev_raid.sh@412 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 4 00:17:34.983 21:39:55 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:17:34.983 21:39:55 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:17:34.983 21:39:55 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:17:34.983 21:39:55 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:17:34.983 21:39:55 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:17:34.983 21:39:55 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:34.983 21:39:55 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:34.983 21:39:55 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:34.983 21:39:55 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:34.983 21:39:55 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:34.983 21:39:55 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:35.241 21:39:55 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:35.241 "name": "raid_bdev1", 00:17:35.241 "uuid": "6c5e1f09-72ef-41bf-a9b0-04228bc9cad9", 00:17:35.241 "strip_size_kb": 64, 00:17:35.241 "state": "configuring", 00:17:35.241 "raid_level": "raid0", 00:17:35.241 "superblock": true, 00:17:35.241 "num_base_bdevs": 4, 00:17:35.241 "num_base_bdevs_discovered": 1, 00:17:35.241 "num_base_bdevs_operational": 4, 00:17:35.241 "base_bdevs_list": [ 00:17:35.241 { 00:17:35.241 "name": "pt1", 00:17:35.241 "uuid": "a177a61a-075f-56d8-b250-3f573afa54a9", 00:17:35.241 "is_configured": true, 00:17:35.241 "data_offset": 2048, 00:17:35.241 "data_size": 63488 00:17:35.241 }, 00:17:35.241 { 00:17:35.241 "name": null, 00:17:35.241 "uuid": "b75d2fdf-21c5-5f64-89e8-c14e48a7f6b2", 00:17:35.241 "is_configured": false, 00:17:35.241 "data_offset": 2048, 00:17:35.241 "data_size": 63488 00:17:35.241 }, 00:17:35.241 { 00:17:35.241 "name": null, 00:17:35.241 "uuid": "4b2ec517-eaa7-58c9-bef4-22a151f7ef03", 00:17:35.241 "is_configured": false, 00:17:35.241 "data_offset": 2048, 00:17:35.241 "data_size": 63488 00:17:35.241 }, 00:17:35.241 { 00:17:35.241 "name": null, 00:17:35.241 "uuid": "c357908f-1b6a-531d-b15d-d310082cb74d", 00:17:35.241 "is_configured": false, 00:17:35.241 "data_offset": 2048, 00:17:35.241 "data_size": 63488 00:17:35.241 } 00:17:35.241 ] 00:17:35.241 }' 00:17:35.241 21:39:55 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:35.241 21:39:55 -- common/autotest_common.sh@10 -- # set +x 00:17:35.498 21:39:55 -- bdev/bdev_raid.sh@414 -- # '[' 4 -gt 2 ']' 00:17:35.498 21:39:55 -- bdev/bdev_raid.sh@416 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:35.757 [2024-12-06 21:39:56.087072] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:35.757 [2024-12-06 21:39:56.087161] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:35.757 [2024-12-06 21:39:56.087197] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x51600000a880 00:17:35.757 [2024-12-06 21:39:56.087212] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:35.757 [2024-12-06 21:39:56.087737] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:35.757 [2024-12-06 21:39:56.087779] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:35.757 [2024-12-06 21:39:56.087910] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:17:35.757 [2024-12-06 21:39:56.087959] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:35.757 pt2 00:17:35.757 21:39:56 -- bdev/bdev_raid.sh@417 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:17:36.017 [2024-12-06 21:39:56.287133] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:17:36.017 21:39:56 -- bdev/bdev_raid.sh@418 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 4 00:17:36.017 21:39:56 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:17:36.017 21:39:56 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:17:36.017 21:39:56 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:17:36.017 21:39:56 -- 
bdev/bdev_raid.sh@120 -- # local strip_size=64 00:17:36.017 21:39:56 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:17:36.017 21:39:56 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:36.017 21:39:56 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:36.017 21:39:56 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:36.017 21:39:56 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:36.017 21:39:56 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:36.017 21:39:56 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:36.276 21:39:56 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:36.276 "name": "raid_bdev1", 00:17:36.276 "uuid": "6c5e1f09-72ef-41bf-a9b0-04228bc9cad9", 00:17:36.276 "strip_size_kb": 64, 00:17:36.276 "state": "configuring", 00:17:36.276 "raid_level": "raid0", 00:17:36.276 "superblock": true, 00:17:36.276 "num_base_bdevs": 4, 00:17:36.276 "num_base_bdevs_discovered": 1, 00:17:36.276 "num_base_bdevs_operational": 4, 00:17:36.276 "base_bdevs_list": [ 00:17:36.276 { 00:17:36.276 "name": "pt1", 00:17:36.276 "uuid": "a177a61a-075f-56d8-b250-3f573afa54a9", 00:17:36.276 "is_configured": true, 00:17:36.276 "data_offset": 2048, 00:17:36.276 "data_size": 63488 00:17:36.276 }, 00:17:36.276 { 00:17:36.276 "name": null, 00:17:36.276 "uuid": "b75d2fdf-21c5-5f64-89e8-c14e48a7f6b2", 00:17:36.276 "is_configured": false, 00:17:36.276 "data_offset": 2048, 00:17:36.276 "data_size": 63488 00:17:36.276 }, 00:17:36.276 { 00:17:36.276 "name": null, 00:17:36.276 "uuid": "4b2ec517-eaa7-58c9-bef4-22a151f7ef03", 00:17:36.276 "is_configured": false, 00:17:36.276 "data_offset": 2048, 00:17:36.276 "data_size": 63488 00:17:36.276 }, 00:17:36.276 { 00:17:36.276 "name": null, 00:17:36.276 "uuid": "c357908f-1b6a-531d-b15d-d310082cb74d", 00:17:36.276 "is_configured": false, 00:17:36.276 "data_offset": 2048, 00:17:36.276 "data_size": 63488 00:17:36.276 } 00:17:36.276 ] 00:17:36.276 }' 00:17:36.276 21:39:56 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:36.276 21:39:56 -- common/autotest_common.sh@10 -- # set +x 00:17:36.535 21:39:56 -- bdev/bdev_raid.sh@422 -- # (( i = 1 )) 00:17:36.535 21:39:56 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:17:36.535 21:39:56 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:36.794 [2024-12-06 21:39:57.087287] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:36.794 [2024-12-06 21:39:57.087362] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:36.794 [2024-12-06 21:39:57.087389] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x51600000ab80 00:17:36.794 [2024-12-06 21:39:57.087405] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:36.794 [2024-12-06 21:39:57.087959] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:36.794 [2024-12-06 21:39:57.087994] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:36.794 [2024-12-06 21:39:57.088091] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:17:36.794 [2024-12-06 21:39:57.088125] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:36.794 pt2 00:17:36.794 21:39:57 -- 
bdev/bdev_raid.sh@422 -- # (( i++ )) 00:17:36.794 21:39:57 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:17:36.794 21:39:57 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:17:36.794 [2024-12-06 21:39:57.291380] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:17:36.794 [2024-12-06 21:39:57.291506] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:36.794 [2024-12-06 21:39:57.291539] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x51600000ae80 00:17:36.794 [2024-12-06 21:39:57.291557] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:37.053 [2024-12-06 21:39:57.292072] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:37.053 [2024-12-06 21:39:57.292119] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:17:37.053 [2024-12-06 21:39:57.292218] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3 00:17:37.053 [2024-12-06 21:39:57.292258] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:17:37.053 pt3 00:17:37.053 21:39:57 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:17:37.053 21:39:57 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:17:37.053 21:39:57 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:17:37.053 [2024-12-06 21:39:57.499377] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:17:37.053 [2024-12-06 21:39:57.499490] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:37.053 [2024-12-06 21:39:57.499523] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x51600000b180 00:17:37.053 [2024-12-06 21:39:57.499539] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:37.053 [2024-12-06 21:39:57.500048] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:37.053 [2024-12-06 21:39:57.500083] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:17:37.053 [2024-12-06 21:39:57.500176] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt4 00:17:37.053 [2024-12-06 21:39:57.500224] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:17:37.053 [2024-12-06 21:39:57.500413] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x51600000a580 00:17:37.053 [2024-12-06 21:39:57.500435] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:17:37.053 [2024-12-06 21:39:57.500554] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000005860 00:17:37.053 [2024-12-06 21:39:57.500938] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x51600000a580 00:17:37.053 [2024-12-06 21:39:57.501120] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x51600000a580 00:17:37.053 [2024-12-06 21:39:57.501284] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:37.053 pt4 00:17:37.053 21:39:57 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:17:37.053 21:39:57 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs 
)) 00:17:37.053 21:39:57 -- bdev/bdev_raid.sh@427 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:17:37.053 21:39:57 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:17:37.053 21:39:57 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:17:37.053 21:39:57 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:17:37.053 21:39:57 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:17:37.053 21:39:57 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:17:37.054 21:39:57 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:37.054 21:39:57 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:37.054 21:39:57 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:37.054 21:39:57 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:37.054 21:39:57 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:37.054 21:39:57 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:37.312 21:39:57 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:37.312 "name": "raid_bdev1", 00:17:37.312 "uuid": "6c5e1f09-72ef-41bf-a9b0-04228bc9cad9", 00:17:37.312 "strip_size_kb": 64, 00:17:37.312 "state": "online", 00:17:37.312 "raid_level": "raid0", 00:17:37.312 "superblock": true, 00:17:37.312 "num_base_bdevs": 4, 00:17:37.312 "num_base_bdevs_discovered": 4, 00:17:37.312 "num_base_bdevs_operational": 4, 00:17:37.312 "base_bdevs_list": [ 00:17:37.312 { 00:17:37.312 "name": "pt1", 00:17:37.312 "uuid": "a177a61a-075f-56d8-b250-3f573afa54a9", 00:17:37.312 "is_configured": true, 00:17:37.312 "data_offset": 2048, 00:17:37.312 "data_size": 63488 00:17:37.312 }, 00:17:37.312 { 00:17:37.312 "name": "pt2", 00:17:37.312 "uuid": "b75d2fdf-21c5-5f64-89e8-c14e48a7f6b2", 00:17:37.312 "is_configured": true, 00:17:37.312 "data_offset": 2048, 00:17:37.312 "data_size": 63488 00:17:37.312 }, 00:17:37.312 { 00:17:37.312 "name": "pt3", 00:17:37.312 "uuid": "4b2ec517-eaa7-58c9-bef4-22a151f7ef03", 00:17:37.312 "is_configured": true, 00:17:37.312 "data_offset": 2048, 00:17:37.312 "data_size": 63488 00:17:37.312 }, 00:17:37.312 { 00:17:37.312 "name": "pt4", 00:17:37.312 "uuid": "c357908f-1b6a-531d-b15d-d310082cb74d", 00:17:37.312 "is_configured": true, 00:17:37.312 "data_offset": 2048, 00:17:37.312 "data_size": 63488 00:17:37.312 } 00:17:37.312 ] 00:17:37.312 }' 00:17:37.312 21:39:57 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:37.312 21:39:57 -- common/autotest_common.sh@10 -- # set +x 00:17:37.571 21:39:58 -- bdev/bdev_raid.sh@430 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:17:37.571 21:39:58 -- bdev/bdev_raid.sh@430 -- # jq -r '.[] | .uuid' 00:17:37.830 [2024-12-06 21:39:58.263832] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:37.830 21:39:58 -- bdev/bdev_raid.sh@430 -- # '[' 6c5e1f09-72ef-41bf-a9b0-04228bc9cad9 '!=' 6c5e1f09-72ef-41bf-a9b0-04228bc9cad9 ']' 00:17:37.830 21:39:58 -- bdev/bdev_raid.sh@434 -- # has_redundancy raid0 00:17:37.830 21:39:58 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:17:37.830 21:39:58 -- bdev/bdev_raid.sh@197 -- # return 1 00:17:37.830 21:39:58 -- bdev/bdev_raid.sh@511 -- # killprocess 75193 00:17:37.830 21:39:58 -- common/autotest_common.sh@936 -- # '[' -z 75193 ']' 00:17:37.830 21:39:58 -- common/autotest_common.sh@940 -- # kill -0 75193 00:17:37.830 21:39:58 -- common/autotest_common.sh@941 -- # uname 00:17:37.830 21:39:58 -- 
common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:37.830 21:39:58 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 75193 00:17:37.830 killing process with pid 75193 00:17:37.830 21:39:58 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:17:37.830 21:39:58 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:17:37.830 21:39:58 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 75193' 00:17:37.830 21:39:58 -- common/autotest_common.sh@955 -- # kill 75193 00:17:37.830 [2024-12-06 21:39:58.318092] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:37.830 21:39:58 -- common/autotest_common.sh@960 -- # wait 75193 00:17:37.830 [2024-12-06 21:39:58.318166] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:37.830 [2024-12-06 21:39:58.318239] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:37.830 [2024-12-06 21:39:58.318252] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x51600000a580 name raid_bdev1, state offline 00:17:38.398 [2024-12-06 21:39:58.603544] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:39.335 21:39:59 -- bdev/bdev_raid.sh@513 -- # return 0 00:17:39.335 00:17:39.335 real 0m10.463s 00:17:39.335 user 0m17.331s 00:17:39.335 sys 0m1.466s 00:17:39.335 21:39:59 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:17:39.335 ************************************ 00:17:39.335 END TEST raid_superblock_test 00:17:39.335 ************************************ 00:17:39.335 21:39:59 -- common/autotest_common.sh@10 -- # set +x 00:17:39.335 21:39:59 -- bdev/bdev_raid.sh@726 -- # for level in raid0 concat raid1 00:17:39.335 21:39:59 -- bdev/bdev_raid.sh@727 -- # run_test raid_state_function_test raid_state_function_test concat 4 false 00:17:39.335 21:39:59 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:17:39.335 21:39:59 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:17:39.335 21:39:59 -- common/autotest_common.sh@10 -- # set +x 00:17:39.335 ************************************ 00:17:39.335 START TEST raid_state_function_test 00:17:39.335 ************************************ 00:17:39.335 21:39:59 -- common/autotest_common.sh@1114 -- # raid_state_function_test concat 4 false 00:17:39.335 21:39:59 -- bdev/bdev_raid.sh@202 -- # local raid_level=concat 00:17:39.335 21:39:59 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=4 00:17:39.335 21:39:59 -- bdev/bdev_raid.sh@204 -- # local superblock=false 00:17:39.335 21:39:59 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:17:39.335 21:39:59 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:17:39.335 21:39:59 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:17:39.335 21:39:59 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev1 00:17:39.335 21:39:59 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:17:39.335 21:39:59 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:17:39.335 21:39:59 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev2 00:17:39.335 21:39:59 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:17:39.335 21:39:59 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:17:39.335 21:39:59 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev3 00:17:39.335 21:39:59 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:17:39.335 21:39:59 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:17:39.335 21:39:59 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev4 00:17:39.335 21:39:59 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:17:39.335 
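The raid_superblock_test that just finished drives every step through scripts/rpc.py against the app's UNIX socket. For reference, a minimal stand-alone sketch of the same flow (malloc bdevs wrapped in passthru bdevs, assembled into a raid0 with a superblock, then checked with jq) is below; it assumes a bdev_svc app already listening on /var/tmp/spdk-raid.sock, and the sizes and UUIDs are illustrative values taken from the trace, not a definitive harness.

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
sock=/var/tmp/spdk-raid.sock
for i in 1 2 3 4; do
    # 32 MB malloc bdev with 512-byte blocks, as seen in the trace
    $rpc -s $sock bdev_malloc_create 32 512 -b malloc$i
    # wrap it in a passthru bdev so the test can claim and release it independently
    $rpc -s $sock bdev_passthru_create -b malloc$i -p pt$i \
        -u 00000000-0000-0000-0000-00000000000$i
done
# -z 64: strip size in KiB; -s: write a raid superblock onto the base bdevs
$rpc -s $sock bdev_raid_create -z 64 -s -r raid0 -b 'pt1 pt2 pt3 pt4' -n raid_bdev1
$rpc -s $sock bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1").state'

Once all four base bdevs are claimed the printed state should be online, matching the verify_raid_bdev_state output earlier in the trace.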
21:39:59 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:17:39.335 21:39:59 -- bdev/bdev_raid.sh@206 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:17:39.335 21:39:59 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:17:39.335 21:39:59 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:17:39.335 21:39:59 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:17:39.335 21:39:59 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:17:39.335 21:39:59 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:17:39.335 21:39:59 -- bdev/bdev_raid.sh@212 -- # '[' concat '!=' raid1 ']' 00:17:39.335 21:39:59 -- bdev/bdev_raid.sh@213 -- # strip_size=64 00:17:39.335 21:39:59 -- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64' 00:17:39.335 21:39:59 -- bdev/bdev_raid.sh@219 -- # '[' false = true ']' 00:17:39.335 21:39:59 -- bdev/bdev_raid.sh@222 -- # superblock_create_arg= 00:17:39.335 21:39:59 -- bdev/bdev_raid.sh@226 -- # raid_pid=75484 00:17:39.335 Process raid pid: 75484 00:17:39.335 21:39:59 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:17:39.335 21:39:59 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 75484' 00:17:39.335 21:39:59 -- bdev/bdev_raid.sh@228 -- # waitforlisten 75484 /var/tmp/spdk-raid.sock 00:17:39.335 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:17:39.335 21:39:59 -- common/autotest_common.sh@829 -- # '[' -z 75484 ']' 00:17:39.335 21:39:59 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:17:39.335 21:39:59 -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:39.335 21:39:59 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:17:39.335 21:39:59 -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:39.335 21:39:59 -- common/autotest_common.sh@10 -- # set +x 00:17:39.335 [2024-12-06 21:39:59.775252] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
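The waitforlisten step traced above is the harness blocking until the freshly launched bdev_svc app opens its RPC socket, so that no rpc.py call races the startup. A hand-rolled equivalent might look like the sketch below; the polling loop, retry count, and use of the socket's existence as a readiness proxy are simplifications of what common/autotest_common.sh actually does.

# Start the app under test on its private RPC socket, then wait for the socket.
/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc \
    -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid &
raid_pid=$!
for _ in $(seq 1 100); do
    [ -S /var/tmp/spdk-raid.sock ] && break   # socket present; assume RPC server is up
    sleep 0.1
done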
00:17:39.335 [2024-12-06 21:39:59.775406] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:39.594 [2024-12-06 21:39:59.944173] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:39.852 [2024-12-06 21:40:00.119719] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:39.852 [2024-12-06 21:40:00.282602] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:40.419 21:40:00 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:40.419 21:40:00 -- common/autotest_common.sh@862 -- # return 0 00:17:40.419 21:40:00 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:17:40.419 [2024-12-06 21:40:00.886785] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:40.419 [2024-12-06 21:40:00.886865] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:40.419 [2024-12-06 21:40:00.886894] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:40.419 [2024-12-06 21:40:00.886909] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:40.419 [2024-12-06 21:40:00.886917] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:17:40.419 [2024-12-06 21:40:00.886929] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:17:40.419 [2024-12-06 21:40:00.886937] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:17:40.419 [2024-12-06 21:40:00.886949] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:17:40.419 21:40:00 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:17:40.419 21:40:00 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:40.419 21:40:00 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:17:40.419 21:40:00 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:17:40.419 21:40:00 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:17:40.419 21:40:00 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:17:40.419 21:40:00 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:40.419 21:40:00 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:40.419 21:40:00 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:40.419 21:40:00 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:40.419 21:40:00 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:40.419 21:40:00 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:40.677 21:40:01 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:40.677 "name": "Existed_Raid", 00:17:40.677 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:40.677 "strip_size_kb": 64, 00:17:40.677 "state": "configuring", 00:17:40.677 "raid_level": "concat", 00:17:40.677 "superblock": false, 00:17:40.677 "num_base_bdevs": 4, 00:17:40.677 "num_base_bdevs_discovered": 0, 00:17:40.677 "num_base_bdevs_operational": 4, 00:17:40.677 "base_bdevs_list": [ 00:17:40.677 { 00:17:40.677 
"name": "BaseBdev1", 00:17:40.677 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:40.677 "is_configured": false, 00:17:40.677 "data_offset": 0, 00:17:40.677 "data_size": 0 00:17:40.677 }, 00:17:40.677 { 00:17:40.677 "name": "BaseBdev2", 00:17:40.677 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:40.677 "is_configured": false, 00:17:40.677 "data_offset": 0, 00:17:40.677 "data_size": 0 00:17:40.677 }, 00:17:40.677 { 00:17:40.677 "name": "BaseBdev3", 00:17:40.677 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:40.677 "is_configured": false, 00:17:40.677 "data_offset": 0, 00:17:40.677 "data_size": 0 00:17:40.677 }, 00:17:40.677 { 00:17:40.677 "name": "BaseBdev4", 00:17:40.677 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:40.677 "is_configured": false, 00:17:40.677 "data_offset": 0, 00:17:40.677 "data_size": 0 00:17:40.677 } 00:17:40.677 ] 00:17:40.677 }' 00:17:40.677 21:40:01 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:40.677 21:40:01 -- common/autotest_common.sh@10 -- # set +x 00:17:41.243 21:40:01 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:17:41.243 [2024-12-06 21:40:01.634895] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:41.243 [2024-12-06 21:40:01.634939] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000006380 name Existed_Raid, state configuring 00:17:41.243 21:40:01 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:17:41.502 [2024-12-06 21:40:01.883017] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:41.502 [2024-12-06 21:40:01.883089] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:41.502 [2024-12-06 21:40:01.883103] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:41.502 [2024-12-06 21:40:01.883118] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:41.502 [2024-12-06 21:40:01.883127] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:17:41.502 [2024-12-06 21:40:01.883140] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:17:41.502 [2024-12-06 21:40:01.883148] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:17:41.502 [2024-12-06 21:40:01.883161] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:17:41.502 21:40:01 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:17:41.761 [2024-12-06 21:40:02.114152] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:41.761 BaseBdev1 00:17:41.761 21:40:02 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:17:41.761 21:40:02 -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:17:41.761 21:40:02 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:17:41.761 21:40:02 -- common/autotest_common.sh@899 -- # local i 00:17:41.761 21:40:02 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:17:41.761 21:40:02 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:17:41.761 21:40:02 -- common/autotest_common.sh@902 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:42.020 21:40:02 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:17:42.020 [ 00:17:42.020 { 00:17:42.020 "name": "BaseBdev1", 00:17:42.020 "aliases": [ 00:17:42.020 "7fafef62-c716-4100-ab62-ab3b0169515e" 00:17:42.020 ], 00:17:42.020 "product_name": "Malloc disk", 00:17:42.020 "block_size": 512, 00:17:42.020 "num_blocks": 65536, 00:17:42.020 "uuid": "7fafef62-c716-4100-ab62-ab3b0169515e", 00:17:42.020 "assigned_rate_limits": { 00:17:42.020 "rw_ios_per_sec": 0, 00:17:42.020 "rw_mbytes_per_sec": 0, 00:17:42.020 "r_mbytes_per_sec": 0, 00:17:42.020 "w_mbytes_per_sec": 0 00:17:42.020 }, 00:17:42.020 "claimed": true, 00:17:42.020 "claim_type": "exclusive_write", 00:17:42.020 "zoned": false, 00:17:42.020 "supported_io_types": { 00:17:42.020 "read": true, 00:17:42.020 "write": true, 00:17:42.020 "unmap": true, 00:17:42.020 "write_zeroes": true, 00:17:42.020 "flush": true, 00:17:42.020 "reset": true, 00:17:42.020 "compare": false, 00:17:42.020 "compare_and_write": false, 00:17:42.020 "abort": true, 00:17:42.020 "nvme_admin": false, 00:17:42.020 "nvme_io": false 00:17:42.020 }, 00:17:42.020 "memory_domains": [ 00:17:42.020 { 00:17:42.020 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:42.020 "dma_device_type": 2 00:17:42.020 } 00:17:42.020 ], 00:17:42.020 "driver_specific": {} 00:17:42.020 } 00:17:42.020 ] 00:17:42.021 21:40:02 -- common/autotest_common.sh@905 -- # return 0 00:17:42.021 21:40:02 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:17:42.021 21:40:02 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:42.021 21:40:02 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:17:42.021 21:40:02 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:17:42.021 21:40:02 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:17:42.021 21:40:02 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:17:42.021 21:40:02 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:42.021 21:40:02 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:42.021 21:40:02 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:42.021 21:40:02 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:42.279 21:40:02 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:42.279 21:40:02 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:42.279 21:40:02 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:42.279 "name": "Existed_Raid", 00:17:42.279 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:42.279 "strip_size_kb": 64, 00:17:42.279 "state": "configuring", 00:17:42.279 "raid_level": "concat", 00:17:42.279 "superblock": false, 00:17:42.279 "num_base_bdevs": 4, 00:17:42.279 "num_base_bdevs_discovered": 1, 00:17:42.279 "num_base_bdevs_operational": 4, 00:17:42.279 "base_bdevs_list": [ 00:17:42.279 { 00:17:42.279 "name": "BaseBdev1", 00:17:42.279 "uuid": "7fafef62-c716-4100-ab62-ab3b0169515e", 00:17:42.279 "is_configured": true, 00:17:42.279 "data_offset": 0, 00:17:42.279 "data_size": 65536 00:17:42.279 }, 00:17:42.279 { 00:17:42.279 "name": "BaseBdev2", 00:17:42.279 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:42.279 "is_configured": false, 00:17:42.279 "data_offset": 0, 00:17:42.279 "data_size": 0 00:17:42.279 }, 
00:17:42.279 { 00:17:42.279 "name": "BaseBdev3", 00:17:42.279 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:42.279 "is_configured": false, 00:17:42.279 "data_offset": 0, 00:17:42.279 "data_size": 0 00:17:42.279 }, 00:17:42.280 { 00:17:42.280 "name": "BaseBdev4", 00:17:42.280 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:42.280 "is_configured": false, 00:17:42.280 "data_offset": 0, 00:17:42.280 "data_size": 0 00:17:42.280 } 00:17:42.280 ] 00:17:42.280 }' 00:17:42.280 21:40:02 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:42.280 21:40:02 -- common/autotest_common.sh@10 -- # set +x 00:17:42.865 21:40:03 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:17:42.865 [2024-12-06 21:40:03.298534] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:42.865 [2024-12-06 21:40:03.298584] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000006680 name Existed_Raid, state configuring 00:17:42.865 21:40:03 -- bdev/bdev_raid.sh@244 -- # '[' false = true ']' 00:17:42.865 21:40:03 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:17:43.124 [2024-12-06 21:40:03.542607] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:43.124 [2024-12-06 21:40:03.544632] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:43.124 [2024-12-06 21:40:03.544941] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:43.124 [2024-12-06 21:40:03.544966] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:17:43.124 [2024-12-06 21:40:03.544983] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:17:43.124 [2024-12-06 21:40:03.544992] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:17:43.124 [2024-12-06 21:40:03.545007] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:17:43.124 21:40:03 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:17:43.124 21:40:03 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:17:43.124 21:40:03 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:17:43.124 21:40:03 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:43.124 21:40:03 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:17:43.124 21:40:03 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:17:43.124 21:40:03 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:17:43.124 21:40:03 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:17:43.124 21:40:03 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:43.124 21:40:03 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:43.124 21:40:03 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:43.124 21:40:03 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:43.124 21:40:03 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:43.124 21:40:03 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:43.382 21:40:03 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:43.382 "name": "Existed_Raid", 00:17:43.382 
"uuid": "00000000-0000-0000-0000-000000000000", 00:17:43.382 "strip_size_kb": 64, 00:17:43.382 "state": "configuring", 00:17:43.382 "raid_level": "concat", 00:17:43.382 "superblock": false, 00:17:43.382 "num_base_bdevs": 4, 00:17:43.382 "num_base_bdevs_discovered": 1, 00:17:43.382 "num_base_bdevs_operational": 4, 00:17:43.382 "base_bdevs_list": [ 00:17:43.382 { 00:17:43.382 "name": "BaseBdev1", 00:17:43.382 "uuid": "7fafef62-c716-4100-ab62-ab3b0169515e", 00:17:43.382 "is_configured": true, 00:17:43.382 "data_offset": 0, 00:17:43.382 "data_size": 65536 00:17:43.382 }, 00:17:43.382 { 00:17:43.382 "name": "BaseBdev2", 00:17:43.382 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:43.382 "is_configured": false, 00:17:43.382 "data_offset": 0, 00:17:43.382 "data_size": 0 00:17:43.382 }, 00:17:43.382 { 00:17:43.382 "name": "BaseBdev3", 00:17:43.382 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:43.382 "is_configured": false, 00:17:43.382 "data_offset": 0, 00:17:43.382 "data_size": 0 00:17:43.382 }, 00:17:43.382 { 00:17:43.382 "name": "BaseBdev4", 00:17:43.382 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:43.382 "is_configured": false, 00:17:43.382 "data_offset": 0, 00:17:43.382 "data_size": 0 00:17:43.382 } 00:17:43.382 ] 00:17:43.382 }' 00:17:43.382 21:40:03 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:43.382 21:40:03 -- common/autotest_common.sh@10 -- # set +x 00:17:43.641 21:40:04 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:17:43.900 [2024-12-06 21:40:04.285400] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:43.900 BaseBdev2 00:17:43.900 21:40:04 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:17:43.900 21:40:04 -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:17:43.900 21:40:04 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:17:43.900 21:40:04 -- common/autotest_common.sh@899 -- # local i 00:17:43.900 21:40:04 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:17:43.900 21:40:04 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:17:43.900 21:40:04 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:44.159 21:40:04 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:17:44.417 [ 00:17:44.417 { 00:17:44.417 "name": "BaseBdev2", 00:17:44.417 "aliases": [ 00:17:44.417 "ab587567-d2c2-4187-a922-75279fcc2657" 00:17:44.417 ], 00:17:44.417 "product_name": "Malloc disk", 00:17:44.417 "block_size": 512, 00:17:44.417 "num_blocks": 65536, 00:17:44.417 "uuid": "ab587567-d2c2-4187-a922-75279fcc2657", 00:17:44.417 "assigned_rate_limits": { 00:17:44.418 "rw_ios_per_sec": 0, 00:17:44.418 "rw_mbytes_per_sec": 0, 00:17:44.418 "r_mbytes_per_sec": 0, 00:17:44.418 "w_mbytes_per_sec": 0 00:17:44.418 }, 00:17:44.418 "claimed": true, 00:17:44.418 "claim_type": "exclusive_write", 00:17:44.418 "zoned": false, 00:17:44.418 "supported_io_types": { 00:17:44.418 "read": true, 00:17:44.418 "write": true, 00:17:44.418 "unmap": true, 00:17:44.418 "write_zeroes": true, 00:17:44.418 "flush": true, 00:17:44.418 "reset": true, 00:17:44.418 "compare": false, 00:17:44.418 "compare_and_write": false, 00:17:44.418 "abort": true, 00:17:44.418 "nvme_admin": false, 00:17:44.418 "nvme_io": false 00:17:44.418 }, 00:17:44.418 "memory_domains": [ 
00:17:44.418 { 00:17:44.418 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:44.418 "dma_device_type": 2 00:17:44.418 } 00:17:44.418 ], 00:17:44.418 "driver_specific": {} 00:17:44.418 } 00:17:44.418 ] 00:17:44.418 21:40:04 -- common/autotest_common.sh@905 -- # return 0 00:17:44.418 21:40:04 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:17:44.418 21:40:04 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:17:44.418 21:40:04 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:17:44.418 21:40:04 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:44.418 21:40:04 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:17:44.418 21:40:04 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:17:44.418 21:40:04 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:17:44.418 21:40:04 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:17:44.418 21:40:04 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:44.418 21:40:04 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:44.418 21:40:04 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:44.418 21:40:04 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:44.418 21:40:04 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:44.418 21:40:04 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:44.689 21:40:04 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:44.689 "name": "Existed_Raid", 00:17:44.689 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:44.689 "strip_size_kb": 64, 00:17:44.689 "state": "configuring", 00:17:44.689 "raid_level": "concat", 00:17:44.689 "superblock": false, 00:17:44.689 "num_base_bdevs": 4, 00:17:44.689 "num_base_bdevs_discovered": 2, 00:17:44.689 "num_base_bdevs_operational": 4, 00:17:44.689 "base_bdevs_list": [ 00:17:44.689 { 00:17:44.690 "name": "BaseBdev1", 00:17:44.690 "uuid": "7fafef62-c716-4100-ab62-ab3b0169515e", 00:17:44.690 "is_configured": true, 00:17:44.690 "data_offset": 0, 00:17:44.690 "data_size": 65536 00:17:44.690 }, 00:17:44.690 { 00:17:44.690 "name": "BaseBdev2", 00:17:44.690 "uuid": "ab587567-d2c2-4187-a922-75279fcc2657", 00:17:44.690 "is_configured": true, 00:17:44.690 "data_offset": 0, 00:17:44.690 "data_size": 65536 00:17:44.690 }, 00:17:44.690 { 00:17:44.690 "name": "BaseBdev3", 00:17:44.690 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:44.690 "is_configured": false, 00:17:44.690 "data_offset": 0, 00:17:44.690 "data_size": 0 00:17:44.690 }, 00:17:44.690 { 00:17:44.690 "name": "BaseBdev4", 00:17:44.690 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:44.690 "is_configured": false, 00:17:44.690 "data_offset": 0, 00:17:44.690 "data_size": 0 00:17:44.690 } 00:17:44.690 ] 00:17:44.690 }' 00:17:44.690 21:40:04 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:44.690 21:40:04 -- common/autotest_common.sh@10 -- # set +x 00:17:44.955 21:40:05 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:17:45.213 [2024-12-06 21:40:05.464361] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:45.213 BaseBdev3 00:17:45.213 21:40:05 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3 00:17:45.213 21:40:05 -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:17:45.213 21:40:05 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:17:45.213 
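Each waitforbdev call in this trace reduces to two RPCs: bdev_wait_for_examine to flush pending examine callbacks, then bdev_get_bdevs with a timeout to confirm the new bdev registered. A reduced sketch of that helper, with the 2000 ms timeout taken from the trace and all error handling omitted:

waitforbdev() {
    local name=$1 rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    # flush examine callbacks so claims (e.g. by passthru or raid) settle first
    $rpc -s /var/tmp/spdk-raid.sock bdev_wait_for_examine
    # -t 2000: wait up to 2000 ms for the named bdev to appear
    $rpc -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b "$name" -t 2000 > /dev/null
}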
21:40:05 -- common/autotest_common.sh@899 -- # local i 00:17:45.213 21:40:05 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:17:45.213 21:40:05 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:17:45.213 21:40:05 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:45.470 21:40:05 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:17:45.470 [ 00:17:45.470 { 00:17:45.470 "name": "BaseBdev3", 00:17:45.470 "aliases": [ 00:17:45.470 "71f86652-5c61-4d68-b5c2-5034a7985221" 00:17:45.470 ], 00:17:45.470 "product_name": "Malloc disk", 00:17:45.470 "block_size": 512, 00:17:45.470 "num_blocks": 65536, 00:17:45.470 "uuid": "71f86652-5c61-4d68-b5c2-5034a7985221", 00:17:45.470 "assigned_rate_limits": { 00:17:45.470 "rw_ios_per_sec": 0, 00:17:45.470 "rw_mbytes_per_sec": 0, 00:17:45.470 "r_mbytes_per_sec": 0, 00:17:45.470 "w_mbytes_per_sec": 0 00:17:45.470 }, 00:17:45.470 "claimed": true, 00:17:45.470 "claim_type": "exclusive_write", 00:17:45.470 "zoned": false, 00:17:45.470 "supported_io_types": { 00:17:45.470 "read": true, 00:17:45.470 "write": true, 00:17:45.470 "unmap": true, 00:17:45.470 "write_zeroes": true, 00:17:45.470 "flush": true, 00:17:45.470 "reset": true, 00:17:45.470 "compare": false, 00:17:45.470 "compare_and_write": false, 00:17:45.470 "abort": true, 00:17:45.471 "nvme_admin": false, 00:17:45.471 "nvme_io": false 00:17:45.471 }, 00:17:45.471 "memory_domains": [ 00:17:45.471 { 00:17:45.471 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:45.471 "dma_device_type": 2 00:17:45.471 } 00:17:45.471 ], 00:17:45.471 "driver_specific": {} 00:17:45.471 } 00:17:45.471 ] 00:17:45.471 21:40:05 -- common/autotest_common.sh@905 -- # return 0 00:17:45.471 21:40:05 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:17:45.471 21:40:05 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:17:45.471 21:40:05 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:17:45.471 21:40:05 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:45.471 21:40:05 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:17:45.471 21:40:05 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:17:45.471 21:40:05 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:17:45.471 21:40:05 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:17:45.471 21:40:05 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:45.471 21:40:05 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:45.471 21:40:05 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:45.471 21:40:05 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:45.471 21:40:05 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:45.471 21:40:05 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:45.728 21:40:06 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:45.728 "name": "Existed_Raid", 00:17:45.728 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:45.728 "strip_size_kb": 64, 00:17:45.728 "state": "configuring", 00:17:45.728 "raid_level": "concat", 00:17:45.728 "superblock": false, 00:17:45.728 "num_base_bdevs": 4, 00:17:45.728 "num_base_bdevs_discovered": 3, 00:17:45.728 "num_base_bdevs_operational": 4, 00:17:45.728 "base_bdevs_list": [ 00:17:45.728 { 00:17:45.728 "name": 
"BaseBdev1", 00:17:45.728 "uuid": "7fafef62-c716-4100-ab62-ab3b0169515e", 00:17:45.728 "is_configured": true, 00:17:45.728 "data_offset": 0, 00:17:45.728 "data_size": 65536 00:17:45.728 }, 00:17:45.728 { 00:17:45.728 "name": "BaseBdev2", 00:17:45.728 "uuid": "ab587567-d2c2-4187-a922-75279fcc2657", 00:17:45.728 "is_configured": true, 00:17:45.728 "data_offset": 0, 00:17:45.728 "data_size": 65536 00:17:45.728 }, 00:17:45.728 { 00:17:45.728 "name": "BaseBdev3", 00:17:45.728 "uuid": "71f86652-5c61-4d68-b5c2-5034a7985221", 00:17:45.728 "is_configured": true, 00:17:45.728 "data_offset": 0, 00:17:45.728 "data_size": 65536 00:17:45.728 }, 00:17:45.728 { 00:17:45.728 "name": "BaseBdev4", 00:17:45.728 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:45.728 "is_configured": false, 00:17:45.728 "data_offset": 0, 00:17:45.728 "data_size": 0 00:17:45.728 } 00:17:45.728 ] 00:17:45.728 }' 00:17:45.728 21:40:06 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:45.728 21:40:06 -- common/autotest_common.sh@10 -- # set +x 00:17:45.985 21:40:06 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:17:46.244 [2024-12-06 21:40:06.675201] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:17:46.244 [2024-12-06 21:40:06.675576] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x516000006f80 00:17:46.244 [2024-12-06 21:40:06.675758] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:17:46.244 [2024-12-06 21:40:06.675965] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000005790 00:17:46.244 [2024-12-06 21:40:06.676348] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x516000006f80 00:17:46.244 [2024-12-06 21:40:06.676537] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x516000006f80 00:17:46.244 [2024-12-06 21:40:06.676994] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:46.244 BaseBdev4 00:17:46.244 21:40:06 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev4 00:17:46.244 21:40:06 -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev4 00:17:46.244 21:40:06 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:17:46.244 21:40:06 -- common/autotest_common.sh@899 -- # local i 00:17:46.244 21:40:06 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:17:46.244 21:40:06 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:17:46.244 21:40:06 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:46.503 21:40:06 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:17:46.761 [ 00:17:46.761 { 00:17:46.761 "name": "BaseBdev4", 00:17:46.761 "aliases": [ 00:17:46.761 "f009daf2-bc63-4661-b065-1e596ddb9b10" 00:17:46.761 ], 00:17:46.761 "product_name": "Malloc disk", 00:17:46.761 "block_size": 512, 00:17:46.761 "num_blocks": 65536, 00:17:46.761 "uuid": "f009daf2-bc63-4661-b065-1e596ddb9b10", 00:17:46.761 "assigned_rate_limits": { 00:17:46.761 "rw_ios_per_sec": 0, 00:17:46.761 "rw_mbytes_per_sec": 0, 00:17:46.761 "r_mbytes_per_sec": 0, 00:17:46.761 "w_mbytes_per_sec": 0 00:17:46.761 }, 00:17:46.761 "claimed": true, 00:17:46.761 "claim_type": "exclusive_write", 00:17:46.761 "zoned": false, 00:17:46.761 
"supported_io_types": { 00:17:46.761 "read": true, 00:17:46.761 "write": true, 00:17:46.761 "unmap": true, 00:17:46.761 "write_zeroes": true, 00:17:46.761 "flush": true, 00:17:46.761 "reset": true, 00:17:46.761 "compare": false, 00:17:46.761 "compare_and_write": false, 00:17:46.761 "abort": true, 00:17:46.761 "nvme_admin": false, 00:17:46.761 "nvme_io": false 00:17:46.761 }, 00:17:46.761 "memory_domains": [ 00:17:46.761 { 00:17:46.761 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:46.761 "dma_device_type": 2 00:17:46.761 } 00:17:46.761 ], 00:17:46.761 "driver_specific": {} 00:17:46.761 } 00:17:46.761 ] 00:17:46.761 21:40:07 -- common/autotest_common.sh@905 -- # return 0 00:17:46.761 21:40:07 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:17:46.761 21:40:07 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:17:46.761 21:40:07 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:17:46.761 21:40:07 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:46.761 21:40:07 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:17:46.761 21:40:07 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:17:46.761 21:40:07 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:17:46.761 21:40:07 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:17:46.761 21:40:07 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:46.761 21:40:07 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:46.761 21:40:07 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:46.761 21:40:07 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:46.761 21:40:07 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:46.761 21:40:07 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:47.019 21:40:07 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:47.019 "name": "Existed_Raid", 00:17:47.019 "uuid": "55cb7234-e245-43f3-be9a-bf9dd9035821", 00:17:47.019 "strip_size_kb": 64, 00:17:47.019 "state": "online", 00:17:47.019 "raid_level": "concat", 00:17:47.019 "superblock": false, 00:17:47.019 "num_base_bdevs": 4, 00:17:47.019 "num_base_bdevs_discovered": 4, 00:17:47.019 "num_base_bdevs_operational": 4, 00:17:47.019 "base_bdevs_list": [ 00:17:47.019 { 00:17:47.019 "name": "BaseBdev1", 00:17:47.019 "uuid": "7fafef62-c716-4100-ab62-ab3b0169515e", 00:17:47.019 "is_configured": true, 00:17:47.019 "data_offset": 0, 00:17:47.019 "data_size": 65536 00:17:47.019 }, 00:17:47.019 { 00:17:47.019 "name": "BaseBdev2", 00:17:47.019 "uuid": "ab587567-d2c2-4187-a922-75279fcc2657", 00:17:47.019 "is_configured": true, 00:17:47.019 "data_offset": 0, 00:17:47.019 "data_size": 65536 00:17:47.019 }, 00:17:47.019 { 00:17:47.019 "name": "BaseBdev3", 00:17:47.019 "uuid": "71f86652-5c61-4d68-b5c2-5034a7985221", 00:17:47.019 "is_configured": true, 00:17:47.019 "data_offset": 0, 00:17:47.019 "data_size": 65536 00:17:47.019 }, 00:17:47.019 { 00:17:47.019 "name": "BaseBdev4", 00:17:47.019 "uuid": "f009daf2-bc63-4661-b065-1e596ddb9b10", 00:17:47.020 "is_configured": true, 00:17:47.020 "data_offset": 0, 00:17:47.020 "data_size": 65536 00:17:47.020 } 00:17:47.020 ] 00:17:47.020 }' 00:17:47.020 21:40:07 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:47.020 21:40:07 -- common/autotest_common.sh@10 -- # set +x 00:17:47.277 21:40:07 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 
00:17:47.536 [2024-12-06 21:40:07.851653] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:47.536 [2024-12-06 21:40:07.851871] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:47.536 [2024-12-06 21:40:07.852038] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:47.536 21:40:07 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:17:47.536 21:40:07 -- bdev/bdev_raid.sh@264 -- # has_redundancy concat 00:17:47.536 21:40:07 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:17:47.536 21:40:07 -- bdev/bdev_raid.sh@197 -- # return 1 00:17:47.536 21:40:07 -- bdev/bdev_raid.sh@265 -- # expected_state=offline 00:17:47.536 21:40:07 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid offline concat 64 3 00:17:47.536 21:40:07 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:47.536 21:40:07 -- bdev/bdev_raid.sh@118 -- # local expected_state=offline 00:17:47.536 21:40:07 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:17:47.536 21:40:07 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:17:47.536 21:40:07 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:17:47.536 21:40:07 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:47.536 21:40:07 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:47.536 21:40:07 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:47.536 21:40:07 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:47.536 21:40:07 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:47.536 21:40:07 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:47.794 21:40:08 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:47.794 "name": "Existed_Raid", 00:17:47.794 "uuid": "55cb7234-e245-43f3-be9a-bf9dd9035821", 00:17:47.794 "strip_size_kb": 64, 00:17:47.794 "state": "offline", 00:17:47.794 "raid_level": "concat", 00:17:47.794 "superblock": false, 00:17:47.794 "num_base_bdevs": 4, 00:17:47.794 "num_base_bdevs_discovered": 3, 00:17:47.794 "num_base_bdevs_operational": 3, 00:17:47.794 "base_bdevs_list": [ 00:17:47.794 { 00:17:47.794 "name": null, 00:17:47.794 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:47.794 "is_configured": false, 00:17:47.794 "data_offset": 0, 00:17:47.794 "data_size": 65536 00:17:47.794 }, 00:17:47.794 { 00:17:47.794 "name": "BaseBdev2", 00:17:47.794 "uuid": "ab587567-d2c2-4187-a922-75279fcc2657", 00:17:47.794 "is_configured": true, 00:17:47.794 "data_offset": 0, 00:17:47.794 "data_size": 65536 00:17:47.794 }, 00:17:47.794 { 00:17:47.794 "name": "BaseBdev3", 00:17:47.794 "uuid": "71f86652-5c61-4d68-b5c2-5034a7985221", 00:17:47.794 "is_configured": true, 00:17:47.794 "data_offset": 0, 00:17:47.794 "data_size": 65536 00:17:47.794 }, 00:17:47.794 { 00:17:47.794 "name": "BaseBdev4", 00:17:47.794 "uuid": "f009daf2-bc63-4661-b065-1e596ddb9b10", 00:17:47.794 "is_configured": true, 00:17:47.794 "data_offset": 0, 00:17:47.794 "data_size": 65536 00:17:47.795 } 00:17:47.795 ] 00:17:47.795 }' 00:17:47.795 21:40:08 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:47.795 21:40:08 -- common/autotest_common.sh@10 -- # set +x 00:17:48.053 21:40:08 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:17:48.053 21:40:08 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:17:48.053 21:40:08 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs 
all 00:17:48.053 21:40:08 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:17:48.312 21:40:08 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:17:48.312 21:40:08 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:17:48.312 21:40:08 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:17:48.570 [2024-12-06 21:40:08.881584] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:17:48.570 21:40:08 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:17:48.570 21:40:08 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:17:48.570 21:40:08 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:48.570 21:40:08 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:17:48.828 21:40:09 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:17:48.828 21:40:09 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:17:48.828 21:40:09 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:17:49.087 [2024-12-06 21:40:09.456140] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:17:49.087 21:40:09 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:17:49.087 21:40:09 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:17:49.087 21:40:09 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:49.087 21:40:09 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:17:49.346 21:40:09 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:17:49.346 21:40:09 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:17:49.346 21:40:09 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:17:49.609 [2024-12-06 21:40:09.986961] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:17:49.609 [2024-12-06 21:40:09.987026] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000006f80 name Existed_Raid, state offline 00:17:49.609 21:40:10 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:17:49.609 21:40:10 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:17:49.609 21:40:10 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:49.609 21:40:10 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:17:49.873 21:40:10 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:17:49.873 21:40:10 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:17:49.873 21:40:10 -- bdev/bdev_raid.sh@287 -- # killprocess 75484 00:17:49.873 21:40:10 -- common/autotest_common.sh@936 -- # '[' -z 75484 ']' 00:17:49.873 21:40:10 -- common/autotest_common.sh@940 -- # kill -0 75484 00:17:49.873 21:40:10 -- common/autotest_common.sh@941 -- # uname 00:17:49.873 21:40:10 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:49.873 21:40:10 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 75484 00:17:49.873 killing process with pid 75484 00:17:49.873 21:40:10 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:17:49.873 21:40:10 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:17:49.873 21:40:10 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 75484' 00:17:49.873 21:40:10 -- common/autotest_common.sh@955 -- # 
kill 75484 00:17:49.873 21:40:10 -- common/autotest_common.sh@960 -- # wait 75484 00:17:49.873 [2024-12-06 21:40:10.322727] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:49.873 [2024-12-06 21:40:10.322846] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:51.250 ************************************ 00:17:51.250 END TEST raid_state_function_test 00:17:51.250 ************************************ 00:17:51.250 21:40:11 -- bdev/bdev_raid.sh@289 -- # return 0 00:17:51.250 00:17:51.250 real 0m11.664s 00:17:51.250 user 0m19.527s 00:17:51.250 sys 0m1.727s 00:17:51.250 21:40:11 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:17:51.250 21:40:11 -- common/autotest_common.sh@10 -- # set +x 00:17:51.250 21:40:11 -- bdev/bdev_raid.sh@728 -- # run_test raid_state_function_test_sb raid_state_function_test concat 4 true 00:17:51.250 21:40:11 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:17:51.250 21:40:11 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:17:51.250 21:40:11 -- common/autotest_common.sh@10 -- # set +x 00:17:51.250 ************************************ 00:17:51.250 START TEST raid_state_function_test_sb 00:17:51.250 ************************************ 00:17:51.250 21:40:11 -- common/autotest_common.sh@1114 -- # raid_state_function_test concat 4 true 00:17:51.250 21:40:11 -- bdev/bdev_raid.sh@202 -- # local raid_level=concat 00:17:51.250 21:40:11 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=4 00:17:51.250 21:40:11 -- bdev/bdev_raid.sh@204 -- # local superblock=true 00:17:51.250 21:40:11 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:17:51.250 21:40:11 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:17:51.250 21:40:11 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:17:51.250 21:40:11 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev1 00:17:51.250 21:40:11 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:17:51.250 21:40:11 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:17:51.250 21:40:11 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev2 00:17:51.250 21:40:11 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:17:51.250 21:40:11 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:17:51.250 21:40:11 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev3 00:17:51.250 21:40:11 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:17:51.250 21:40:11 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:17:51.250 21:40:11 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev4 00:17:51.250 21:40:11 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:17:51.250 21:40:11 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:17:51.250 21:40:11 -- bdev/bdev_raid.sh@206 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:17:51.250 21:40:11 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:17:51.250 21:40:11 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:17:51.250 21:40:11 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:17:51.250 21:40:11 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:17:51.250 21:40:11 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:17:51.250 21:40:11 -- bdev/bdev_raid.sh@212 -- # '[' concat '!=' raid1 ']' 00:17:51.250 21:40:11 -- bdev/bdev_raid.sh@213 -- # strip_size=64 00:17:51.250 21:40:11 -- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64' 00:17:51.250 21:40:11 -- bdev/bdev_raid.sh@219 -- # '[' true = true ']' 00:17:51.250 21:40:11 -- bdev/bdev_raid.sh@220 -- # superblock_create_arg=-s 00:17:51.250 Process raid pid: 75882 00:17:51.250 21:40:11 -- bdev/bdev_raid.sh@226 
-- # raid_pid=75882 00:17:51.250 21:40:11 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 75882' 00:17:51.250 21:40:11 -- bdev/bdev_raid.sh@228 -- # waitforlisten 75882 /var/tmp/spdk-raid.sock 00:17:51.250 21:40:11 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:17:51.250 21:40:11 -- common/autotest_common.sh@829 -- # '[' -z 75882 ']' 00:17:51.250 21:40:11 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:17:51.250 21:40:11 -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:51.250 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:17:51.250 21:40:11 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:17:51.250 21:40:11 -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:51.250 21:40:11 -- common/autotest_common.sh@10 -- # set +x 00:17:51.250 [2024-12-06 21:40:11.489497] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:17:51.250 [2024-12-06 21:40:11.489646] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:51.250 [2024-12-06 21:40:11.660185] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:51.509 [2024-12-06 21:40:11.826747] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:51.509 [2024-12-06 21:40:11.992563] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:52.076 21:40:12 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:52.076 21:40:12 -- common/autotest_common.sh@862 -- # return 0 00:17:52.076 21:40:12 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:17:52.335 [2024-12-06 21:40:12.655883] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:52.335 [2024-12-06 21:40:12.655952] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:52.335 [2024-12-06 21:40:12.655967] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:52.335 [2024-12-06 21:40:12.655980] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:52.335 [2024-12-06 21:40:12.655988] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:17:52.335 [2024-12-06 21:40:12.656000] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:17:52.335 [2024-12-06 21:40:12.656008] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:17:52.336 [2024-12-06 21:40:12.656020] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:17:52.336 21:40:12 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:17:52.336 21:40:12 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:52.336 21:40:12 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:17:52.336 21:40:12 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:17:52.336 21:40:12 -- bdev/bdev_raid.sh@120 -- # 
local strip_size=64 00:17:52.336 21:40:12 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:17:52.336 21:40:12 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:52.336 21:40:12 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:52.336 21:40:12 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:52.336 21:40:12 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:52.336 21:40:12 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:52.336 21:40:12 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:52.594 21:40:12 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:52.594 "name": "Existed_Raid", 00:17:52.594 "uuid": "ea5413b0-0b3d-42b5-b8a6-383ffc79c616", 00:17:52.594 "strip_size_kb": 64, 00:17:52.594 "state": "configuring", 00:17:52.594 "raid_level": "concat", 00:17:52.594 "superblock": true, 00:17:52.594 "num_base_bdevs": 4, 00:17:52.594 "num_base_bdevs_discovered": 0, 00:17:52.594 "num_base_bdevs_operational": 4, 00:17:52.594 "base_bdevs_list": [ 00:17:52.594 { 00:17:52.594 "name": "BaseBdev1", 00:17:52.594 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:52.594 "is_configured": false, 00:17:52.594 "data_offset": 0, 00:17:52.594 "data_size": 0 00:17:52.594 }, 00:17:52.594 { 00:17:52.594 "name": "BaseBdev2", 00:17:52.594 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:52.594 "is_configured": false, 00:17:52.594 "data_offset": 0, 00:17:52.594 "data_size": 0 00:17:52.594 }, 00:17:52.594 { 00:17:52.594 "name": "BaseBdev3", 00:17:52.594 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:52.594 "is_configured": false, 00:17:52.594 "data_offset": 0, 00:17:52.594 "data_size": 0 00:17:52.594 }, 00:17:52.594 { 00:17:52.594 "name": "BaseBdev4", 00:17:52.594 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:52.594 "is_configured": false, 00:17:52.594 "data_offset": 0, 00:17:52.594 "data_size": 0 00:17:52.594 } 00:17:52.594 ] 00:17:52.594 }' 00:17:52.594 21:40:12 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:52.594 21:40:12 -- common/autotest_common.sh@10 -- # set +x 00:17:52.853 21:40:13 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:17:53.112 [2024-12-06 21:40:13.403914] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:53.112 [2024-12-06 21:40:13.403958] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000006380 name Existed_Raid, state configuring 00:17:53.112 21:40:13 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:17:53.372 [2024-12-06 21:40:13.612158] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:53.372 [2024-12-06 21:40:13.612243] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:53.372 [2024-12-06 21:40:13.612257] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:53.372 [2024-12-06 21:40:13.612272] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:53.372 [2024-12-06 21:40:13.612281] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:17:53.372 [2024-12-06 21:40:13.612293] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: 
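The JSON above is what verify_raid_bdev_state compares against: the array exists in "configuring" state with zero of four base bdevs discovered. A minimal sketch of that check, assuming a bdev_svc app is already listening on /var/tmp/spdk-raid.sock as in the trace:

    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    # Pull the one raid bdev we care about out of the full list, as the trace does.
    raid_bdev_info=$($rpc bdev_raid_get_bdevs all |
        jq -r '.[] | select(.name == "Existed_Raid")')
    state=$(jq -r '.state' <<< "$raid_bdev_info")
    discovered=$(jq -r '.num_base_bdevs_discovered' <<< "$raid_bdev_info")
    # Assert the fields the test checks at this point in the log.
    [[ $state == configuring && $discovered -eq 0 ]] || exit 1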
*DEBUG*: base bdev BaseBdev3 doesn't exist now 00:17:53.372 [2024-12-06 21:40:13.612302] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:17:53.372 [2024-12-06 21:40:13.612329] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:17:53.372 21:40:13 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:17:53.631 [2024-12-06 21:40:13.890169] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:53.631 BaseBdev1 00:17:53.631 21:40:13 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:17:53.631 21:40:13 -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:17:53.631 21:40:13 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:17:53.631 21:40:13 -- common/autotest_common.sh@899 -- # local i 00:17:53.631 21:40:13 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:17:53.631 21:40:13 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:17:53.631 21:40:13 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:53.631 21:40:14 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:17:53.891 [ 00:17:53.891 { 00:17:53.891 "name": "BaseBdev1", 00:17:53.891 "aliases": [ 00:17:53.891 "eb08c88e-981b-4120-ab80-478f957966c1" 00:17:53.891 ], 00:17:53.891 "product_name": "Malloc disk", 00:17:53.891 "block_size": 512, 00:17:53.891 "num_blocks": 65536, 00:17:53.891 "uuid": "eb08c88e-981b-4120-ab80-478f957966c1", 00:17:53.891 "assigned_rate_limits": { 00:17:53.891 "rw_ios_per_sec": 0, 00:17:53.891 "rw_mbytes_per_sec": 0, 00:17:53.891 "r_mbytes_per_sec": 0, 00:17:53.891 "w_mbytes_per_sec": 0 00:17:53.891 }, 00:17:53.891 "claimed": true, 00:17:53.891 "claim_type": "exclusive_write", 00:17:53.891 "zoned": false, 00:17:53.891 "supported_io_types": { 00:17:53.891 "read": true, 00:17:53.891 "write": true, 00:17:53.891 "unmap": true, 00:17:53.891 "write_zeroes": true, 00:17:53.891 "flush": true, 00:17:53.891 "reset": true, 00:17:53.891 "compare": false, 00:17:53.891 "compare_and_write": false, 00:17:53.891 "abort": true, 00:17:53.891 "nvme_admin": false, 00:17:53.891 "nvme_io": false 00:17:53.891 }, 00:17:53.891 "memory_domains": [ 00:17:53.891 { 00:17:53.891 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:53.891 "dma_device_type": 2 00:17:53.891 } 00:17:53.891 ], 00:17:53.891 "driver_specific": {} 00:17:53.891 } 00:17:53.891 ] 00:17:53.891 21:40:14 -- common/autotest_common.sh@905 -- # return 0 00:17:53.891 21:40:14 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:17:53.891 21:40:14 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:53.891 21:40:14 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:17:53.891 21:40:14 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:17:53.891 21:40:14 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:17:53.891 21:40:14 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:17:53.891 21:40:14 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:53.891 21:40:14 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:53.891 21:40:14 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:53.891 21:40:14 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:53.891 21:40:14 
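The waitforbdev trace above (local bdev_name, the 2000 ms default timeout, bdev_wait_for_examine, then bdev_get_bdevs with -t) suggests a helper along these lines. A hedged reconstruction of autotest_common.sh's waitforbdev; the real helper may handle more cases:

    waitforbdev() {
        local bdev_name=$1
        local bdev_timeout=${2:-2000}   # default of 2000 ms, as in the trace
        local rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
        # Let pending examine callbacks settle, then poll for the bdev;
        # -t tells the RPC to wait up to bdev_timeout ms for it to appear.
        $rpc bdev_wait_for_examine
        $rpc bdev_get_bdevs -b "$bdev_name" -t "$bdev_timeout"
    }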
-- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:53.891 21:40:14 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:54.150 21:40:14 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:54.150 "name": "Existed_Raid", 00:17:54.150 "uuid": "05196b45-c941-4547-b190-4733b25fbb3e", 00:17:54.150 "strip_size_kb": 64, 00:17:54.150 "state": "configuring", 00:17:54.150 "raid_level": "concat", 00:17:54.150 "superblock": true, 00:17:54.150 "num_base_bdevs": 4, 00:17:54.150 "num_base_bdevs_discovered": 1, 00:17:54.150 "num_base_bdevs_operational": 4, 00:17:54.150 "base_bdevs_list": [ 00:17:54.150 { 00:17:54.150 "name": "BaseBdev1", 00:17:54.150 "uuid": "eb08c88e-981b-4120-ab80-478f957966c1", 00:17:54.150 "is_configured": true, 00:17:54.150 "data_offset": 2048, 00:17:54.150 "data_size": 63488 00:17:54.150 }, 00:17:54.150 { 00:17:54.150 "name": "BaseBdev2", 00:17:54.150 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:54.150 "is_configured": false, 00:17:54.150 "data_offset": 0, 00:17:54.150 "data_size": 0 00:17:54.150 }, 00:17:54.150 { 00:17:54.150 "name": "BaseBdev3", 00:17:54.150 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:54.150 "is_configured": false, 00:17:54.150 "data_offset": 0, 00:17:54.150 "data_size": 0 00:17:54.150 }, 00:17:54.150 { 00:17:54.150 "name": "BaseBdev4", 00:17:54.150 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:54.150 "is_configured": false, 00:17:54.150 "data_offset": 0, 00:17:54.150 "data_size": 0 00:17:54.150 } 00:17:54.150 ] 00:17:54.150 }' 00:17:54.150 21:40:14 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:54.150 21:40:14 -- common/autotest_common.sh@10 -- # set +x 00:17:54.408 21:40:14 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:17:54.666 [2024-12-06 21:40:15.098506] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:54.666 [2024-12-06 21:40:15.098558] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000006680 name Existed_Raid, state configuring 00:17:54.666 21:40:15 -- bdev/bdev_raid.sh@244 -- # '[' true = true ']' 00:17:54.666 21:40:15 -- bdev/bdev_raid.sh@246 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:17:54.924 21:40:15 -- bdev/bdev_raid.sh@247 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:17:55.182 BaseBdev1 00:17:55.182 21:40:15 -- bdev/bdev_raid.sh@248 -- # waitforbdev BaseBdev1 00:17:55.182 21:40:15 -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:17:55.182 21:40:15 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:17:55.182 21:40:15 -- common/autotest_common.sh@899 -- # local i 00:17:55.182 21:40:15 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:17:55.182 21:40:15 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:17:55.182 21:40:15 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:55.439 21:40:15 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:17:55.697 [ 00:17:55.697 { 00:17:55.697 "name": "BaseBdev1", 00:17:55.697 "aliases": [ 00:17:55.697 "705bf8f0-389d-46b6-898c-db88af0f4d65" 00:17:55.697 ], 00:17:55.697 "product_name": 
"Malloc disk", 00:17:55.697 "block_size": 512, 00:17:55.697 "num_blocks": 65536, 00:17:55.697 "uuid": "705bf8f0-389d-46b6-898c-db88af0f4d65", 00:17:55.697 "assigned_rate_limits": { 00:17:55.697 "rw_ios_per_sec": 0, 00:17:55.697 "rw_mbytes_per_sec": 0, 00:17:55.697 "r_mbytes_per_sec": 0, 00:17:55.697 "w_mbytes_per_sec": 0 00:17:55.697 }, 00:17:55.697 "claimed": false, 00:17:55.697 "zoned": false, 00:17:55.697 "supported_io_types": { 00:17:55.697 "read": true, 00:17:55.697 "write": true, 00:17:55.697 "unmap": true, 00:17:55.697 "write_zeroes": true, 00:17:55.697 "flush": true, 00:17:55.697 "reset": true, 00:17:55.697 "compare": false, 00:17:55.697 "compare_and_write": false, 00:17:55.697 "abort": true, 00:17:55.697 "nvme_admin": false, 00:17:55.697 "nvme_io": false 00:17:55.697 }, 00:17:55.697 "memory_domains": [ 00:17:55.697 { 00:17:55.697 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:55.697 "dma_device_type": 2 00:17:55.697 } 00:17:55.697 ], 00:17:55.697 "driver_specific": {} 00:17:55.697 } 00:17:55.697 ] 00:17:55.697 21:40:16 -- common/autotest_common.sh@905 -- # return 0 00:17:55.697 21:40:16 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:17:55.955 [2024-12-06 21:40:16.265301] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:55.955 [2024-12-06 21:40:16.267229] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:55.955 [2024-12-06 21:40:16.267294] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:55.955 [2024-12-06 21:40:16.267308] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:17:55.955 [2024-12-06 21:40:16.267322] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:17:55.955 [2024-12-06 21:40:16.267331] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:17:55.955 [2024-12-06 21:40:16.267344] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:17:55.955 21:40:16 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:17:55.955 21:40:16 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:17:55.955 21:40:16 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:17:55.955 21:40:16 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:55.955 21:40:16 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:17:55.955 21:40:16 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:17:55.955 21:40:16 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:17:55.955 21:40:16 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:17:55.955 21:40:16 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:55.955 21:40:16 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:55.955 21:40:16 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:55.955 21:40:16 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:55.955 21:40:16 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:55.955 21:40:16 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:56.214 21:40:16 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:56.214 "name": "Existed_Raid", 00:17:56.214 "uuid": 
"cb776c3b-2a51-46a8-89ac-eaf96dc1b405", 00:17:56.214 "strip_size_kb": 64, 00:17:56.214 "state": "configuring", 00:17:56.214 "raid_level": "concat", 00:17:56.214 "superblock": true, 00:17:56.214 "num_base_bdevs": 4, 00:17:56.214 "num_base_bdevs_discovered": 1, 00:17:56.214 "num_base_bdevs_operational": 4, 00:17:56.214 "base_bdevs_list": [ 00:17:56.214 { 00:17:56.214 "name": "BaseBdev1", 00:17:56.214 "uuid": "705bf8f0-389d-46b6-898c-db88af0f4d65", 00:17:56.214 "is_configured": true, 00:17:56.214 "data_offset": 2048, 00:17:56.214 "data_size": 63488 00:17:56.214 }, 00:17:56.214 { 00:17:56.214 "name": "BaseBdev2", 00:17:56.214 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:56.214 "is_configured": false, 00:17:56.214 "data_offset": 0, 00:17:56.214 "data_size": 0 00:17:56.214 }, 00:17:56.214 { 00:17:56.214 "name": "BaseBdev3", 00:17:56.214 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:56.214 "is_configured": false, 00:17:56.214 "data_offset": 0, 00:17:56.214 "data_size": 0 00:17:56.214 }, 00:17:56.214 { 00:17:56.214 "name": "BaseBdev4", 00:17:56.214 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:56.214 "is_configured": false, 00:17:56.214 "data_offset": 0, 00:17:56.214 "data_size": 0 00:17:56.214 } 00:17:56.214 ] 00:17:56.214 }' 00:17:56.214 21:40:16 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:56.214 21:40:16 -- common/autotest_common.sh@10 -- # set +x 00:17:56.473 21:40:16 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:17:56.732 [2024-12-06 21:40:17.086269] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:56.732 BaseBdev2 00:17:56.732 21:40:17 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:17:56.732 21:40:17 -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:17:56.732 21:40:17 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:17:56.732 21:40:17 -- common/autotest_common.sh@899 -- # local i 00:17:56.732 21:40:17 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:17:56.732 21:40:17 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:17:56.732 21:40:17 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:56.991 21:40:17 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:17:57.250 [ 00:17:57.250 { 00:17:57.250 "name": "BaseBdev2", 00:17:57.250 "aliases": [ 00:17:57.250 "dd0fa70b-2c9c-45b1-8f6a-c3a9729a0453" 00:17:57.250 ], 00:17:57.250 "product_name": "Malloc disk", 00:17:57.250 "block_size": 512, 00:17:57.250 "num_blocks": 65536, 00:17:57.250 "uuid": "dd0fa70b-2c9c-45b1-8f6a-c3a9729a0453", 00:17:57.250 "assigned_rate_limits": { 00:17:57.250 "rw_ios_per_sec": 0, 00:17:57.250 "rw_mbytes_per_sec": 0, 00:17:57.250 "r_mbytes_per_sec": 0, 00:17:57.250 "w_mbytes_per_sec": 0 00:17:57.250 }, 00:17:57.250 "claimed": true, 00:17:57.250 "claim_type": "exclusive_write", 00:17:57.250 "zoned": false, 00:17:57.250 "supported_io_types": { 00:17:57.250 "read": true, 00:17:57.250 "write": true, 00:17:57.250 "unmap": true, 00:17:57.250 "write_zeroes": true, 00:17:57.250 "flush": true, 00:17:57.250 "reset": true, 00:17:57.250 "compare": false, 00:17:57.250 "compare_and_write": false, 00:17:57.250 "abort": true, 00:17:57.250 "nvme_admin": false, 00:17:57.250 "nvme_io": false 00:17:57.250 }, 00:17:57.250 "memory_domains": [ 00:17:57.250 { 
00:17:57.250 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:57.250 "dma_device_type": 2 00:17:57.250 } 00:17:57.250 ], 00:17:57.250 "driver_specific": {} 00:17:57.250 } 00:17:57.250 ] 00:17:57.250 21:40:17 -- common/autotest_common.sh@905 -- # return 0 00:17:57.250 21:40:17 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:17:57.250 21:40:17 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:17:57.250 21:40:17 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:17:57.250 21:40:17 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:57.250 21:40:17 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:17:57.250 21:40:17 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:17:57.250 21:40:17 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:17:57.250 21:40:17 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:17:57.250 21:40:17 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:57.250 21:40:17 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:57.250 21:40:17 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:57.250 21:40:17 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:57.250 21:40:17 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:57.250 21:40:17 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:57.509 21:40:17 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:57.509 "name": "Existed_Raid", 00:17:57.509 "uuid": "cb776c3b-2a51-46a8-89ac-eaf96dc1b405", 00:17:57.509 "strip_size_kb": 64, 00:17:57.509 "state": "configuring", 00:17:57.509 "raid_level": "concat", 00:17:57.509 "superblock": true, 00:17:57.509 "num_base_bdevs": 4, 00:17:57.509 "num_base_bdevs_discovered": 2, 00:17:57.509 "num_base_bdevs_operational": 4, 00:17:57.509 "base_bdevs_list": [ 00:17:57.509 { 00:17:57.509 "name": "BaseBdev1", 00:17:57.509 "uuid": "705bf8f0-389d-46b6-898c-db88af0f4d65", 00:17:57.509 "is_configured": true, 00:17:57.509 "data_offset": 2048, 00:17:57.509 "data_size": 63488 00:17:57.509 }, 00:17:57.509 { 00:17:57.509 "name": "BaseBdev2", 00:17:57.509 "uuid": "dd0fa70b-2c9c-45b1-8f6a-c3a9729a0453", 00:17:57.509 "is_configured": true, 00:17:57.509 "data_offset": 2048, 00:17:57.509 "data_size": 63488 00:17:57.509 }, 00:17:57.509 { 00:17:57.509 "name": "BaseBdev3", 00:17:57.509 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:57.509 "is_configured": false, 00:17:57.509 "data_offset": 0, 00:17:57.509 "data_size": 0 00:17:57.509 }, 00:17:57.509 { 00:17:57.509 "name": "BaseBdev4", 00:17:57.509 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:57.509 "is_configured": false, 00:17:57.509 "data_offset": 0, 00:17:57.509 "data_size": 0 00:17:57.509 } 00:17:57.509 ] 00:17:57.509 }' 00:17:57.509 21:40:17 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:57.509 21:40:17 -- common/autotest_common.sh@10 -- # set +x 00:17:57.768 21:40:18 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:17:58.027 [2024-12-06 21:40:18.384251] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:58.027 BaseBdev3 00:17:58.027 21:40:18 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3 00:17:58.027 21:40:18 -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:17:58.027 21:40:18 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:17:58.027 21:40:18 -- 
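At this point the test is midway through its add-one-member-at-a-time loop: each new malloc bdev should bump num_base_bdevs_discovered by one while the array stays "configuring" until all four members exist. A sketch of that loop, reusing the waitforbdev helper sketched earlier:

    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    for i in 2 3 4; do
        $rpc bdev_malloc_create 32 512 -b "BaseBdev$i"   # 32 MiB, 512-byte blocks
        waitforbdev "BaseBdev$i"
        # jq -e exits nonzero if the discovered count does not match.
        $rpc bdev_raid_get_bdevs all | jq -e --argjson n "$i" \
            '.[] | select(.name == "Existed_Raid") | .num_base_bdevs_discovered == $n' \
            > /dev/null || exit 1
    done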
common/autotest_common.sh@899 -- # local i 00:17:58.027 21:40:18 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:17:58.027 21:40:18 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:17:58.027 21:40:18 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:58.307 21:40:18 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:17:58.577 [ 00:17:58.577 { 00:17:58.577 "name": "BaseBdev3", 00:17:58.577 "aliases": [ 00:17:58.577 "78c251bf-b583-4c5f-91d7-830b6f3a70c8" 00:17:58.577 ], 00:17:58.577 "product_name": "Malloc disk", 00:17:58.577 "block_size": 512, 00:17:58.577 "num_blocks": 65536, 00:17:58.577 "uuid": "78c251bf-b583-4c5f-91d7-830b6f3a70c8", 00:17:58.577 "assigned_rate_limits": { 00:17:58.577 "rw_ios_per_sec": 0, 00:17:58.577 "rw_mbytes_per_sec": 0, 00:17:58.577 "r_mbytes_per_sec": 0, 00:17:58.577 "w_mbytes_per_sec": 0 00:17:58.577 }, 00:17:58.577 "claimed": true, 00:17:58.577 "claim_type": "exclusive_write", 00:17:58.577 "zoned": false, 00:17:58.577 "supported_io_types": { 00:17:58.577 "read": true, 00:17:58.577 "write": true, 00:17:58.577 "unmap": true, 00:17:58.577 "write_zeroes": true, 00:17:58.577 "flush": true, 00:17:58.577 "reset": true, 00:17:58.577 "compare": false, 00:17:58.577 "compare_and_write": false, 00:17:58.577 "abort": true, 00:17:58.577 "nvme_admin": false, 00:17:58.577 "nvme_io": false 00:17:58.577 }, 00:17:58.577 "memory_domains": [ 00:17:58.577 { 00:17:58.577 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:58.577 "dma_device_type": 2 00:17:58.577 } 00:17:58.577 ], 00:17:58.577 "driver_specific": {} 00:17:58.577 } 00:17:58.577 ] 00:17:58.577 21:40:18 -- common/autotest_common.sh@905 -- # return 0 00:17:58.577 21:40:18 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:17:58.577 21:40:18 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:17:58.577 21:40:18 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:17:58.577 21:40:18 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:58.577 21:40:18 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:17:58.577 21:40:18 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:17:58.577 21:40:18 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:17:58.577 21:40:18 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:17:58.578 21:40:18 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:58.578 21:40:18 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:58.578 21:40:18 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:58.578 21:40:18 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:58.578 21:40:18 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:58.578 21:40:18 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:58.836 21:40:19 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:58.836 "name": "Existed_Raid", 00:17:58.836 "uuid": "cb776c3b-2a51-46a8-89ac-eaf96dc1b405", 00:17:58.836 "strip_size_kb": 64, 00:17:58.836 "state": "configuring", 00:17:58.836 "raid_level": "concat", 00:17:58.836 "superblock": true, 00:17:58.836 "num_base_bdevs": 4, 00:17:58.836 "num_base_bdevs_discovered": 3, 00:17:58.836 "num_base_bdevs_operational": 4, 00:17:58.837 "base_bdevs_list": [ 00:17:58.837 { 00:17:58.837 "name": "BaseBdev1", 
00:17:58.837 "uuid": "705bf8f0-389d-46b6-898c-db88af0f4d65", 00:17:58.837 "is_configured": true, 00:17:58.837 "data_offset": 2048, 00:17:58.837 "data_size": 63488 00:17:58.837 }, 00:17:58.837 { 00:17:58.837 "name": "BaseBdev2", 00:17:58.837 "uuid": "dd0fa70b-2c9c-45b1-8f6a-c3a9729a0453", 00:17:58.837 "is_configured": true, 00:17:58.837 "data_offset": 2048, 00:17:58.837 "data_size": 63488 00:17:58.837 }, 00:17:58.837 { 00:17:58.837 "name": "BaseBdev3", 00:17:58.837 "uuid": "78c251bf-b583-4c5f-91d7-830b6f3a70c8", 00:17:58.837 "is_configured": true, 00:17:58.837 "data_offset": 2048, 00:17:58.837 "data_size": 63488 00:17:58.837 }, 00:17:58.837 { 00:17:58.837 "name": "BaseBdev4", 00:17:58.837 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:58.837 "is_configured": false, 00:17:58.837 "data_offset": 0, 00:17:58.837 "data_size": 0 00:17:58.837 } 00:17:58.837 ] 00:17:58.837 }' 00:17:58.837 21:40:19 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:58.837 21:40:19 -- common/autotest_common.sh@10 -- # set +x 00:17:59.096 21:40:19 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:17:59.355 [2024-12-06 21:40:19.642543] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:17:59.355 [2024-12-06 21:40:19.643070] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x516000007580 00:17:59.355 [2024-12-06 21:40:19.643203] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:17:59.355 [2024-12-06 21:40:19.643408] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000005860 00:17:59.355 [2024-12-06 21:40:19.643841] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x516000007580 00:17:59.355 BaseBdev4 00:17:59.355 [2024-12-06 21:40:19.644021] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x516000007580 00:17:59.355 [2024-12-06 21:40:19.644195] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:59.355 21:40:19 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev4 00:17:59.355 21:40:19 -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev4 00:17:59.355 21:40:19 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:17:59.355 21:40:19 -- common/autotest_common.sh@899 -- # local i 00:17:59.355 21:40:19 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:17:59.355 21:40:19 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:17:59.355 21:40:19 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:59.613 21:40:19 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:17:59.871 [ 00:17:59.871 { 00:17:59.871 "name": "BaseBdev4", 00:17:59.871 "aliases": [ 00:17:59.871 "fda58dd4-e9e9-4add-b282-5163f3c184a6" 00:17:59.871 ], 00:17:59.871 "product_name": "Malloc disk", 00:17:59.871 "block_size": 512, 00:17:59.871 "num_blocks": 65536, 00:17:59.871 "uuid": "fda58dd4-e9e9-4add-b282-5163f3c184a6", 00:17:59.871 "assigned_rate_limits": { 00:17:59.871 "rw_ios_per_sec": 0, 00:17:59.871 "rw_mbytes_per_sec": 0, 00:17:59.871 "r_mbytes_per_sec": 0, 00:17:59.871 "w_mbytes_per_sec": 0 00:17:59.871 }, 00:17:59.871 "claimed": true, 00:17:59.871 "claim_type": "exclusive_write", 00:17:59.871 "zoned": false, 00:17:59.871 "supported_io_types": { 
00:17:59.871 "read": true, 00:17:59.871 "write": true, 00:17:59.871 "unmap": true, 00:17:59.871 "write_zeroes": true, 00:17:59.871 "flush": true, 00:17:59.871 "reset": true, 00:17:59.871 "compare": false, 00:17:59.871 "compare_and_write": false, 00:17:59.871 "abort": true, 00:17:59.871 "nvme_admin": false, 00:17:59.871 "nvme_io": false 00:17:59.871 }, 00:17:59.871 "memory_domains": [ 00:17:59.871 { 00:17:59.871 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:59.871 "dma_device_type": 2 00:17:59.871 } 00:17:59.871 ], 00:17:59.871 "driver_specific": {} 00:17:59.871 } 00:17:59.871 ] 00:17:59.871 21:40:20 -- common/autotest_common.sh@905 -- # return 0 00:17:59.871 21:40:20 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:17:59.871 21:40:20 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:17:59.871 21:40:20 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:17:59.871 21:40:20 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:59.871 21:40:20 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:17:59.871 21:40:20 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:17:59.871 21:40:20 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:17:59.871 21:40:20 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:17:59.871 21:40:20 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:59.871 21:40:20 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:59.871 21:40:20 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:59.871 21:40:20 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:59.871 21:40:20 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:59.871 21:40:20 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:59.871 21:40:20 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:59.871 "name": "Existed_Raid", 00:17:59.871 "uuid": "cb776c3b-2a51-46a8-89ac-eaf96dc1b405", 00:17:59.871 "strip_size_kb": 64, 00:17:59.871 "state": "online", 00:17:59.871 "raid_level": "concat", 00:17:59.871 "superblock": true, 00:17:59.871 "num_base_bdevs": 4, 00:17:59.871 "num_base_bdevs_discovered": 4, 00:17:59.871 "num_base_bdevs_operational": 4, 00:17:59.871 "base_bdevs_list": [ 00:17:59.871 { 00:17:59.871 "name": "BaseBdev1", 00:17:59.871 "uuid": "705bf8f0-389d-46b6-898c-db88af0f4d65", 00:17:59.871 "is_configured": true, 00:17:59.871 "data_offset": 2048, 00:17:59.871 "data_size": 63488 00:17:59.871 }, 00:17:59.871 { 00:17:59.871 "name": "BaseBdev2", 00:17:59.871 "uuid": "dd0fa70b-2c9c-45b1-8f6a-c3a9729a0453", 00:17:59.871 "is_configured": true, 00:17:59.872 "data_offset": 2048, 00:17:59.872 "data_size": 63488 00:17:59.872 }, 00:17:59.872 { 00:17:59.872 "name": "BaseBdev3", 00:17:59.872 "uuid": "78c251bf-b583-4c5f-91d7-830b6f3a70c8", 00:17:59.872 "is_configured": true, 00:17:59.872 "data_offset": 2048, 00:17:59.872 "data_size": 63488 00:17:59.872 }, 00:17:59.872 { 00:17:59.872 "name": "BaseBdev4", 00:17:59.872 "uuid": "fda58dd4-e9e9-4add-b282-5163f3c184a6", 00:17:59.872 "is_configured": true, 00:17:59.872 "data_offset": 2048, 00:17:59.872 "data_size": 63488 00:17:59.872 } 00:17:59.872 ] 00:17:59.872 }' 00:17:59.872 21:40:20 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:59.872 21:40:20 -- common/autotest_common.sh@10 -- # set +x 00:18:00.440 21:40:20 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:18:00.440 [2024-12-06 
21:40:20.887312] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:18:00.440 [2024-12-06 21:40:20.887344] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:00.440 [2024-12-06 21:40:20.887440] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:00.699 21:40:20 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:18:00.699 21:40:20 -- bdev/bdev_raid.sh@264 -- # has_redundancy concat 00:18:00.699 21:40:20 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:18:00.699 21:40:20 -- bdev/bdev_raid.sh@197 -- # return 1 00:18:00.699 21:40:20 -- bdev/bdev_raid.sh@265 -- # expected_state=offline 00:18:00.699 21:40:20 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid offline concat 64 3 00:18:00.699 21:40:20 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:18:00.699 21:40:20 -- bdev/bdev_raid.sh@118 -- # local expected_state=offline 00:18:00.699 21:40:20 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:18:00.699 21:40:20 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:18:00.699 21:40:20 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:18:00.699 21:40:20 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:00.699 21:40:20 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:00.699 21:40:20 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:00.699 21:40:20 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:00.699 21:40:20 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:00.699 21:40:20 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:00.958 21:40:21 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:00.958 "name": "Existed_Raid", 00:18:00.958 "uuid": "cb776c3b-2a51-46a8-89ac-eaf96dc1b405", 00:18:00.958 "strip_size_kb": 64, 00:18:00.958 "state": "offline", 00:18:00.958 "raid_level": "concat", 00:18:00.958 "superblock": true, 00:18:00.958 "num_base_bdevs": 4, 00:18:00.958 "num_base_bdevs_discovered": 3, 00:18:00.958 "num_base_bdevs_operational": 3, 00:18:00.958 "base_bdevs_list": [ 00:18:00.958 { 00:18:00.958 "name": null, 00:18:00.958 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:00.958 "is_configured": false, 00:18:00.959 "data_offset": 2048, 00:18:00.959 "data_size": 63488 00:18:00.959 }, 00:18:00.959 { 00:18:00.959 "name": "BaseBdev2", 00:18:00.959 "uuid": "dd0fa70b-2c9c-45b1-8f6a-c3a9729a0453", 00:18:00.959 "is_configured": true, 00:18:00.959 "data_offset": 2048, 00:18:00.959 "data_size": 63488 00:18:00.959 }, 00:18:00.959 { 00:18:00.959 "name": "BaseBdev3", 00:18:00.959 "uuid": "78c251bf-b583-4c5f-91d7-830b6f3a70c8", 00:18:00.959 "is_configured": true, 00:18:00.959 "data_offset": 2048, 00:18:00.959 "data_size": 63488 00:18:00.959 }, 00:18:00.959 { 00:18:00.959 "name": "BaseBdev4", 00:18:00.959 "uuid": "fda58dd4-e9e9-4add-b282-5163f3c184a6", 00:18:00.959 "is_configured": true, 00:18:00.959 "data_offset": 2048, 00:18:00.959 "data_size": 63488 00:18:00.959 } 00:18:00.959 ] 00:18:00.959 }' 00:18:00.959 21:40:21 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:00.959 21:40:21 -- common/autotest_common.sh@10 -- # set +x 00:18:01.217 21:40:21 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:18:01.217 21:40:21 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:18:01.217 21:40:21 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 
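Deleting BaseBdev1 drops the array from "online" to "offline" because concat carries no redundancy; the has_redundancy case statement traced above encodes which levels tolerate member loss. A hedged sketch of that helper (the exact level list in bdev_raid.sh may differ):

    has_redundancy() {
        case $1 in
            raid1 | raid5f) return 0 ;;  # mirrored/parity levels survive losing a member
            *) return 1 ;;               # raid0/concat: any loss takes the array offline
        esac
    }
    # Usage, as in the trace: concat has none, so the expected state flips.
    if has_redundancy concat; then expected_state=online; else expected_state=offline; fi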
00:18:01.217 21:40:21 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:18:01.476 21:40:21 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:18:01.476 21:40:21 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:18:01.476 21:40:21 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:18:01.476 [2024-12-06 21:40:21.962500] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:18:01.735 21:40:22 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:18:01.735 21:40:22 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:18:01.735 21:40:22 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:01.735 21:40:22 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:18:01.994 21:40:22 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:18:01.994 21:40:22 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:18:01.994 21:40:22 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:18:01.994 [2024-12-06 21:40:22.471972] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:18:02.253 21:40:22 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:18:02.253 21:40:22 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:18:02.253 21:40:22 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:02.253 21:40:22 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:18:02.512 21:40:22 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:18:02.512 21:40:22 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:18:02.512 21:40:22 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:18:02.512 [2024-12-06 21:40:22.951792] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:18:02.512 [2024-12-06 21:40:22.951852] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000007580 name Existed_Raid, state offline 00:18:02.771 21:40:23 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:18:02.771 21:40:23 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:18:02.771 21:40:23 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:18:02.771 21:40:23 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:03.030 21:40:23 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:18:03.030 21:40:23 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:18:03.030 21:40:23 -- bdev/bdev_raid.sh@287 -- # killprocess 75882 00:18:03.030 21:40:23 -- common/autotest_common.sh@936 -- # '[' -z 75882 ']' 00:18:03.030 21:40:23 -- common/autotest_common.sh@940 -- # kill -0 75882 00:18:03.030 21:40:23 -- common/autotest_common.sh@941 -- # uname 00:18:03.030 21:40:23 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:03.030 21:40:23 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 75882 00:18:03.030 killing process with pid 75882 00:18:03.030 21:40:23 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:18:03.030 21:40:23 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:18:03.030 21:40:23 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 75882' 00:18:03.030 21:40:23 -- common/autotest_common.sh@955 -- # kill 
75882 00:18:03.030 [2024-12-06 21:40:23.314512] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:03.030 21:40:23 -- common/autotest_common.sh@960 -- # wait 75882 00:18:03.030 [2024-12-06 21:40:23.314629] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:03.967 ************************************ 00:18:03.967 END TEST raid_state_function_test_sb 00:18:03.967 ************************************ 00:18:03.967 21:40:24 -- bdev/bdev_raid.sh@289 -- # return 0 00:18:03.967 00:18:03.967 real 0m12.940s 00:18:03.967 user 0m21.787s 00:18:03.967 sys 0m1.892s 00:18:03.967 21:40:24 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:18:03.967 21:40:24 -- common/autotest_common.sh@10 -- # set +x 00:18:03.967 21:40:24 -- bdev/bdev_raid.sh@729 -- # run_test raid_superblock_test raid_superblock_test concat 4 00:18:03.967 21:40:24 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:18:03.967 21:40:24 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:18:03.967 21:40:24 -- common/autotest_common.sh@10 -- # set +x 00:18:03.967 ************************************ 00:18:03.967 START TEST raid_superblock_test 00:18:03.967 ************************************ 00:18:03.967 21:40:24 -- common/autotest_common.sh@1114 -- # raid_superblock_test concat 4 00:18:03.967 21:40:24 -- bdev/bdev_raid.sh@338 -- # local raid_level=concat 00:18:03.967 21:40:24 -- bdev/bdev_raid.sh@339 -- # local num_base_bdevs=4 00:18:03.967 21:40:24 -- bdev/bdev_raid.sh@340 -- # base_bdevs_malloc=() 00:18:03.967 21:40:24 -- bdev/bdev_raid.sh@340 -- # local base_bdevs_malloc 00:18:03.967 21:40:24 -- bdev/bdev_raid.sh@341 -- # base_bdevs_pt=() 00:18:03.967 21:40:24 -- bdev/bdev_raid.sh@341 -- # local base_bdevs_pt 00:18:03.967 21:40:24 -- bdev/bdev_raid.sh@342 -- # base_bdevs_pt_uuid=() 00:18:03.967 21:40:24 -- bdev/bdev_raid.sh@342 -- # local base_bdevs_pt_uuid 00:18:03.967 21:40:24 -- bdev/bdev_raid.sh@343 -- # local raid_bdev_name=raid_bdev1 00:18:03.967 21:40:24 -- bdev/bdev_raid.sh@344 -- # local strip_size 00:18:03.967 21:40:24 -- bdev/bdev_raid.sh@345 -- # local strip_size_create_arg 00:18:03.967 21:40:24 -- bdev/bdev_raid.sh@346 -- # local raid_bdev_uuid 00:18:03.967 21:40:24 -- bdev/bdev_raid.sh@347 -- # local raid_bdev 00:18:03.967 21:40:24 -- bdev/bdev_raid.sh@349 -- # '[' concat '!=' raid1 ']' 00:18:03.968 21:40:24 -- bdev/bdev_raid.sh@350 -- # strip_size=64 00:18:03.968 21:40:24 -- bdev/bdev_raid.sh@351 -- # strip_size_create_arg='-z 64' 00:18:03.968 21:40:24 -- bdev/bdev_raid.sh@357 -- # raid_pid=76284 00:18:03.968 21:40:24 -- bdev/bdev_raid.sh@358 -- # waitforlisten 76284 /var/tmp/spdk-raid.sock 00:18:03.968 21:40:24 -- bdev/bdev_raid.sh@356 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:18:03.968 21:40:24 -- common/autotest_common.sh@829 -- # '[' -z 76284 ']' 00:18:03.968 21:40:24 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:18:03.968 21:40:24 -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:03.968 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:18:03.968 21:40:24 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 
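The killprocess trace above (the -z guard, kill -0 liveness probe, uname and ps checks, then kill and wait) tears down the bdev_svc app between tests. A hedged reconstruction of the core of that helper from common/autotest_common.sh:

    killprocess() {
        local pid=$1
        [ -n "$pid" ] || return 1
        kill -0 "$pid" || return 1          # is the process still running?
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"                          # reap the child so the next test starts clean
    }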
00:18:03.968 21:40:24 -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:03.968 21:40:24 -- common/autotest_common.sh@10 -- # set +x 00:18:04.226 [2024-12-06 21:40:24.491357] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:18:04.227 [2024-12-06 21:40:24.491762] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76284 ] 00:18:04.227 [2024-12-06 21:40:24.662932] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:04.486 [2024-12-06 21:40:24.827782] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:04.744 [2024-12-06 21:40:24.994888] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:05.002 21:40:25 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:05.002 21:40:25 -- common/autotest_common.sh@862 -- # return 0 00:18:05.002 21:40:25 -- bdev/bdev_raid.sh@361 -- # (( i = 1 )) 00:18:05.002 21:40:25 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:18:05.002 21:40:25 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc1 00:18:05.002 21:40:25 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt1 00:18:05.002 21:40:25 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:18:05.002 21:40:25 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:18:05.002 21:40:25 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:18:05.002 21:40:25 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:18:05.002 21:40:25 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:18:05.260 malloc1 00:18:05.260 21:40:25 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:18:05.518 [2024-12-06 21:40:25.780990] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:18:05.518 [2024-12-06 21:40:25.781258] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:05.518 [2024-12-06 21:40:25.781306] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000006980 00:18:05.518 [2024-12-06 21:40:25.781321] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:05.518 [2024-12-06 21:40:25.783617] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:05.518 [2024-12-06 21:40:25.783657] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:18:05.518 pt1 00:18:05.518 21:40:25 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:18:05.518 21:40:25 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:18:05.518 21:40:25 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc2 00:18:05.518 21:40:25 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt2 00:18:05.518 21:40:25 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:18:05.518 21:40:25 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:18:05.518 21:40:25 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:18:05.518 21:40:25 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:18:05.518 21:40:25 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:18:05.778 malloc2 00:18:05.778 21:40:26 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:18:05.778 [2024-12-06 21:40:26.263108] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:18:05.778 [2024-12-06 21:40:26.263380] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:05.778 [2024-12-06 21:40:26.263470] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000007580 00:18:05.778 [2024-12-06 21:40:26.263616] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:05.778 [2024-12-06 21:40:26.266067] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:05.778 [2024-12-06 21:40:26.266251] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:18:05.778 pt2 00:18:06.037 21:40:26 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:18:06.037 21:40:26 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:18:06.037 21:40:26 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc3 00:18:06.037 21:40:26 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt3 00:18:06.037 21:40:26 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:18:06.037 21:40:26 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:18:06.037 21:40:26 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:18:06.037 21:40:26 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:18:06.037 21:40:26 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:18:06.037 malloc3 00:18:06.297 21:40:26 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:18:06.297 [2024-12-06 21:40:26.722469] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:18:06.297 [2024-12-06 21:40:26.722565] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:06.297 [2024-12-06 21:40:26.722599] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000008180 00:18:06.297 [2024-12-06 21:40:26.722613] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:06.297 [2024-12-06 21:40:26.725117] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:06.297 [2024-12-06 21:40:26.725172] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:18:06.297 pt3 00:18:06.297 21:40:26 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:18:06.297 21:40:26 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:18:06.297 21:40:26 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc4 00:18:06.297 21:40:26 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt4 00:18:06.297 21:40:26 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:18:06.297 21:40:26 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:18:06.297 21:40:26 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:18:06.297 21:40:26 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:18:06.297 21:40:26 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
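The per-member setup raid_superblock_test is running above wraps each malloc bdev in a passthru bdev with a fixed, predictable UUID, presumably so the superblock contents can later be checked against known values. A sketch of the pattern, using the exact RPCs from the trace:

    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    for i in 1 2 3 4; do
        $rpc bdev_malloc_create 32 512 -b "malloc$i"
        $rpc bdev_passthru_create -b "malloc$i" -p "pt$i" \
            -u "00000000-0000-0000-0000-00000000000$i"
    done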
/var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc4 00:18:06.557 malloc4 00:18:06.557 21:40:26 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:18:06.816 [2024-12-06 21:40:27.149687] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:18:06.816 [2024-12-06 21:40:27.149754] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:06.816 [2024-12-06 21:40:27.149792] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000008d80 00:18:06.816 [2024-12-06 21:40:27.149807] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:06.816 [2024-12-06 21:40:27.152270] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:06.816 [2024-12-06 21:40:27.152310] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:18:06.816 pt4 00:18:06.816 21:40:27 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:18:06.816 21:40:27 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:18:06.816 21:40:27 -- bdev/bdev_raid.sh@375 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'pt1 pt2 pt3 pt4' -n raid_bdev1 -s 00:18:07.076 [2024-12-06 21:40:27.361820] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:18:07.076 [2024-12-06 21:40:27.363942] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:07.076 [2024-12-06 21:40:27.364181] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:18:07.076 [2024-12-06 21:40:27.364391] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:18:07.076 [2024-12-06 21:40:27.364804] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x516000009380 00:18:07.076 [2024-12-06 21:40:27.364970] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:18:07.076 [2024-12-06 21:40:27.365137] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000005790 00:18:07.076 [2024-12-06 21:40:27.365598] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x516000009380 00:18:07.076 [2024-12-06 21:40:27.365749] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x516000009380 00:18:07.076 [2024-12-06 21:40:27.366079] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:07.076 21:40:27 -- bdev/bdev_raid.sh@376 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:18:07.076 21:40:27 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:18:07.076 21:40:27 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:18:07.076 21:40:27 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:18:07.076 21:40:27 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:18:07.076 21:40:27 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:18:07.076 21:40:27 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:07.076 21:40:27 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:07.076 21:40:27 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:07.076 21:40:27 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:07.076 21:40:27 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 
00:18:07.076 21:40:27 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:07.336 21:40:27 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:07.336 "name": "raid_bdev1", 00:18:07.336 "uuid": "eec8d592-8e3a-45a6-a3cc-31f9dc66c730", 00:18:07.336 "strip_size_kb": 64, 00:18:07.336 "state": "online", 00:18:07.336 "raid_level": "concat", 00:18:07.336 "superblock": true, 00:18:07.336 "num_base_bdevs": 4, 00:18:07.336 "num_base_bdevs_discovered": 4, 00:18:07.336 "num_base_bdevs_operational": 4, 00:18:07.336 "base_bdevs_list": [ 00:18:07.336 { 00:18:07.336 "name": "pt1", 00:18:07.336 "uuid": "e2522855-da87-595d-a13c-6527337cfc0f", 00:18:07.336 "is_configured": true, 00:18:07.336 "data_offset": 2048, 00:18:07.336 "data_size": 63488 00:18:07.336 }, 00:18:07.336 { 00:18:07.336 "name": "pt2", 00:18:07.336 "uuid": "c01f5bef-250d-53e5-9dcb-b7665ee5c9e6", 00:18:07.336 "is_configured": true, 00:18:07.336 "data_offset": 2048, 00:18:07.336 "data_size": 63488 00:18:07.336 }, 00:18:07.336 { 00:18:07.336 "name": "pt3", 00:18:07.336 "uuid": "d156d6bd-c7c0-5210-ae0a-487013cd7d0b", 00:18:07.336 "is_configured": true, 00:18:07.336 "data_offset": 2048, 00:18:07.336 "data_size": 63488 00:18:07.336 }, 00:18:07.336 { 00:18:07.336 "name": "pt4", 00:18:07.336 "uuid": "d6fd37bb-90d7-523b-a8e8-f9f8747da1d0", 00:18:07.336 "is_configured": true, 00:18:07.336 "data_offset": 2048, 00:18:07.336 "data_size": 63488 00:18:07.336 } 00:18:07.336 ] 00:18:07.336 }' 00:18:07.336 21:40:27 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:07.336 21:40:27 -- common/autotest_common.sh@10 -- # set +x 00:18:07.595 21:40:27 -- bdev/bdev_raid.sh@379 -- # jq -r '.[] | .uuid' 00:18:07.595 21:40:27 -- bdev/bdev_raid.sh@379 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:18:07.855 [2024-12-06 21:40:28.166450] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:07.855 21:40:28 -- bdev/bdev_raid.sh@379 -- # raid_bdev_uuid=eec8d592-8e3a-45a6-a3cc-31f9dc66c730 00:18:07.855 21:40:28 -- bdev/bdev_raid.sh@380 -- # '[' -z eec8d592-8e3a-45a6-a3cc-31f9dc66c730 ']' 00:18:07.855 21:40:28 -- bdev/bdev_raid.sh@385 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:18:08.114 [2024-12-06 21:40:28.422281] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:08.114 [2024-12-06 21:40:28.422322] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:08.114 [2024-12-06 21:40:28.422437] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:08.114 [2024-12-06 21:40:28.422540] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:08.114 [2024-12-06 21:40:28.422575] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000009380 name raid_bdev1, state offline 00:18:08.114 21:40:28 -- bdev/bdev_raid.sh@386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:08.114 21:40:28 -- bdev/bdev_raid.sh@386 -- # jq -r '.[]' 00:18:08.372 21:40:28 -- bdev/bdev_raid.sh@386 -- # raid_bdev= 00:18:08.373 21:40:28 -- bdev/bdev_raid.sh@387 -- # '[' -n '' ']' 00:18:08.373 21:40:28 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:18:08.373 21:40:28 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 
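The trace above completes the superblock round trip: the array is created with -s so a superblock is written to every member, the array UUID is recorded, and the raid is then torn down while the members keep their on-disk superblocks. A minimal sketch of those steps:

    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    $rpc bdev_raid_create -z 64 -s -r concat -b 'pt1 pt2 pt3 pt4' -n raid_bdev1
    raid_bdev_uuid=$($rpc bdev_get_bdevs -b raid_bdev1 | jq -r '.[] | .uuid')
    [ -n "$raid_bdev_uuid" ] || exit 1       # the trace asserts the UUID is non-empty
    $rpc bdev_raid_delete raid_bdev1         # members retain their superblocks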
00:18:08.373 21:40:28 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:18:08.373 21:40:28 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:18:08.631 21:40:29 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:18:08.631 21:40:29 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:18:08.890 21:40:29 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:18:08.890 21:40:29 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt4 00:18:09.148 21:40:29 -- bdev/bdev_raid.sh@395 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:18:09.148 21:40:29 -- bdev/bdev_raid.sh@395 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:18:09.408 21:40:29 -- bdev/bdev_raid.sh@395 -- # '[' false == true ']' 00:18:09.408 21:40:29 -- bdev/bdev_raid.sh@401 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:18:09.408 21:40:29 -- common/autotest_common.sh@650 -- # local es=0 00:18:09.408 21:40:29 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:18:09.408 21:40:29 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:09.408 21:40:29 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:09.408 21:40:29 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:09.408 21:40:29 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:09.408 21:40:29 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:09.408 21:40:29 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:09.408 21:40:29 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:09.408 21:40:29 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:18:09.408 21:40:29 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:18:09.667 [2024-12-06 21:40:29.922601] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:18:09.667 [2024-12-06 21:40:29.924639] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:18:09.667 [2024-12-06 21:40:29.924916] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:18:09.667 [2024-12-06 21:40:29.924979] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:18:09.667 [2024-12-06 21:40:29.925048] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc1 00:18:09.667 [2024-12-06 21:40:29.925115] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc2 00:18:09.667 [2024-12-06 21:40:29.925163] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc3 00:18:09.667 
[2024-12-06 21:40:29.925190] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc4 00:18:09.667 [2024-12-06 21:40:29.925227] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:09.667 [2024-12-06 21:40:29.925240] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000009980 name raid_bdev1, state configuring 00:18:09.667 request: 00:18:09.667 { 00:18:09.667 "name": "raid_bdev1", 00:18:09.667 "raid_level": "concat", 00:18:09.667 "base_bdevs": [ 00:18:09.667 "malloc1", 00:18:09.667 "malloc2", 00:18:09.667 "malloc3", 00:18:09.667 "malloc4" 00:18:09.667 ], 00:18:09.667 "superblock": false, 00:18:09.667 "strip_size_kb": 64, 00:18:09.667 "method": "bdev_raid_create", 00:18:09.667 "req_id": 1 00:18:09.667 } 00:18:09.667 Got JSON-RPC error response 00:18:09.667 response: 00:18:09.667 { 00:18:09.667 "code": -17, 00:18:09.667 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:18:09.667 } 00:18:09.667 21:40:29 -- common/autotest_common.sh@653 -- # es=1 00:18:09.667 21:40:29 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:09.667 21:40:29 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:09.667 21:40:29 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:09.667 21:40:29 -- bdev/bdev_raid.sh@403 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:09.667 21:40:29 -- bdev/bdev_raid.sh@403 -- # jq -r '.[]' 00:18:09.926 21:40:30 -- bdev/bdev_raid.sh@403 -- # raid_bdev= 00:18:09.926 21:40:30 -- bdev/bdev_raid.sh@404 -- # '[' -n '' ']' 00:18:09.926 21:40:30 -- bdev/bdev_raid.sh@409 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:18:09.926 [2024-12-06 21:40:30.366686] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:18:09.926 [2024-12-06 21:40:30.367029] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:09.926 [2024-12-06 21:40:30.367105] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000009f80 00:18:09.926 [2024-12-06 21:40:30.367221] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:09.926 [2024-12-06 21:40:30.369745] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:09.926 [2024-12-06 21:40:30.369788] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:18:09.926 [2024-12-06 21:40:30.369909] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1 00:18:09.926 [2024-12-06 21:40:30.369972] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:18:09.926 pt1 00:18:09.926 21:40:30 -- bdev/bdev_raid.sh@412 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 4 00:18:09.926 21:40:30 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:18:09.926 21:40:30 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:18:09.926 21:40:30 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:18:09.926 21:40:30 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:18:09.926 21:40:30 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:18:09.926 21:40:30 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:09.926 21:40:30 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:09.926 21:40:30 -- bdev/bdev_raid.sh@124 -- # local 
num_base_bdevs_discovered 00:18:09.926 21:40:30 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:09.926 21:40:30 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:09.926 21:40:30 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:10.184 21:40:30 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:10.184 "name": "raid_bdev1", 00:18:10.184 "uuid": "eec8d592-8e3a-45a6-a3cc-31f9dc66c730", 00:18:10.184 "strip_size_kb": 64, 00:18:10.184 "state": "configuring", 00:18:10.184 "raid_level": "concat", 00:18:10.184 "superblock": true, 00:18:10.184 "num_base_bdevs": 4, 00:18:10.184 "num_base_bdevs_discovered": 1, 00:18:10.184 "num_base_bdevs_operational": 4, 00:18:10.184 "base_bdevs_list": [ 00:18:10.184 { 00:18:10.184 "name": "pt1", 00:18:10.184 "uuid": "e2522855-da87-595d-a13c-6527337cfc0f", 00:18:10.184 "is_configured": true, 00:18:10.184 "data_offset": 2048, 00:18:10.184 "data_size": 63488 00:18:10.184 }, 00:18:10.184 { 00:18:10.184 "name": null, 00:18:10.184 "uuid": "c01f5bef-250d-53e5-9dcb-b7665ee5c9e6", 00:18:10.184 "is_configured": false, 00:18:10.184 "data_offset": 2048, 00:18:10.184 "data_size": 63488 00:18:10.184 }, 00:18:10.184 { 00:18:10.184 "name": null, 00:18:10.184 "uuid": "d156d6bd-c7c0-5210-ae0a-487013cd7d0b", 00:18:10.184 "is_configured": false, 00:18:10.184 "data_offset": 2048, 00:18:10.185 "data_size": 63488 00:18:10.185 }, 00:18:10.185 { 00:18:10.185 "name": null, 00:18:10.185 "uuid": "d6fd37bb-90d7-523b-a8e8-f9f8747da1d0", 00:18:10.185 "is_configured": false, 00:18:10.185 "data_offset": 2048, 00:18:10.185 "data_size": 63488 00:18:10.185 } 00:18:10.185 ] 00:18:10.185 }' 00:18:10.185 21:40:30 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:10.185 21:40:30 -- common/autotest_common.sh@10 -- # set +x 00:18:10.752 21:40:30 -- bdev/bdev_raid.sh@414 -- # '[' 4 -gt 2 ']' 00:18:10.752 21:40:30 -- bdev/bdev_raid.sh@416 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:18:10.752 [2024-12-06 21:40:31.171258] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:18:10.752 [2024-12-06 21:40:31.171343] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:10.752 [2024-12-06 21:40:31.171378] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x51600000a880 00:18:10.752 [2024-12-06 21:40:31.171392] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:10.752 [2024-12-06 21:40:31.171981] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:10.752 [2024-12-06 21:40:31.172012] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:18:10.752 [2024-12-06 21:40:31.172140] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:18:10.752 [2024-12-06 21:40:31.172166] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:10.752 pt2 00:18:10.752 21:40:31 -- bdev/bdev_raid.sh@417 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:18:11.010 [2024-12-06 21:40:31.399311] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:18:11.010 21:40:31 -- bdev/bdev_raid.sh@418 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 4 00:18:11.010 21:40:31 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 
00:18:11.010 21:40:31 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:18:11.010 21:40:31 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:18:11.010 21:40:31 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:18:11.010 21:40:31 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:18:11.010 21:40:31 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:11.010 21:40:31 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:11.010 21:40:31 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:11.010 21:40:31 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:11.010 21:40:31 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:11.010 21:40:31 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:11.268 21:40:31 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:11.268 "name": "raid_bdev1", 00:18:11.268 "uuid": "eec8d592-8e3a-45a6-a3cc-31f9dc66c730", 00:18:11.268 "strip_size_kb": 64, 00:18:11.268 "state": "configuring", 00:18:11.268 "raid_level": "concat", 00:18:11.268 "superblock": true, 00:18:11.268 "num_base_bdevs": 4, 00:18:11.268 "num_base_bdevs_discovered": 1, 00:18:11.268 "num_base_bdevs_operational": 4, 00:18:11.268 "base_bdevs_list": [ 00:18:11.268 { 00:18:11.268 "name": "pt1", 00:18:11.268 "uuid": "e2522855-da87-595d-a13c-6527337cfc0f", 00:18:11.268 "is_configured": true, 00:18:11.268 "data_offset": 2048, 00:18:11.268 "data_size": 63488 00:18:11.268 }, 00:18:11.268 { 00:18:11.268 "name": null, 00:18:11.268 "uuid": "c01f5bef-250d-53e5-9dcb-b7665ee5c9e6", 00:18:11.268 "is_configured": false, 00:18:11.268 "data_offset": 2048, 00:18:11.268 "data_size": 63488 00:18:11.268 }, 00:18:11.268 { 00:18:11.268 "name": null, 00:18:11.268 "uuid": "d156d6bd-c7c0-5210-ae0a-487013cd7d0b", 00:18:11.268 "is_configured": false, 00:18:11.268 "data_offset": 2048, 00:18:11.268 "data_size": 63488 00:18:11.268 }, 00:18:11.268 { 00:18:11.268 "name": null, 00:18:11.268 "uuid": "d6fd37bb-90d7-523b-a8e8-f9f8747da1d0", 00:18:11.268 "is_configured": false, 00:18:11.268 "data_offset": 2048, 00:18:11.268 "data_size": 63488 00:18:11.268 } 00:18:11.268 ] 00:18:11.268 }' 00:18:11.268 21:40:31 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:11.268 21:40:31 -- common/autotest_common.sh@10 -- # set +x 00:18:11.527 21:40:31 -- bdev/bdev_raid.sh@422 -- # (( i = 1 )) 00:18:11.527 21:40:31 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:18:11.527 21:40:31 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:18:11.786 [2024-12-06 21:40:32.143535] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:18:11.786 [2024-12-06 21:40:32.144021] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:11.786 [2024-12-06 21:40:32.144152] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x51600000ab80 00:18:11.786 [2024-12-06 21:40:32.144238] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:11.786 [2024-12-06 21:40:32.144944] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:11.786 [2024-12-06 21:40:32.145191] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:18:11.786 [2024-12-06 21:40:32.145401] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock 
found on bdev pt2 00:18:11.786 [2024-12-06 21:40:32.145456] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:11.786 pt2 00:18:11.786 21:40:32 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:18:11.786 21:40:32 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:18:11.786 21:40:32 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:18:12.044 [2024-12-06 21:40:32.391602] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:18:12.044 [2024-12-06 21:40:32.392226] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:12.044 [2024-12-06 21:40:32.392370] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x51600000ae80 00:18:12.044 [2024-12-06 21:40:32.392505] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:12.044 [2024-12-06 21:40:32.393147] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:12.044 [2024-12-06 21:40:32.393414] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:18:12.044 [2024-12-06 21:40:32.393669] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3 00:18:12.044 [2024-12-06 21:40:32.393864] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:18:12.044 pt3 00:18:12.044 21:40:32 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:18:12.044 21:40:32 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:18:12.044 21:40:32 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:18:12.303 [2024-12-06 21:40:32.643648] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:18:12.303 [2024-12-06 21:40:32.643995] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:12.303 [2024-12-06 21:40:32.644150] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x51600000b180 00:18:12.303 [2024-12-06 21:40:32.644271] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:12.303 [2024-12-06 21:40:32.644962] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:12.303 [2024-12-06 21:40:32.645093] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:18:12.303 [2024-12-06 21:40:32.645260] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt4 00:18:12.303 [2024-12-06 21:40:32.645314] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:18:12.303 [2024-12-06 21:40:32.645465] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x51600000a580 00:18:12.303 [2024-12-06 21:40:32.645484] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:18:12.303 [2024-12-06 21:40:32.645614] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000005860 00:18:12.303 [2024-12-06 21:40:32.646023] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x51600000a580 00:18:12.303 [2024-12-06 21:40:32.646046] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x51600000a580 00:18:12.303 [2024-12-06 21:40:32.646241] bdev_raid.c: 316:raid_bdev_destroy_cb: 
*DEBUG*: raid_bdev_destroy_cb 00:18:12.303 pt4 00:18:12.303 21:40:32 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:18:12.303 21:40:32 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:18:12.303 21:40:32 -- bdev/bdev_raid.sh@427 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:18:12.303 21:40:32 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:18:12.303 21:40:32 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:18:12.303 21:40:32 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:18:12.303 21:40:32 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:18:12.303 21:40:32 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:18:12.303 21:40:32 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:12.303 21:40:32 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:12.303 21:40:32 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:12.303 21:40:32 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:12.303 21:40:32 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:12.303 21:40:32 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:12.562 21:40:32 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:12.562 "name": "raid_bdev1", 00:18:12.562 "uuid": "eec8d592-8e3a-45a6-a3cc-31f9dc66c730", 00:18:12.562 "strip_size_kb": 64, 00:18:12.562 "state": "online", 00:18:12.562 "raid_level": "concat", 00:18:12.562 "superblock": true, 00:18:12.562 "num_base_bdevs": 4, 00:18:12.562 "num_base_bdevs_discovered": 4, 00:18:12.562 "num_base_bdevs_operational": 4, 00:18:12.562 "base_bdevs_list": [ 00:18:12.562 { 00:18:12.562 "name": "pt1", 00:18:12.562 "uuid": "e2522855-da87-595d-a13c-6527337cfc0f", 00:18:12.562 "is_configured": true, 00:18:12.562 "data_offset": 2048, 00:18:12.562 "data_size": 63488 00:18:12.562 }, 00:18:12.562 { 00:18:12.562 "name": "pt2", 00:18:12.562 "uuid": "c01f5bef-250d-53e5-9dcb-b7665ee5c9e6", 00:18:12.562 "is_configured": true, 00:18:12.562 "data_offset": 2048, 00:18:12.562 "data_size": 63488 00:18:12.562 }, 00:18:12.562 { 00:18:12.562 "name": "pt3", 00:18:12.562 "uuid": "d156d6bd-c7c0-5210-ae0a-487013cd7d0b", 00:18:12.562 "is_configured": true, 00:18:12.562 "data_offset": 2048, 00:18:12.562 "data_size": 63488 00:18:12.562 }, 00:18:12.562 { 00:18:12.562 "name": "pt4", 00:18:12.562 "uuid": "d6fd37bb-90d7-523b-a8e8-f9f8747da1d0", 00:18:12.562 "is_configured": true, 00:18:12.562 "data_offset": 2048, 00:18:12.562 "data_size": 63488 00:18:12.562 } 00:18:12.562 ] 00:18:12.562 }' 00:18:12.562 21:40:32 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:12.562 21:40:32 -- common/autotest_common.sh@10 -- # set +x 00:18:12.821 21:40:33 -- bdev/bdev_raid.sh@430 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:18:12.821 21:40:33 -- bdev/bdev_raid.sh@430 -- # jq -r '.[] | .uuid' 00:18:13.090 [2024-12-06 21:40:33.456121] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:13.091 21:40:33 -- bdev/bdev_raid.sh@430 -- # '[' eec8d592-8e3a-45a6-a3cc-31f9dc66c730 '!=' eec8d592-8e3a-45a6-a3cc-31f9dc66c730 ']' 00:18:13.091 21:40:33 -- bdev/bdev_raid.sh@434 -- # has_redundancy concat 00:18:13.091 21:40:33 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:18:13.091 21:40:33 -- bdev/bdev_raid.sh@197 -- # return 1 00:18:13.091 21:40:33 -- bdev/bdev_raid.sh@511 -- # killprocess 76284 00:18:13.091 21:40:33 -- common/autotest_common.sh@936 -- # '[' 
-z 76284 ']' 00:18:13.091 21:40:33 -- common/autotest_common.sh@940 -- # kill -0 76284 00:18:13.091 21:40:33 -- common/autotest_common.sh@941 -- # uname 00:18:13.091 21:40:33 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:13.091 21:40:33 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 76284 00:18:13.091 killing process with pid 76284 00:18:13.091 21:40:33 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:18:13.091 21:40:33 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:18:13.091 21:40:33 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 76284' 00:18:13.091 21:40:33 -- common/autotest_common.sh@955 -- # kill 76284 00:18:13.091 [2024-12-06 21:40:33.500800] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:13.091 21:40:33 -- common/autotest_common.sh@960 -- # wait 76284 00:18:13.091 [2024-12-06 21:40:33.500886] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:13.091 [2024-12-06 21:40:33.500961] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:13.091 [2024-12-06 21:40:33.500974] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x51600000a580 name raid_bdev1, state offline 00:18:13.366 [2024-12-06 21:40:33.801705] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:14.752 21:40:34 -- bdev/bdev_raid.sh@513 -- # return 0 00:18:14.752 00:18:14.752 real 0m10.407s 00:18:14.752 user 0m17.256s 00:18:14.752 sys 0m1.425s 00:18:14.752 21:40:34 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:18:14.752 ************************************ 00:18:14.752 END TEST raid_superblock_test 00:18:14.752 ************************************ 00:18:14.752 21:40:34 -- common/autotest_common.sh@10 -- # set +x 00:18:14.752 21:40:34 -- bdev/bdev_raid.sh@726 -- # for level in raid0 concat raid1 00:18:14.752 21:40:34 -- bdev/bdev_raid.sh@727 -- # run_test raid_state_function_test raid_state_function_test raid1 4 false 00:18:14.752 21:40:34 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:18:14.752 21:40:34 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:18:14.752 21:40:34 -- common/autotest_common.sh@10 -- # set +x 00:18:14.752 ************************************ 00:18:14.752 START TEST raid_state_function_test 00:18:14.752 ************************************ 00:18:14.752 21:40:34 -- common/autotest_common.sh@1114 -- # raid_state_function_test raid1 4 false 00:18:14.752 21:40:34 -- bdev/bdev_raid.sh@202 -- # local raid_level=raid1 00:18:14.752 21:40:34 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=4 00:18:14.752 21:40:34 -- bdev/bdev_raid.sh@204 -- # local superblock=false 00:18:14.752 21:40:34 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:18:14.752 21:40:34 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:18:14.752 21:40:34 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:18:14.752 21:40:34 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev1 00:18:14.752 21:40:34 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:18:14.752 21:40:34 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:18:14.752 21:40:34 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev2 00:18:14.752 21:40:34 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:18:14.752 21:40:34 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:18:14.752 21:40:34 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev3 00:18:14.752 21:40:34 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:18:14.752 21:40:34 -- bdev/bdev_raid.sh@206 -- 
# (( i <= num_base_bdevs )) 00:18:14.752 21:40:34 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev4 00:18:14.752 21:40:34 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:18:14.752 21:40:34 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:18:14.752 21:40:34 -- bdev/bdev_raid.sh@206 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:18:14.752 21:40:34 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:18:14.752 21:40:34 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:18:14.752 21:40:34 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:18:14.752 21:40:34 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:18:14.752 21:40:34 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:18:14.752 21:40:34 -- bdev/bdev_raid.sh@212 -- # '[' raid1 '!=' raid1 ']' 00:18:14.752 21:40:34 -- bdev/bdev_raid.sh@216 -- # strip_size=0 00:18:14.752 21:40:34 -- bdev/bdev_raid.sh@219 -- # '[' false = true ']' 00:18:14.752 Process raid pid: 76581 00:18:14.752 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:18:14.752 21:40:34 -- bdev/bdev_raid.sh@222 -- # superblock_create_arg= 00:18:14.752 21:40:34 -- bdev/bdev_raid.sh@226 -- # raid_pid=76581 00:18:14.752 21:40:34 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 76581' 00:18:14.752 21:40:34 -- bdev/bdev_raid.sh@228 -- # waitforlisten 76581 /var/tmp/spdk-raid.sock 00:18:14.752 21:40:34 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:18:14.752 21:40:34 -- common/autotest_common.sh@829 -- # '[' -z 76581 ']' 00:18:14.752 21:40:34 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:18:14.752 21:40:34 -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:14.752 21:40:34 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:18:14.752 21:40:34 -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:14.752 21:40:34 -- common/autotest_common.sh@10 -- # set +x 00:18:14.752 [2024-12-06 21:40:34.946057] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
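Before any raid can be exercised, the harness boots a dedicated SPDK app and waits for its RPC socket, which is what the waitforlisten call above does. Roughly what that amounts to, as a simplified stand-in for the real helper in common/autotest_common.sh (the polling loop is illustrative; only the command line and socket path are taken from the trace):

    /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc \
        -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid &        # -L turns on bdev_raid debug logging
    raid_pid=$!
    # Poll until the server answers on the UNIX socket; rpc_get_methods is a
    # core RPC that succeeds as soon as the app is listening.
    until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock \
          rpc_get_methods >/dev/null 2>&1; do
        sleep 0.1
    done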
00:18:14.752 [2024-12-06 21:40:34.946789] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:14.752 [2024-12-06 21:40:35.112701] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:15.010 [2024-12-06 21:40:35.282287] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:15.010 [2024-12-06 21:40:35.445241] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:15.577 21:40:35 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:15.577 21:40:35 -- common/autotest_common.sh@862 -- # return 0 00:18:15.577 21:40:35 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:18:15.835 [2024-12-06 21:40:36.108936] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:15.835 [2024-12-06 21:40:36.109612] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:18:15.835 [2024-12-06 21:40:36.109670] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:15.835 [2024-12-06 21:40:36.109693] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:15.835 [2024-12-06 21:40:36.109704] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:18:15.835 [2024-12-06 21:40:36.109717] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:18:15.835 [2024-12-06 21:40:36.109728] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:18:15.835 [2024-12-06 21:40:36.109740] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:18:15.835 21:40:36 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:18:15.835 21:40:36 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:18:15.835 21:40:36 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:18:15.835 21:40:36 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:18:15.835 21:40:36 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:18:15.835 21:40:36 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:18:15.835 21:40:36 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:15.835 21:40:36 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:15.835 21:40:36 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:15.835 21:40:36 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:15.835 21:40:36 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:15.835 21:40:36 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:15.835 21:40:36 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:15.835 "name": "Existed_Raid", 00:18:15.835 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:15.835 "strip_size_kb": 0, 00:18:15.835 "state": "configuring", 00:18:15.835 "raid_level": "raid1", 00:18:15.835 "superblock": false, 00:18:15.835 "num_base_bdevs": 4, 00:18:15.835 "num_base_bdevs_discovered": 0, 00:18:15.835 "num_base_bdevs_operational": 4, 00:18:15.835 "base_bdevs_list": [ 00:18:15.835 { 00:18:15.835 "name": 
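The bdev_raid_create call above is issued before any of its four base bdevs exist, so the array must come up in "configuring" rather than "online"; that is exactly what the next verification asserts. The same check in isolation, as a hedged sketch with the $rpc shorthand from earlier (note that raid1 takes no -z strip size, matching the strip_size_kb of 0 reported below):

    $rpc bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid
    state=$($rpc bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid").state')
    [ "$state" = configuring ]   # flips to online only once all four bases are created and claimed

Each bdev_malloc_create that follows in the trace raises num_base_bdevs_discovered by one; the array transitions to online only after BaseBdev4 is created and claimed.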
"BaseBdev1", 00:18:15.835 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:15.835 "is_configured": false, 00:18:15.835 "data_offset": 0, 00:18:15.835 "data_size": 0 00:18:15.835 }, 00:18:15.835 { 00:18:15.836 "name": "BaseBdev2", 00:18:15.836 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:15.836 "is_configured": false, 00:18:15.836 "data_offset": 0, 00:18:15.836 "data_size": 0 00:18:15.836 }, 00:18:15.836 { 00:18:15.836 "name": "BaseBdev3", 00:18:15.836 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:15.836 "is_configured": false, 00:18:15.836 "data_offset": 0, 00:18:15.836 "data_size": 0 00:18:15.836 }, 00:18:15.836 { 00:18:15.836 "name": "BaseBdev4", 00:18:15.836 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:15.836 "is_configured": false, 00:18:15.836 "data_offset": 0, 00:18:15.836 "data_size": 0 00:18:15.836 } 00:18:15.836 ] 00:18:15.836 }' 00:18:15.836 21:40:36 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:15.836 21:40:36 -- common/autotest_common.sh@10 -- # set +x 00:18:16.403 21:40:36 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:18:16.403 [2024-12-06 21:40:36.885060] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:16.403 [2024-12-06 21:40:36.885262] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000006380 name Existed_Raid, state configuring 00:18:16.662 21:40:36 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:18:16.662 [2024-12-06 21:40:37.141168] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:16.662 [2024-12-06 21:40:37.141717] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:18:16.662 [2024-12-06 21:40:37.141746] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:16.662 [2024-12-06 21:40:37.141918] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:16.662 [2024-12-06 21:40:37.141935] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:18:16.662 [2024-12-06 21:40:37.142017] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:18:16.662 [2024-12-06 21:40:37.142032] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:18:16.662 [2024-12-06 21:40:37.142122] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:18:16.921 21:40:37 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:18:16.921 [2024-12-06 21:40:37.372911] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:16.921 BaseBdev1 00:18:16.921 21:40:37 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:18:16.921 21:40:37 -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:18:16.921 21:40:37 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:18:16.921 21:40:37 -- common/autotest_common.sh@899 -- # local i 00:18:16.921 21:40:37 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:18:16.921 21:40:37 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:18:16.921 21:40:37 -- common/autotest_common.sh@902 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:17.180 21:40:37 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:18:17.439 [ 00:18:17.439 { 00:18:17.439 "name": "BaseBdev1", 00:18:17.439 "aliases": [ 00:18:17.439 "2b8554e8-614e-4d81-b28b-3d96f3c6f0f4" 00:18:17.439 ], 00:18:17.439 "product_name": "Malloc disk", 00:18:17.439 "block_size": 512, 00:18:17.439 "num_blocks": 65536, 00:18:17.439 "uuid": "2b8554e8-614e-4d81-b28b-3d96f3c6f0f4", 00:18:17.439 "assigned_rate_limits": { 00:18:17.439 "rw_ios_per_sec": 0, 00:18:17.439 "rw_mbytes_per_sec": 0, 00:18:17.439 "r_mbytes_per_sec": 0, 00:18:17.439 "w_mbytes_per_sec": 0 00:18:17.439 }, 00:18:17.439 "claimed": true, 00:18:17.439 "claim_type": "exclusive_write", 00:18:17.439 "zoned": false, 00:18:17.439 "supported_io_types": { 00:18:17.439 "read": true, 00:18:17.439 "write": true, 00:18:17.439 "unmap": true, 00:18:17.439 "write_zeroes": true, 00:18:17.439 "flush": true, 00:18:17.439 "reset": true, 00:18:17.439 "compare": false, 00:18:17.439 "compare_and_write": false, 00:18:17.439 "abort": true, 00:18:17.439 "nvme_admin": false, 00:18:17.439 "nvme_io": false 00:18:17.439 }, 00:18:17.439 "memory_domains": [ 00:18:17.439 { 00:18:17.439 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:17.439 "dma_device_type": 2 00:18:17.439 } 00:18:17.439 ], 00:18:17.439 "driver_specific": {} 00:18:17.439 } 00:18:17.439 ] 00:18:17.439 21:40:37 -- common/autotest_common.sh@905 -- # return 0 00:18:17.439 21:40:37 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:18:17.439 21:40:37 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:18:17.439 21:40:37 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:18:17.439 21:40:37 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:18:17.439 21:40:37 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:18:17.439 21:40:37 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:18:17.439 21:40:37 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:17.439 21:40:37 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:17.439 21:40:37 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:17.439 21:40:37 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:17.439 21:40:37 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:17.439 21:40:37 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:17.698 21:40:38 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:17.698 "name": "Existed_Raid", 00:18:17.698 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:17.698 "strip_size_kb": 0, 00:18:17.698 "state": "configuring", 00:18:17.698 "raid_level": "raid1", 00:18:17.698 "superblock": false, 00:18:17.698 "num_base_bdevs": 4, 00:18:17.698 "num_base_bdevs_discovered": 1, 00:18:17.698 "num_base_bdevs_operational": 4, 00:18:17.698 "base_bdevs_list": [ 00:18:17.698 { 00:18:17.698 "name": "BaseBdev1", 00:18:17.698 "uuid": "2b8554e8-614e-4d81-b28b-3d96f3c6f0f4", 00:18:17.698 "is_configured": true, 00:18:17.698 "data_offset": 0, 00:18:17.698 "data_size": 65536 00:18:17.698 }, 00:18:17.698 { 00:18:17.698 "name": "BaseBdev2", 00:18:17.698 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:17.698 "is_configured": false, 00:18:17.698 "data_offset": 0, 00:18:17.698 "data_size": 0 00:18:17.698 }, 
00:18:17.698 { 00:18:17.698 "name": "BaseBdev3", 00:18:17.698 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:17.698 "is_configured": false, 00:18:17.698 "data_offset": 0, 00:18:17.698 "data_size": 0 00:18:17.698 }, 00:18:17.698 { 00:18:17.698 "name": "BaseBdev4", 00:18:17.698 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:17.698 "is_configured": false, 00:18:17.698 "data_offset": 0, 00:18:17.698 "data_size": 0 00:18:17.698 } 00:18:17.698 ] 00:18:17.698 }' 00:18:17.698 21:40:38 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:17.698 21:40:38 -- common/autotest_common.sh@10 -- # set +x 00:18:17.957 21:40:38 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:18:18.216 [2024-12-06 21:40:38.617229] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:18.216 [2024-12-06 21:40:38.617485] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000006680 name Existed_Raid, state configuring 00:18:18.216 21:40:38 -- bdev/bdev_raid.sh@244 -- # '[' false = true ']' 00:18:18.216 21:40:38 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:18:18.475 [2024-12-06 21:40:38.817325] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:18.475 [2024-12-06 21:40:38.819659] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:18.475 [2024-12-06 21:40:38.820070] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:18.475 [2024-12-06 21:40:38.820090] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:18:18.475 [2024-12-06 21:40:38.820175] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:18:18.475 [2024-12-06 21:40:38.820189] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:18:18.475 [2024-12-06 21:40:38.820264] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:18:18.475 21:40:38 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:18:18.475 21:40:38 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:18:18.475 21:40:38 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:18:18.475 21:40:38 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:18:18.475 21:40:38 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:18:18.475 21:40:38 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:18:18.475 21:40:38 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:18:18.475 21:40:38 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:18:18.475 21:40:38 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:18.475 21:40:38 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:18.475 21:40:38 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:18.475 21:40:38 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:18.475 21:40:38 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:18.475 21:40:38 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:18.735 21:40:39 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:18.735 "name": "Existed_Raid", 00:18:18.735 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:18:18.735 "strip_size_kb": 0, 00:18:18.735 "state": "configuring", 00:18:18.735 "raid_level": "raid1", 00:18:18.735 "superblock": false, 00:18:18.735 "num_base_bdevs": 4, 00:18:18.735 "num_base_bdevs_discovered": 1, 00:18:18.735 "num_base_bdevs_operational": 4, 00:18:18.735 "base_bdevs_list": [ 00:18:18.735 { 00:18:18.735 "name": "BaseBdev1", 00:18:18.735 "uuid": "2b8554e8-614e-4d81-b28b-3d96f3c6f0f4", 00:18:18.735 "is_configured": true, 00:18:18.736 "data_offset": 0, 00:18:18.736 "data_size": 65536 00:18:18.736 }, 00:18:18.736 { 00:18:18.736 "name": "BaseBdev2", 00:18:18.736 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:18.736 "is_configured": false, 00:18:18.736 "data_offset": 0, 00:18:18.736 "data_size": 0 00:18:18.736 }, 00:18:18.736 { 00:18:18.736 "name": "BaseBdev3", 00:18:18.736 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:18.736 "is_configured": false, 00:18:18.736 "data_offset": 0, 00:18:18.736 "data_size": 0 00:18:18.736 }, 00:18:18.736 { 00:18:18.736 "name": "BaseBdev4", 00:18:18.736 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:18.736 "is_configured": false, 00:18:18.736 "data_offset": 0, 00:18:18.736 "data_size": 0 00:18:18.736 } 00:18:18.736 ] 00:18:18.736 }' 00:18:18.736 21:40:39 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:18.736 21:40:39 -- common/autotest_common.sh@10 -- # set +x 00:18:18.995 21:40:39 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:18:19.254 [2024-12-06 21:40:39.696638] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:19.254 BaseBdev2 00:18:19.254 21:40:39 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:18:19.254 21:40:39 -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:18:19.254 21:40:39 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:18:19.254 21:40:39 -- common/autotest_common.sh@899 -- # local i 00:18:19.254 21:40:39 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:18:19.254 21:40:39 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:18:19.254 21:40:39 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:19.512 21:40:39 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:18:19.770 [ 00:18:19.770 { 00:18:19.770 "name": "BaseBdev2", 00:18:19.770 "aliases": [ 00:18:19.770 "9cd29957-0a75-4cce-a535-135a5cffe366" 00:18:19.770 ], 00:18:19.770 "product_name": "Malloc disk", 00:18:19.770 "block_size": 512, 00:18:19.770 "num_blocks": 65536, 00:18:19.770 "uuid": "9cd29957-0a75-4cce-a535-135a5cffe366", 00:18:19.770 "assigned_rate_limits": { 00:18:19.770 "rw_ios_per_sec": 0, 00:18:19.770 "rw_mbytes_per_sec": 0, 00:18:19.770 "r_mbytes_per_sec": 0, 00:18:19.770 "w_mbytes_per_sec": 0 00:18:19.770 }, 00:18:19.770 "claimed": true, 00:18:19.770 "claim_type": "exclusive_write", 00:18:19.770 "zoned": false, 00:18:19.770 "supported_io_types": { 00:18:19.770 "read": true, 00:18:19.770 "write": true, 00:18:19.770 "unmap": true, 00:18:19.770 "write_zeroes": true, 00:18:19.770 "flush": true, 00:18:19.770 "reset": true, 00:18:19.770 "compare": false, 00:18:19.770 "compare_and_write": false, 00:18:19.770 "abort": true, 00:18:19.770 "nvme_admin": false, 00:18:19.770 "nvme_io": false 00:18:19.770 }, 00:18:19.770 "memory_domains": [ 00:18:19.770 { 
00:18:19.770 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:19.770 "dma_device_type": 2 00:18:19.770 } 00:18:19.770 ], 00:18:19.770 "driver_specific": {} 00:18:19.770 } 00:18:19.770 ] 00:18:19.770 21:40:40 -- common/autotest_common.sh@905 -- # return 0 00:18:19.770 21:40:40 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:18:19.770 21:40:40 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:18:19.770 21:40:40 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:18:19.770 21:40:40 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:18:19.770 21:40:40 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:18:19.770 21:40:40 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:18:19.770 21:40:40 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:18:19.770 21:40:40 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:18:19.770 21:40:40 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:19.770 21:40:40 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:19.770 21:40:40 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:19.770 21:40:40 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:19.770 21:40:40 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:19.770 21:40:40 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:20.029 21:40:40 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:20.029 "name": "Existed_Raid", 00:18:20.029 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:20.029 "strip_size_kb": 0, 00:18:20.029 "state": "configuring", 00:18:20.029 "raid_level": "raid1", 00:18:20.029 "superblock": false, 00:18:20.029 "num_base_bdevs": 4, 00:18:20.029 "num_base_bdevs_discovered": 2, 00:18:20.029 "num_base_bdevs_operational": 4, 00:18:20.029 "base_bdevs_list": [ 00:18:20.029 { 00:18:20.029 "name": "BaseBdev1", 00:18:20.029 "uuid": "2b8554e8-614e-4d81-b28b-3d96f3c6f0f4", 00:18:20.029 "is_configured": true, 00:18:20.029 "data_offset": 0, 00:18:20.029 "data_size": 65536 00:18:20.029 }, 00:18:20.029 { 00:18:20.029 "name": "BaseBdev2", 00:18:20.029 "uuid": "9cd29957-0a75-4cce-a535-135a5cffe366", 00:18:20.029 "is_configured": true, 00:18:20.029 "data_offset": 0, 00:18:20.029 "data_size": 65536 00:18:20.029 }, 00:18:20.029 { 00:18:20.029 "name": "BaseBdev3", 00:18:20.029 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:20.029 "is_configured": false, 00:18:20.029 "data_offset": 0, 00:18:20.029 "data_size": 0 00:18:20.029 }, 00:18:20.029 { 00:18:20.029 "name": "BaseBdev4", 00:18:20.029 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:20.029 "is_configured": false, 00:18:20.029 "data_offset": 0, 00:18:20.029 "data_size": 0 00:18:20.029 } 00:18:20.029 ] 00:18:20.029 }' 00:18:20.029 21:40:40 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:20.029 21:40:40 -- common/autotest_common.sh@10 -- # set +x 00:18:20.287 21:40:40 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:18:20.546 [2024-12-06 21:40:40.976733] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:18:20.546 BaseBdev3 00:18:20.546 21:40:40 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3 00:18:20.546 21:40:40 -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:18:20.546 21:40:40 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:18:20.546 21:40:40 -- 
common/autotest_common.sh@899 -- # local i 00:18:20.546 21:40:40 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:18:20.546 21:40:40 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:18:20.546 21:40:40 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:20.805 21:40:41 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:18:21.064 [ 00:18:21.064 { 00:18:21.064 "name": "BaseBdev3", 00:18:21.064 "aliases": [ 00:18:21.064 "b481ae6d-87ff-4e4f-9d2d-56e067b8bd8d" 00:18:21.064 ], 00:18:21.064 "product_name": "Malloc disk", 00:18:21.064 "block_size": 512, 00:18:21.064 "num_blocks": 65536, 00:18:21.064 "uuid": "b481ae6d-87ff-4e4f-9d2d-56e067b8bd8d", 00:18:21.064 "assigned_rate_limits": { 00:18:21.064 "rw_ios_per_sec": 0, 00:18:21.064 "rw_mbytes_per_sec": 0, 00:18:21.064 "r_mbytes_per_sec": 0, 00:18:21.064 "w_mbytes_per_sec": 0 00:18:21.064 }, 00:18:21.064 "claimed": true, 00:18:21.064 "claim_type": "exclusive_write", 00:18:21.064 "zoned": false, 00:18:21.064 "supported_io_types": { 00:18:21.064 "read": true, 00:18:21.064 "write": true, 00:18:21.064 "unmap": true, 00:18:21.064 "write_zeroes": true, 00:18:21.064 "flush": true, 00:18:21.064 "reset": true, 00:18:21.064 "compare": false, 00:18:21.064 "compare_and_write": false, 00:18:21.064 "abort": true, 00:18:21.064 "nvme_admin": false, 00:18:21.064 "nvme_io": false 00:18:21.064 }, 00:18:21.064 "memory_domains": [ 00:18:21.064 { 00:18:21.064 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:21.064 "dma_device_type": 2 00:18:21.064 } 00:18:21.064 ], 00:18:21.064 "driver_specific": {} 00:18:21.064 } 00:18:21.064 ] 00:18:21.064 21:40:41 -- common/autotest_common.sh@905 -- # return 0 00:18:21.064 21:40:41 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:18:21.064 21:40:41 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:18:21.064 21:40:41 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:18:21.064 21:40:41 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:18:21.064 21:40:41 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:18:21.064 21:40:41 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:18:21.064 21:40:41 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:18:21.064 21:40:41 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:18:21.064 21:40:41 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:21.064 21:40:41 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:21.065 21:40:41 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:21.065 21:40:41 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:21.065 21:40:41 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:21.065 21:40:41 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:21.324 21:40:41 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:21.324 "name": "Existed_Raid", 00:18:21.324 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:21.324 "strip_size_kb": 0, 00:18:21.324 "state": "configuring", 00:18:21.324 "raid_level": "raid1", 00:18:21.324 "superblock": false, 00:18:21.324 "num_base_bdevs": 4, 00:18:21.324 "num_base_bdevs_discovered": 3, 00:18:21.324 "num_base_bdevs_operational": 4, 00:18:21.324 "base_bdevs_list": [ 00:18:21.324 { 00:18:21.324 "name": "BaseBdev1", 
00:18:21.324 "uuid": "2b8554e8-614e-4d81-b28b-3d96f3c6f0f4", 00:18:21.324 "is_configured": true, 00:18:21.324 "data_offset": 0, 00:18:21.324 "data_size": 65536 00:18:21.324 }, 00:18:21.324 { 00:18:21.324 "name": "BaseBdev2", 00:18:21.324 "uuid": "9cd29957-0a75-4cce-a535-135a5cffe366", 00:18:21.324 "is_configured": true, 00:18:21.324 "data_offset": 0, 00:18:21.324 "data_size": 65536 00:18:21.324 }, 00:18:21.324 { 00:18:21.324 "name": "BaseBdev3", 00:18:21.324 "uuid": "b481ae6d-87ff-4e4f-9d2d-56e067b8bd8d", 00:18:21.324 "is_configured": true, 00:18:21.324 "data_offset": 0, 00:18:21.324 "data_size": 65536 00:18:21.324 }, 00:18:21.324 { 00:18:21.324 "name": "BaseBdev4", 00:18:21.324 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:21.324 "is_configured": false, 00:18:21.324 "data_offset": 0, 00:18:21.324 "data_size": 0 00:18:21.324 } 00:18:21.324 ] 00:18:21.324 }' 00:18:21.324 21:40:41 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:21.324 21:40:41 -- common/autotest_common.sh@10 -- # set +x 00:18:21.584 21:40:42 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:18:21.844 [2024-12-06 21:40:42.329289] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:18:21.844 [2024-12-06 21:40:42.329613] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x516000006f80 00:18:21.844 [2024-12-06 21:40:42.329669] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:18:21.844 [2024-12-06 21:40:42.329935] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000005790 00:18:21.844 [2024-12-06 21:40:42.330431] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x516000006f80 00:18:21.844 [2024-12-06 21:40:42.330493] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x516000006f80 00:18:21.844 [2024-12-06 21:40:42.331026] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:21.844 BaseBdev4 00:18:22.104 21:40:42 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev4 00:18:22.104 21:40:42 -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev4 00:18:22.104 21:40:42 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:18:22.104 21:40:42 -- common/autotest_common.sh@899 -- # local i 00:18:22.104 21:40:42 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:18:22.104 21:40:42 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:18:22.104 21:40:42 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:22.104 21:40:42 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:18:22.364 [ 00:18:22.364 { 00:18:22.364 "name": "BaseBdev4", 00:18:22.364 "aliases": [ 00:18:22.364 "e8a66d5f-13df-4de8-91ad-d71752373d69" 00:18:22.364 ], 00:18:22.364 "product_name": "Malloc disk", 00:18:22.364 "block_size": 512, 00:18:22.364 "num_blocks": 65536, 00:18:22.364 "uuid": "e8a66d5f-13df-4de8-91ad-d71752373d69", 00:18:22.364 "assigned_rate_limits": { 00:18:22.364 "rw_ios_per_sec": 0, 00:18:22.364 "rw_mbytes_per_sec": 0, 00:18:22.364 "r_mbytes_per_sec": 0, 00:18:22.364 "w_mbytes_per_sec": 0 00:18:22.364 }, 00:18:22.364 "claimed": true, 00:18:22.364 "claim_type": "exclusive_write", 00:18:22.364 "zoned": false, 00:18:22.364 "supported_io_types": { 
00:18:22.364 "read": true, 00:18:22.364 "write": true, 00:18:22.364 "unmap": true, 00:18:22.364 "write_zeroes": true, 00:18:22.364 "flush": true, 00:18:22.364 "reset": true, 00:18:22.364 "compare": false, 00:18:22.364 "compare_and_write": false, 00:18:22.364 "abort": true, 00:18:22.364 "nvme_admin": false, 00:18:22.364 "nvme_io": false 00:18:22.364 }, 00:18:22.364 "memory_domains": [ 00:18:22.364 { 00:18:22.364 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:22.364 "dma_device_type": 2 00:18:22.364 } 00:18:22.364 ], 00:18:22.364 "driver_specific": {} 00:18:22.364 } 00:18:22.364 ] 00:18:22.364 21:40:42 -- common/autotest_common.sh@905 -- # return 0 00:18:22.364 21:40:42 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:18:22.364 21:40:42 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:18:22.364 21:40:42 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:18:22.364 21:40:42 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:18:22.364 21:40:42 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:18:22.364 21:40:42 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:18:22.364 21:40:42 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:18:22.364 21:40:42 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:18:22.364 21:40:42 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:22.364 21:40:42 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:22.364 21:40:42 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:22.364 21:40:42 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:22.364 21:40:42 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:22.364 21:40:42 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:22.630 21:40:43 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:22.630 "name": "Existed_Raid", 00:18:22.630 "uuid": "639eaa11-dd0b-4e6d-8799-8908f168755f", 00:18:22.630 "strip_size_kb": 0, 00:18:22.630 "state": "online", 00:18:22.630 "raid_level": "raid1", 00:18:22.630 "superblock": false, 00:18:22.630 "num_base_bdevs": 4, 00:18:22.630 "num_base_bdevs_discovered": 4, 00:18:22.630 "num_base_bdevs_operational": 4, 00:18:22.630 "base_bdevs_list": [ 00:18:22.630 { 00:18:22.630 "name": "BaseBdev1", 00:18:22.630 "uuid": "2b8554e8-614e-4d81-b28b-3d96f3c6f0f4", 00:18:22.630 "is_configured": true, 00:18:22.630 "data_offset": 0, 00:18:22.630 "data_size": 65536 00:18:22.630 }, 00:18:22.630 { 00:18:22.630 "name": "BaseBdev2", 00:18:22.630 "uuid": "9cd29957-0a75-4cce-a535-135a5cffe366", 00:18:22.630 "is_configured": true, 00:18:22.630 "data_offset": 0, 00:18:22.630 "data_size": 65536 00:18:22.630 }, 00:18:22.630 { 00:18:22.630 "name": "BaseBdev3", 00:18:22.630 "uuid": "b481ae6d-87ff-4e4f-9d2d-56e067b8bd8d", 00:18:22.630 "is_configured": true, 00:18:22.630 "data_offset": 0, 00:18:22.630 "data_size": 65536 00:18:22.630 }, 00:18:22.630 { 00:18:22.631 "name": "BaseBdev4", 00:18:22.631 "uuid": "e8a66d5f-13df-4de8-91ad-d71752373d69", 00:18:22.631 "is_configured": true, 00:18:22.631 "data_offset": 0, 00:18:22.631 "data_size": 65536 00:18:22.631 } 00:18:22.631 ] 00:18:22.631 }' 00:18:22.631 21:40:43 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:22.631 21:40:43 -- common/autotest_common.sh@10 -- # set +x 00:18:22.894 21:40:43 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:18:23.152 [2024-12-06 21:40:43.461782] 
bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:18:23.152 21:40:43 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:18:23.152 21:40:43 -- bdev/bdev_raid.sh@264 -- # has_redundancy raid1 00:18:23.152 21:40:43 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:18:23.152 21:40:43 -- bdev/bdev_raid.sh@196 -- # return 0 00:18:23.152 21:40:43 -- bdev/bdev_raid.sh@267 -- # expected_state=online 00:18:23.152 21:40:43 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:18:23.152 21:40:43 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:18:23.152 21:40:43 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:18:23.152 21:40:43 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:18:23.152 21:40:43 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:18:23.152 21:40:43 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:18:23.152 21:40:43 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:23.153 21:40:43 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:23.153 21:40:43 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:23.153 21:40:43 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:23.153 21:40:43 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:23.153 21:40:43 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:23.412 21:40:43 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:23.412 "name": "Existed_Raid", 00:18:23.412 "uuid": "639eaa11-dd0b-4e6d-8799-8908f168755f", 00:18:23.412 "strip_size_kb": 0, 00:18:23.412 "state": "online", 00:18:23.412 "raid_level": "raid1", 00:18:23.412 "superblock": false, 00:18:23.412 "num_base_bdevs": 4, 00:18:23.412 "num_base_bdevs_discovered": 3, 00:18:23.412 "num_base_bdevs_operational": 3, 00:18:23.412 "base_bdevs_list": [ 00:18:23.412 { 00:18:23.412 "name": null, 00:18:23.412 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:23.412 "is_configured": false, 00:18:23.412 "data_offset": 0, 00:18:23.412 "data_size": 65536 00:18:23.412 }, 00:18:23.412 { 00:18:23.412 "name": "BaseBdev2", 00:18:23.412 "uuid": "9cd29957-0a75-4cce-a535-135a5cffe366", 00:18:23.412 "is_configured": true, 00:18:23.412 "data_offset": 0, 00:18:23.412 "data_size": 65536 00:18:23.412 }, 00:18:23.412 { 00:18:23.412 "name": "BaseBdev3", 00:18:23.412 "uuid": "b481ae6d-87ff-4e4f-9d2d-56e067b8bd8d", 00:18:23.412 "is_configured": true, 00:18:23.412 "data_offset": 0, 00:18:23.412 "data_size": 65536 00:18:23.412 }, 00:18:23.412 { 00:18:23.412 "name": "BaseBdev4", 00:18:23.412 "uuid": "e8a66d5f-13df-4de8-91ad-d71752373d69", 00:18:23.412 "is_configured": true, 00:18:23.412 "data_offset": 0, 00:18:23.412 "data_size": 65536 00:18:23.412 } 00:18:23.412 ] 00:18:23.412 }' 00:18:23.412 21:40:43 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:23.412 21:40:43 -- common/autotest_common.sh@10 -- # set +x 00:18:23.671 21:40:44 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:18:23.671 21:40:44 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:18:23.671 21:40:44 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:23.671 21:40:44 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:18:23.930 21:40:44 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:18:23.930 21:40:44 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:18:23.930 21:40:44 -- bdev/bdev_raid.sh@279 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:18:24.189 [2024-12-06 21:40:44.539762] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:18:24.189 21:40:44 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:18:24.189 21:40:44 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:18:24.189 21:40:44 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:24.189 21:40:44 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:18:24.448 21:40:44 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:18:24.448 21:40:44 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:18:24.448 21:40:44 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:18:24.708 [2024-12-06 21:40:45.067218] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:18:24.708 21:40:45 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:18:24.708 21:40:45 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:18:24.708 21:40:45 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:24.708 21:40:45 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:18:24.966 21:40:45 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:18:24.966 21:40:45 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:18:24.966 21:40:45 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:18:25.225 [2024-12-06 21:40:45.540857] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:18:25.225 [2024-12-06 21:40:45.541072] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:25.225 [2024-12-06 21:40:45.541143] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:25.225 [2024-12-06 21:40:45.612468] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:25.225 [2024-12-06 21:40:45.612518] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000006f80 name Existed_Raid, state offline 00:18:25.225 21:40:45 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:18:25.225 21:40:45 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:18:25.225 21:40:45 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:25.225 21:40:45 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:18:25.483 21:40:45 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:18:25.483 21:40:45 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:18:25.483 21:40:45 -- bdev/bdev_raid.sh@287 -- # killprocess 76581 00:18:25.483 21:40:45 -- common/autotest_common.sh@936 -- # '[' -z 76581 ']' 00:18:25.483 21:40:45 -- common/autotest_common.sh@940 -- # kill -0 76581 00:18:25.483 21:40:45 -- common/autotest_common.sh@941 -- # uname 00:18:25.483 21:40:45 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:25.483 21:40:45 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 76581 00:18:25.483 killing process with pid 76581 00:18:25.483 21:40:45 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:18:25.484 21:40:45 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:18:25.484 21:40:45 -- 
common/autotest_common.sh@954 -- # echo 'killing process with pid 76581' 00:18:25.484 21:40:45 -- common/autotest_common.sh@955 -- # kill 76581 00:18:25.484 [2024-12-06 21:40:45.871658] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:25.484 21:40:45 -- common/autotest_common.sh@960 -- # wait 76581 00:18:25.484 [2024-12-06 21:40:45.871786] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:26.446 21:40:46 -- bdev/bdev_raid.sh@289 -- # return 0 00:18:26.446 00:18:26.446 real 0m12.025s 00:18:26.446 user 0m20.184s 00:18:26.446 sys 0m1.784s 00:18:26.446 21:40:46 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:18:26.446 21:40:46 -- common/autotest_common.sh@10 -- # set +x 00:18:26.446 ************************************ 00:18:26.446 END TEST raid_state_function_test 00:18:26.446 ************************************ 00:18:26.740 21:40:46 -- bdev/bdev_raid.sh@728 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 4 true 00:18:26.740 21:40:46 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:18:26.740 21:40:46 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:18:26.740 21:40:46 -- common/autotest_common.sh@10 -- # set +x 00:18:26.740 ************************************ 00:18:26.740 START TEST raid_state_function_test_sb 00:18:26.740 ************************************ 00:18:26.740 21:40:46 -- common/autotest_common.sh@1114 -- # raid_state_function_test raid1 4 true 00:18:26.740 21:40:46 -- bdev/bdev_raid.sh@202 -- # local raid_level=raid1 00:18:26.740 21:40:46 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=4 00:18:26.741 21:40:46 -- bdev/bdev_raid.sh@204 -- # local superblock=true 00:18:26.741 21:40:46 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:18:26.741 21:40:46 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:18:26.741 21:40:46 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:18:26.741 21:40:46 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev1 00:18:26.741 21:40:46 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:18:26.741 21:40:46 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:18:26.741 21:40:46 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev2 00:18:26.741 21:40:46 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:18:26.741 21:40:46 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:18:26.741 21:40:46 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev3 00:18:26.741 21:40:46 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:18:26.741 21:40:46 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:18:26.741 21:40:46 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev4 00:18:26.741 21:40:46 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:18:26.741 21:40:46 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:18:26.741 21:40:46 -- bdev/bdev_raid.sh@206 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:18:26.741 21:40:46 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:18:26.741 21:40:46 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:18:26.741 21:40:46 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:18:26.741 21:40:46 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:18:26.741 21:40:46 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:18:26.741 21:40:46 -- bdev/bdev_raid.sh@212 -- # '[' raid1 '!=' raid1 ']' 00:18:26.741 21:40:46 -- bdev/bdev_raid.sh@216 -- # strip_size=0 00:18:26.741 21:40:46 -- bdev/bdev_raid.sh@219 -- # '[' true = true ']' 00:18:26.741 21:40:46 -- bdev/bdev_raid.sh@220 -- # superblock_create_arg=-s 00:18:26.741 21:40:46 -- 
bdev/bdev_raid.sh@226 -- # raid_pid=76975 00:18:26.741 Process raid pid: 76975 00:18:26.741 21:40:46 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 76975' 00:18:26.741 21:40:46 -- bdev/bdev_raid.sh@228 -- # waitforlisten 76975 /var/tmp/spdk-raid.sock 00:18:26.741 21:40:46 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:18:26.741 21:40:46 -- common/autotest_common.sh@829 -- # '[' -z 76975 ']' 00:18:26.741 21:40:46 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:18:26.741 21:40:46 -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:26.741 21:40:46 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:18:26.741 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:18:26.741 21:40:46 -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:26.741 21:40:46 -- common/autotest_common.sh@10 -- # set +x 00:18:26.741 [2024-12-06 21:40:47.034854] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:18:26.741 [2024-12-06 21:40:47.035229] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:26.741 [2024-12-06 21:40:47.201223] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:27.000 [2024-12-06 21:40:47.375069] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:27.259 [2024-12-06 21:40:47.542292] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:27.518 21:40:47 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:27.518 21:40:47 -- common/autotest_common.sh@862 -- # return 0 00:18:27.518 21:40:47 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:18:27.777 [2024-12-06 21:40:48.067644] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:27.777 [2024-12-06 21:40:48.067711] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:18:27.777 [2024-12-06 21:40:48.067740] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:27.777 [2024-12-06 21:40:48.067754] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:27.777 [2024-12-06 21:40:48.067765] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:18:27.777 [2024-12-06 21:40:48.067777] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:18:27.777 [2024-12-06 21:40:48.067785] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:18:27.777 [2024-12-06 21:40:48.067797] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:18:27.777 21:40:48 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:18:27.777 21:40:48 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:18:27.777 21:40:48 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:18:27.777 21:40:48 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 
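(For orientation: the trace above just created Existed_Raid with the -s superblock flag before any of its base bdevs exist, which is allowed and leaves the array in the "configuring" state. A minimal stand-alone sketch of the same check, assuming a bdev_svc app is already listening on /var/tmp/spdk-raid.sock; the rpc shell variable is our shorthand, not part of the test:)

    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    # Register the raid first; it stays "configuring" until all four base bdevs appear.
    $rpc bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid
    # Same query verify_raid_bdev_state uses to read the state back:
    $rpc bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid") | .state'
    # Expected output at this point: configuring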
00:18:27.777 21:40:48 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:18:27.777 21:40:48 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:18:27.777 21:40:48 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:27.777 21:40:48 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:27.777 21:40:48 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:27.777 21:40:48 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:27.778 21:40:48 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:27.778 21:40:48 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:28.037 21:40:48 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:28.037 "name": "Existed_Raid", 00:18:28.037 "uuid": "f4c2786b-db32-484f-a8ff-30dd7535f1ba", 00:18:28.037 "strip_size_kb": 0, 00:18:28.037 "state": "configuring", 00:18:28.037 "raid_level": "raid1", 00:18:28.037 "superblock": true, 00:18:28.037 "num_base_bdevs": 4, 00:18:28.037 "num_base_bdevs_discovered": 0, 00:18:28.037 "num_base_bdevs_operational": 4, 00:18:28.037 "base_bdevs_list": [ 00:18:28.037 { 00:18:28.037 "name": "BaseBdev1", 00:18:28.037 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:28.037 "is_configured": false, 00:18:28.037 "data_offset": 0, 00:18:28.037 "data_size": 0 00:18:28.037 }, 00:18:28.037 { 00:18:28.037 "name": "BaseBdev2", 00:18:28.037 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:28.037 "is_configured": false, 00:18:28.037 "data_offset": 0, 00:18:28.037 "data_size": 0 00:18:28.037 }, 00:18:28.037 { 00:18:28.037 "name": "BaseBdev3", 00:18:28.037 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:28.037 "is_configured": false, 00:18:28.037 "data_offset": 0, 00:18:28.037 "data_size": 0 00:18:28.037 }, 00:18:28.037 { 00:18:28.037 "name": "BaseBdev4", 00:18:28.037 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:28.037 "is_configured": false, 00:18:28.037 "data_offset": 0, 00:18:28.037 "data_size": 0 00:18:28.037 } 00:18:28.037 ] 00:18:28.037 }' 00:18:28.037 21:40:48 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:28.037 21:40:48 -- common/autotest_common.sh@10 -- # set +x 00:18:28.296 21:40:48 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:18:28.556 [2024-12-06 21:40:48.795711] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:28.556 [2024-12-06 21:40:48.795754] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000006380 name Existed_Raid, state configuring 00:18:28.556 21:40:48 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:18:28.556 [2024-12-06 21:40:49.003847] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:28.556 [2024-12-06 21:40:49.003932] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:18:28.556 [2024-12-06 21:40:49.003944] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:28.556 [2024-12-06 21:40:49.003957] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:28.556 [2024-12-06 21:40:49.003966] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:18:28.556 [2024-12-06 21:40:49.003977] 
bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:18:28.556 [2024-12-06 21:40:49.003985] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:18:28.556 [2024-12-06 21:40:49.003997] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:18:28.556 21:40:49 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:18:28.816 [2024-12-06 21:40:49.283080] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:28.816 BaseBdev1 00:18:28.816 21:40:49 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:18:28.816 21:40:49 -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:18:28.816 21:40:49 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:18:28.816 21:40:49 -- common/autotest_common.sh@899 -- # local i 00:18:28.816 21:40:49 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:18:28.816 21:40:49 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:18:28.816 21:40:49 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:29.075 21:40:49 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:18:29.334 [ 00:18:29.334 { 00:18:29.334 "name": "BaseBdev1", 00:18:29.334 "aliases": [ 00:18:29.334 "8a6ac6d3-7181-4b3d-abec-3a533bced0e0" 00:18:29.334 ], 00:18:29.334 "product_name": "Malloc disk", 00:18:29.334 "block_size": 512, 00:18:29.334 "num_blocks": 65536, 00:18:29.334 "uuid": "8a6ac6d3-7181-4b3d-abec-3a533bced0e0", 00:18:29.334 "assigned_rate_limits": { 00:18:29.334 "rw_ios_per_sec": 0, 00:18:29.334 "rw_mbytes_per_sec": 0, 00:18:29.334 "r_mbytes_per_sec": 0, 00:18:29.334 "w_mbytes_per_sec": 0 00:18:29.334 }, 00:18:29.334 "claimed": true, 00:18:29.334 "claim_type": "exclusive_write", 00:18:29.334 "zoned": false, 00:18:29.334 "supported_io_types": { 00:18:29.334 "read": true, 00:18:29.334 "write": true, 00:18:29.334 "unmap": true, 00:18:29.334 "write_zeroes": true, 00:18:29.334 "flush": true, 00:18:29.334 "reset": true, 00:18:29.334 "compare": false, 00:18:29.334 "compare_and_write": false, 00:18:29.334 "abort": true, 00:18:29.334 "nvme_admin": false, 00:18:29.334 "nvme_io": false 00:18:29.334 }, 00:18:29.334 "memory_domains": [ 00:18:29.334 { 00:18:29.334 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:29.334 "dma_device_type": 2 00:18:29.334 } 00:18:29.334 ], 00:18:29.334 "driver_specific": {} 00:18:29.334 } 00:18:29.334 ] 00:18:29.334 21:40:49 -- common/autotest_common.sh@905 -- # return 0 00:18:29.334 21:40:49 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:18:29.334 21:40:49 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:18:29.334 21:40:49 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:18:29.334 21:40:49 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:18:29.334 21:40:49 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:18:29.334 21:40:49 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:18:29.335 21:40:49 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:29.335 21:40:49 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:29.335 21:40:49 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:29.335 21:40:49 -- bdev/bdev_raid.sh@125 
-- # local tmp 00:18:29.335 21:40:49 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:29.335 21:40:49 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:29.593 21:40:49 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:29.593 "name": "Existed_Raid", 00:18:29.593 "uuid": "0d8442a4-ccc7-4fe6-972e-68a1b389cd4d", 00:18:29.593 "strip_size_kb": 0, 00:18:29.593 "state": "configuring", 00:18:29.593 "raid_level": "raid1", 00:18:29.593 "superblock": true, 00:18:29.593 "num_base_bdevs": 4, 00:18:29.593 "num_base_bdevs_discovered": 1, 00:18:29.593 "num_base_bdevs_operational": 4, 00:18:29.593 "base_bdevs_list": [ 00:18:29.593 { 00:18:29.593 "name": "BaseBdev1", 00:18:29.593 "uuid": "8a6ac6d3-7181-4b3d-abec-3a533bced0e0", 00:18:29.593 "is_configured": true, 00:18:29.593 "data_offset": 2048, 00:18:29.593 "data_size": 63488 00:18:29.593 }, 00:18:29.593 { 00:18:29.593 "name": "BaseBdev2", 00:18:29.593 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:29.593 "is_configured": false, 00:18:29.593 "data_offset": 0, 00:18:29.593 "data_size": 0 00:18:29.593 }, 00:18:29.593 { 00:18:29.593 "name": "BaseBdev3", 00:18:29.593 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:29.593 "is_configured": false, 00:18:29.593 "data_offset": 0, 00:18:29.593 "data_size": 0 00:18:29.593 }, 00:18:29.593 { 00:18:29.593 "name": "BaseBdev4", 00:18:29.593 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:29.593 "is_configured": false, 00:18:29.593 "data_offset": 0, 00:18:29.593 "data_size": 0 00:18:29.593 } 00:18:29.593 ] 00:18:29.593 }' 00:18:29.593 21:40:49 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:29.593 21:40:49 -- common/autotest_common.sh@10 -- # set +x 00:18:29.852 21:40:50 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:18:30.110 [2024-12-06 21:40:50.439671] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:30.110 [2024-12-06 21:40:50.439736] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000006680 name Existed_Raid, state configuring 00:18:30.110 21:40:50 -- bdev/bdev_raid.sh@244 -- # '[' true = true ']' 00:18:30.110 21:40:50 -- bdev/bdev_raid.sh@246 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:18:30.369 21:40:50 -- bdev/bdev_raid.sh@247 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:18:30.628 BaseBdev1 00:18:30.628 21:40:50 -- bdev/bdev_raid.sh@248 -- # waitforbdev BaseBdev1 00:18:30.628 21:40:50 -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:18:30.628 21:40:50 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:18:30.628 21:40:50 -- common/autotest_common.sh@899 -- # local i 00:18:30.628 21:40:50 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:18:30.628 21:40:50 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:18:30.628 21:40:50 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:30.886 21:40:51 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:18:30.887 [ 00:18:30.887 { 00:18:30.887 "name": "BaseBdev1", 00:18:30.887 "aliases": [ 00:18:30.887 "2c2981db-42e7-4228-882c-65e75dd5d1cc" 00:18:30.887 
], 00:18:30.887 "product_name": "Malloc disk", 00:18:30.887 "block_size": 512, 00:18:30.887 "num_blocks": 65536, 00:18:30.887 "uuid": "2c2981db-42e7-4228-882c-65e75dd5d1cc", 00:18:30.887 "assigned_rate_limits": { 00:18:30.887 "rw_ios_per_sec": 0, 00:18:30.887 "rw_mbytes_per_sec": 0, 00:18:30.887 "r_mbytes_per_sec": 0, 00:18:30.887 "w_mbytes_per_sec": 0 00:18:30.887 }, 00:18:30.887 "claimed": false, 00:18:30.887 "zoned": false, 00:18:30.887 "supported_io_types": { 00:18:30.887 "read": true, 00:18:30.887 "write": true, 00:18:30.887 "unmap": true, 00:18:30.887 "write_zeroes": true, 00:18:30.887 "flush": true, 00:18:30.887 "reset": true, 00:18:30.887 "compare": false, 00:18:30.887 "compare_and_write": false, 00:18:30.887 "abort": true, 00:18:30.887 "nvme_admin": false, 00:18:30.887 "nvme_io": false 00:18:30.887 }, 00:18:30.887 "memory_domains": [ 00:18:30.887 { 00:18:30.887 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:30.887 "dma_device_type": 2 00:18:30.887 } 00:18:30.887 ], 00:18:30.887 "driver_specific": {} 00:18:30.887 } 00:18:30.887 ] 00:18:30.887 21:40:51 -- common/autotest_common.sh@905 -- # return 0 00:18:30.887 21:40:51 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:18:31.146 [2024-12-06 21:40:51.535529] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:31.146 [2024-12-06 21:40:51.537494] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:31.146 [2024-12-06 21:40:51.537569] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:31.146 [2024-12-06 21:40:51.537583] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:18:31.146 [2024-12-06 21:40:51.537597] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:18:31.146 [2024-12-06 21:40:51.537606] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:18:31.146 [2024-12-06 21:40:51.537620] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:18:31.146 21:40:51 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:18:31.146 21:40:51 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:18:31.146 21:40:51 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:18:31.146 21:40:51 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:18:31.146 21:40:51 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:18:31.146 21:40:51 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:18:31.146 21:40:51 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:18:31.146 21:40:51 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:18:31.146 21:40:51 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:31.146 21:40:51 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:31.146 21:40:51 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:31.146 21:40:51 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:31.146 21:40:51 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:31.146 21:40:51 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:31.406 21:40:51 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:31.406 "name": "Existed_Raid", 
00:18:31.406 "uuid": "2ffb73e6-c423-4e9d-a78c-2c77ed9574a3", 00:18:31.406 "strip_size_kb": 0, 00:18:31.406 "state": "configuring", 00:18:31.406 "raid_level": "raid1", 00:18:31.406 "superblock": true, 00:18:31.406 "num_base_bdevs": 4, 00:18:31.406 "num_base_bdevs_discovered": 1, 00:18:31.406 "num_base_bdevs_operational": 4, 00:18:31.406 "base_bdevs_list": [ 00:18:31.406 { 00:18:31.406 "name": "BaseBdev1", 00:18:31.406 "uuid": "2c2981db-42e7-4228-882c-65e75dd5d1cc", 00:18:31.406 "is_configured": true, 00:18:31.406 "data_offset": 2048, 00:18:31.406 "data_size": 63488 00:18:31.406 }, 00:18:31.406 { 00:18:31.406 "name": "BaseBdev2", 00:18:31.406 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:31.406 "is_configured": false, 00:18:31.406 "data_offset": 0, 00:18:31.406 "data_size": 0 00:18:31.406 }, 00:18:31.406 { 00:18:31.406 "name": "BaseBdev3", 00:18:31.406 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:31.406 "is_configured": false, 00:18:31.406 "data_offset": 0, 00:18:31.406 "data_size": 0 00:18:31.406 }, 00:18:31.406 { 00:18:31.406 "name": "BaseBdev4", 00:18:31.406 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:31.406 "is_configured": false, 00:18:31.406 "data_offset": 0, 00:18:31.406 "data_size": 0 00:18:31.406 } 00:18:31.406 ] 00:18:31.406 }' 00:18:31.406 21:40:51 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:31.406 21:40:51 -- common/autotest_common.sh@10 -- # set +x 00:18:31.664 21:40:52 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:18:31.921 [2024-12-06 21:40:52.344226] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:31.921 BaseBdev2 00:18:31.921 21:40:52 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:18:31.921 21:40:52 -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:18:31.921 21:40:52 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:18:31.921 21:40:52 -- common/autotest_common.sh@899 -- # local i 00:18:31.921 21:40:52 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:18:31.921 21:40:52 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:18:31.921 21:40:52 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:32.179 21:40:52 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:18:32.437 [ 00:18:32.437 { 00:18:32.437 "name": "BaseBdev2", 00:18:32.437 "aliases": [ 00:18:32.437 "9e186cdf-34b6-434c-a733-719cf62eca01" 00:18:32.437 ], 00:18:32.437 "product_name": "Malloc disk", 00:18:32.437 "block_size": 512, 00:18:32.437 "num_blocks": 65536, 00:18:32.437 "uuid": "9e186cdf-34b6-434c-a733-719cf62eca01", 00:18:32.437 "assigned_rate_limits": { 00:18:32.437 "rw_ios_per_sec": 0, 00:18:32.437 "rw_mbytes_per_sec": 0, 00:18:32.437 "r_mbytes_per_sec": 0, 00:18:32.437 "w_mbytes_per_sec": 0 00:18:32.437 }, 00:18:32.437 "claimed": true, 00:18:32.437 "claim_type": "exclusive_write", 00:18:32.437 "zoned": false, 00:18:32.437 "supported_io_types": { 00:18:32.437 "read": true, 00:18:32.437 "write": true, 00:18:32.437 "unmap": true, 00:18:32.437 "write_zeroes": true, 00:18:32.437 "flush": true, 00:18:32.437 "reset": true, 00:18:32.437 "compare": false, 00:18:32.437 "compare_and_write": false, 00:18:32.437 "abort": true, 00:18:32.437 "nvme_admin": false, 00:18:32.437 "nvme_io": false 00:18:32.437 }, 00:18:32.437 
"memory_domains": [ 00:18:32.437 { 00:18:32.437 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:32.437 "dma_device_type": 2 00:18:32.437 } 00:18:32.437 ], 00:18:32.437 "driver_specific": {} 00:18:32.437 } 00:18:32.437 ] 00:18:32.437 21:40:52 -- common/autotest_common.sh@905 -- # return 0 00:18:32.437 21:40:52 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:18:32.437 21:40:52 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:18:32.437 21:40:52 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:18:32.437 21:40:52 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:18:32.437 21:40:52 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:18:32.437 21:40:52 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:18:32.437 21:40:52 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:18:32.437 21:40:52 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:18:32.437 21:40:52 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:32.437 21:40:52 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:32.437 21:40:52 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:32.437 21:40:52 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:32.437 21:40:52 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:32.437 21:40:52 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:32.695 21:40:52 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:32.695 "name": "Existed_Raid", 00:18:32.695 "uuid": "2ffb73e6-c423-4e9d-a78c-2c77ed9574a3", 00:18:32.695 "strip_size_kb": 0, 00:18:32.695 "state": "configuring", 00:18:32.695 "raid_level": "raid1", 00:18:32.695 "superblock": true, 00:18:32.695 "num_base_bdevs": 4, 00:18:32.695 "num_base_bdevs_discovered": 2, 00:18:32.696 "num_base_bdevs_operational": 4, 00:18:32.696 "base_bdevs_list": [ 00:18:32.696 { 00:18:32.696 "name": "BaseBdev1", 00:18:32.696 "uuid": "2c2981db-42e7-4228-882c-65e75dd5d1cc", 00:18:32.696 "is_configured": true, 00:18:32.696 "data_offset": 2048, 00:18:32.696 "data_size": 63488 00:18:32.696 }, 00:18:32.696 { 00:18:32.696 "name": "BaseBdev2", 00:18:32.696 "uuid": "9e186cdf-34b6-434c-a733-719cf62eca01", 00:18:32.696 "is_configured": true, 00:18:32.696 "data_offset": 2048, 00:18:32.696 "data_size": 63488 00:18:32.696 }, 00:18:32.696 { 00:18:32.696 "name": "BaseBdev3", 00:18:32.696 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:32.696 "is_configured": false, 00:18:32.696 "data_offset": 0, 00:18:32.696 "data_size": 0 00:18:32.696 }, 00:18:32.696 { 00:18:32.696 "name": "BaseBdev4", 00:18:32.696 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:32.696 "is_configured": false, 00:18:32.696 "data_offset": 0, 00:18:32.696 "data_size": 0 00:18:32.696 } 00:18:32.696 ] 00:18:32.696 }' 00:18:32.696 21:40:52 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:32.696 21:40:52 -- common/autotest_common.sh@10 -- # set +x 00:18:32.953 21:40:53 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:18:33.211 [2024-12-06 21:40:53.531398] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:18:33.211 BaseBdev3 00:18:33.211 21:40:53 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3 00:18:33.211 21:40:53 -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:18:33.211 21:40:53 -- common/autotest_common.sh@898 -- # local 
bdev_timeout= 00:18:33.211 21:40:53 -- common/autotest_common.sh@899 -- # local i 00:18:33.211 21:40:53 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:18:33.211 21:40:53 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:18:33.211 21:40:53 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:33.470 21:40:53 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:18:33.470 [ 00:18:33.470 { 00:18:33.470 "name": "BaseBdev3", 00:18:33.470 "aliases": [ 00:18:33.470 "cca1a510-f73f-42b1-9819-641ad7be2d66" 00:18:33.470 ], 00:18:33.470 "product_name": "Malloc disk", 00:18:33.470 "block_size": 512, 00:18:33.470 "num_blocks": 65536, 00:18:33.470 "uuid": "cca1a510-f73f-42b1-9819-641ad7be2d66", 00:18:33.470 "assigned_rate_limits": { 00:18:33.470 "rw_ios_per_sec": 0, 00:18:33.470 "rw_mbytes_per_sec": 0, 00:18:33.470 "r_mbytes_per_sec": 0, 00:18:33.470 "w_mbytes_per_sec": 0 00:18:33.470 }, 00:18:33.470 "claimed": true, 00:18:33.470 "claim_type": "exclusive_write", 00:18:33.470 "zoned": false, 00:18:33.470 "supported_io_types": { 00:18:33.470 "read": true, 00:18:33.470 "write": true, 00:18:33.470 "unmap": true, 00:18:33.470 "write_zeroes": true, 00:18:33.470 "flush": true, 00:18:33.470 "reset": true, 00:18:33.470 "compare": false, 00:18:33.470 "compare_and_write": false, 00:18:33.470 "abort": true, 00:18:33.470 "nvme_admin": false, 00:18:33.470 "nvme_io": false 00:18:33.470 }, 00:18:33.470 "memory_domains": [ 00:18:33.470 { 00:18:33.470 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:33.470 "dma_device_type": 2 00:18:33.470 } 00:18:33.470 ], 00:18:33.470 "driver_specific": {} 00:18:33.470 } 00:18:33.470 ] 00:18:33.470 21:40:53 -- common/autotest_common.sh@905 -- # return 0 00:18:33.470 21:40:53 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:18:33.470 21:40:53 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:18:33.470 21:40:53 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:18:33.470 21:40:53 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:18:33.470 21:40:53 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:18:33.470 21:40:53 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:18:33.470 21:40:53 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:18:33.470 21:40:53 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:18:33.470 21:40:53 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:33.470 21:40:53 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:33.470 21:40:53 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:33.470 21:40:53 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:33.470 21:40:53 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:33.470 21:40:53 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:33.728 21:40:54 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:33.728 "name": "Existed_Raid", 00:18:33.728 "uuid": "2ffb73e6-c423-4e9d-a78c-2c77ed9574a3", 00:18:33.728 "strip_size_kb": 0, 00:18:33.728 "state": "configuring", 00:18:33.728 "raid_level": "raid1", 00:18:33.728 "superblock": true, 00:18:33.728 "num_base_bdevs": 4, 00:18:33.728 "num_base_bdevs_discovered": 3, 00:18:33.728 "num_base_bdevs_operational": 4, 00:18:33.728 "base_bdevs_list": [ 00:18:33.728 { 
00:18:33.728 "name": "BaseBdev1", 00:18:33.728 "uuid": "2c2981db-42e7-4228-882c-65e75dd5d1cc", 00:18:33.728 "is_configured": true, 00:18:33.728 "data_offset": 2048, 00:18:33.728 "data_size": 63488 00:18:33.728 }, 00:18:33.728 { 00:18:33.728 "name": "BaseBdev2", 00:18:33.728 "uuid": "9e186cdf-34b6-434c-a733-719cf62eca01", 00:18:33.728 "is_configured": true, 00:18:33.728 "data_offset": 2048, 00:18:33.728 "data_size": 63488 00:18:33.728 }, 00:18:33.728 { 00:18:33.728 "name": "BaseBdev3", 00:18:33.728 "uuid": "cca1a510-f73f-42b1-9819-641ad7be2d66", 00:18:33.728 "is_configured": true, 00:18:33.728 "data_offset": 2048, 00:18:33.728 "data_size": 63488 00:18:33.728 }, 00:18:33.728 { 00:18:33.728 "name": "BaseBdev4", 00:18:33.728 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:33.728 "is_configured": false, 00:18:33.728 "data_offset": 0, 00:18:33.728 "data_size": 0 00:18:33.728 } 00:18:33.728 ] 00:18:33.728 }' 00:18:33.728 21:40:54 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:33.728 21:40:54 -- common/autotest_common.sh@10 -- # set +x 00:18:34.293 21:40:54 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:18:34.551 [2024-12-06 21:40:54.808015] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:18:34.551 BaseBdev4 00:18:34.551 [2024-12-06 21:40:54.808514] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x516000007580 00:18:34.551 [2024-12-06 21:40:54.808543] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:18:34.551 [2024-12-06 21:40:54.808687] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000005860 00:18:34.551 [2024-12-06 21:40:54.809058] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x516000007580 00:18:34.551 [2024-12-06 21:40:54.809079] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x516000007580 00:18:34.551 [2024-12-06 21:40:54.809227] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:34.551 21:40:54 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev4 00:18:34.551 21:40:54 -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev4 00:18:34.551 21:40:54 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:18:34.551 21:40:54 -- common/autotest_common.sh@899 -- # local i 00:18:34.551 21:40:54 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:18:34.551 21:40:54 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:18:34.551 21:40:54 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:34.819 21:40:55 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:18:34.819 [ 00:18:34.819 { 00:18:34.819 "name": "BaseBdev4", 00:18:34.819 "aliases": [ 00:18:34.819 "08293a55-5cf9-4c1f-921a-090589426725" 00:18:34.819 ], 00:18:34.819 "product_name": "Malloc disk", 00:18:34.819 "block_size": 512, 00:18:34.819 "num_blocks": 65536, 00:18:34.819 "uuid": "08293a55-5cf9-4c1f-921a-090589426725", 00:18:34.819 "assigned_rate_limits": { 00:18:34.819 "rw_ios_per_sec": 0, 00:18:34.819 "rw_mbytes_per_sec": 0, 00:18:34.819 "r_mbytes_per_sec": 0, 00:18:34.819 "w_mbytes_per_sec": 0 00:18:34.819 }, 00:18:34.819 "claimed": true, 00:18:34.819 "claim_type": "exclusive_write", 00:18:34.819 "zoned": false, 
00:18:34.819 "supported_io_types": { 00:18:34.819 "read": true, 00:18:34.819 "write": true, 00:18:34.819 "unmap": true, 00:18:34.819 "write_zeroes": true, 00:18:34.819 "flush": true, 00:18:34.819 "reset": true, 00:18:34.819 "compare": false, 00:18:34.819 "compare_and_write": false, 00:18:34.819 "abort": true, 00:18:34.819 "nvme_admin": false, 00:18:34.819 "nvme_io": false 00:18:34.819 }, 00:18:34.819 "memory_domains": [ 00:18:34.819 { 00:18:34.819 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:34.819 "dma_device_type": 2 00:18:34.819 } 00:18:34.819 ], 00:18:34.819 "driver_specific": {} 00:18:34.819 } 00:18:34.819 ] 00:18:34.819 21:40:55 -- common/autotest_common.sh@905 -- # return 0 00:18:34.819 21:40:55 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:18:34.819 21:40:55 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:18:34.819 21:40:55 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:18:34.819 21:40:55 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:18:34.819 21:40:55 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:18:34.819 21:40:55 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:18:34.819 21:40:55 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:18:34.819 21:40:55 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:18:34.819 21:40:55 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:34.819 21:40:55 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:34.819 21:40:55 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:34.819 21:40:55 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:34.819 21:40:55 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:34.819 21:40:55 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:35.081 21:40:55 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:35.081 "name": "Existed_Raid", 00:18:35.081 "uuid": "2ffb73e6-c423-4e9d-a78c-2c77ed9574a3", 00:18:35.081 "strip_size_kb": 0, 00:18:35.081 "state": "online", 00:18:35.081 "raid_level": "raid1", 00:18:35.081 "superblock": true, 00:18:35.081 "num_base_bdevs": 4, 00:18:35.081 "num_base_bdevs_discovered": 4, 00:18:35.081 "num_base_bdevs_operational": 4, 00:18:35.081 "base_bdevs_list": [ 00:18:35.081 { 00:18:35.081 "name": "BaseBdev1", 00:18:35.081 "uuid": "2c2981db-42e7-4228-882c-65e75dd5d1cc", 00:18:35.081 "is_configured": true, 00:18:35.081 "data_offset": 2048, 00:18:35.081 "data_size": 63488 00:18:35.081 }, 00:18:35.081 { 00:18:35.081 "name": "BaseBdev2", 00:18:35.081 "uuid": "9e186cdf-34b6-434c-a733-719cf62eca01", 00:18:35.081 "is_configured": true, 00:18:35.081 "data_offset": 2048, 00:18:35.081 "data_size": 63488 00:18:35.081 }, 00:18:35.081 { 00:18:35.081 "name": "BaseBdev3", 00:18:35.081 "uuid": "cca1a510-f73f-42b1-9819-641ad7be2d66", 00:18:35.081 "is_configured": true, 00:18:35.081 "data_offset": 2048, 00:18:35.081 "data_size": 63488 00:18:35.081 }, 00:18:35.081 { 00:18:35.081 "name": "BaseBdev4", 00:18:35.081 "uuid": "08293a55-5cf9-4c1f-921a-090589426725", 00:18:35.081 "is_configured": true, 00:18:35.081 "data_offset": 2048, 00:18:35.081 "data_size": 63488 00:18:35.081 } 00:18:35.081 ] 00:18:35.081 }' 00:18:35.081 21:40:55 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:35.081 21:40:55 -- common/autotest_common.sh@10 -- # set +x 00:18:35.339 21:40:55 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete 
BaseBdev1 00:18:35.597 [2024-12-06 21:40:56.016529] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:18:35.856 21:40:56 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:18:35.856 21:40:56 -- bdev/bdev_raid.sh@264 -- # has_redundancy raid1 00:18:35.856 21:40:56 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:18:35.856 21:40:56 -- bdev/bdev_raid.sh@196 -- # return 0 00:18:35.856 21:40:56 -- bdev/bdev_raid.sh@267 -- # expected_state=online 00:18:35.856 21:40:56 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:18:35.856 21:40:56 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:18:35.856 21:40:56 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:18:35.856 21:40:56 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:18:35.856 21:40:56 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:18:35.856 21:40:56 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:18:35.856 21:40:56 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:35.856 21:40:56 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:35.856 21:40:56 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:35.856 21:40:56 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:35.856 21:40:56 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:35.856 21:40:56 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:36.115 21:40:56 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:36.115 "name": "Existed_Raid", 00:18:36.115 "uuid": "2ffb73e6-c423-4e9d-a78c-2c77ed9574a3", 00:18:36.115 "strip_size_kb": 0, 00:18:36.115 "state": "online", 00:18:36.115 "raid_level": "raid1", 00:18:36.115 "superblock": true, 00:18:36.115 "num_base_bdevs": 4, 00:18:36.115 "num_base_bdevs_discovered": 3, 00:18:36.115 "num_base_bdevs_operational": 3, 00:18:36.115 "base_bdevs_list": [ 00:18:36.115 { 00:18:36.115 "name": null, 00:18:36.115 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:36.115 "is_configured": false, 00:18:36.115 "data_offset": 2048, 00:18:36.115 "data_size": 63488 00:18:36.115 }, 00:18:36.115 { 00:18:36.115 "name": "BaseBdev2", 00:18:36.115 "uuid": "9e186cdf-34b6-434c-a733-719cf62eca01", 00:18:36.115 "is_configured": true, 00:18:36.115 "data_offset": 2048, 00:18:36.115 "data_size": 63488 00:18:36.115 }, 00:18:36.115 { 00:18:36.115 "name": "BaseBdev3", 00:18:36.115 "uuid": "cca1a510-f73f-42b1-9819-641ad7be2d66", 00:18:36.115 "is_configured": true, 00:18:36.115 "data_offset": 2048, 00:18:36.115 "data_size": 63488 00:18:36.115 }, 00:18:36.115 { 00:18:36.115 "name": "BaseBdev4", 00:18:36.115 "uuid": "08293a55-5cf9-4c1f-921a-090589426725", 00:18:36.115 "is_configured": true, 00:18:36.115 "data_offset": 2048, 00:18:36.115 "data_size": 63488 00:18:36.115 } 00:18:36.115 ] 00:18:36.115 }' 00:18:36.115 21:40:56 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:36.115 21:40:56 -- common/autotest_common.sh@10 -- # set +x 00:18:36.374 21:40:56 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:18:36.374 21:40:56 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:18:36.374 21:40:56 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:18:36.374 21:40:56 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:36.633 21:40:56 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:18:36.633 21:40:56 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid 
']' 00:18:36.633 21:40:56 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:18:36.633 [2024-12-06 21:40:57.079608] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:18:36.891 21:40:57 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:18:36.892 21:40:57 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:18:36.892 21:40:57 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:18:36.892 21:40:57 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:36.892 21:40:57 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:18:36.892 21:40:57 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:18:36.892 21:40:57 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:18:37.150 [2024-12-06 21:40:57.588947] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:18:37.409 21:40:57 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:18:37.409 21:40:57 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:18:37.409 21:40:57 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:37.409 21:40:57 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:18:37.409 21:40:57 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:18:37.409 21:40:57 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:18:37.409 21:40:57 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:18:37.668 [2024-12-06 21:40:58.077553] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:18:37.668 [2024-12-06 21:40:58.077804] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:37.668 [2024-12-06 21:40:58.077980] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:37.668 [2024-12-06 21:40:58.150966] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:37.668 [2024-12-06 21:40:58.151000] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000007580 name Existed_Raid, state offline 00:18:37.927 21:40:58 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:18:37.927 21:40:58 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:18:37.927 21:40:58 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:37.927 21:40:58 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:18:37.927 21:40:58 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:18:37.927 21:40:58 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:18:37.927 21:40:58 -- bdev/bdev_raid.sh@287 -- # killprocess 76975 00:18:37.927 21:40:58 -- common/autotest_common.sh@936 -- # '[' -z 76975 ']' 00:18:37.927 21:40:58 -- common/autotest_common.sh@940 -- # kill -0 76975 00:18:37.927 21:40:58 -- common/autotest_common.sh@941 -- # uname 00:18:38.186 21:40:58 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:38.186 21:40:58 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 76975 00:18:38.186 killing process with pid 76975 00:18:38.186 21:40:58 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:18:38.186 21:40:58 -- common/autotest_common.sh@946 -- # '[' reactor_0 = 
sudo ']' 00:18:38.186 21:40:58 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 76975' 00:18:38.186 21:40:58 -- common/autotest_common.sh@955 -- # kill 76975 00:18:38.186 [2024-12-06 21:40:58.447998] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:38.186 21:40:58 -- common/autotest_common.sh@960 -- # wait 76975 00:18:38.186 [2024-12-06 21:40:58.448094] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:39.123 ************************************ 00:18:39.123 END TEST raid_state_function_test_sb 00:18:39.123 ************************************ 00:18:39.123 21:40:59 -- bdev/bdev_raid.sh@289 -- # return 0 00:18:39.123 00:18:39.123 real 0m12.540s 00:18:39.123 user 0m21.042s 00:18:39.123 sys 0m1.801s 00:18:39.123 21:40:59 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:18:39.123 21:40:59 -- common/autotest_common.sh@10 -- # set +x 00:18:39.123 21:40:59 -- bdev/bdev_raid.sh@729 -- # run_test raid_superblock_test raid_superblock_test raid1 4 00:18:39.123 21:40:59 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:18:39.123 21:40:59 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:18:39.123 21:40:59 -- common/autotest_common.sh@10 -- # set +x 00:18:39.123 ************************************ 00:18:39.123 START TEST raid_superblock_test 00:18:39.123 ************************************ 00:18:39.123 21:40:59 -- common/autotest_common.sh@1114 -- # raid_superblock_test raid1 4 00:18:39.123 21:40:59 -- bdev/bdev_raid.sh@338 -- # local raid_level=raid1 00:18:39.123 21:40:59 -- bdev/bdev_raid.sh@339 -- # local num_base_bdevs=4 00:18:39.123 21:40:59 -- bdev/bdev_raid.sh@340 -- # base_bdevs_malloc=() 00:18:39.123 21:40:59 -- bdev/bdev_raid.sh@340 -- # local base_bdevs_malloc 00:18:39.123 21:40:59 -- bdev/bdev_raid.sh@341 -- # base_bdevs_pt=() 00:18:39.123 21:40:59 -- bdev/bdev_raid.sh@341 -- # local base_bdevs_pt 00:18:39.123 21:40:59 -- bdev/bdev_raid.sh@342 -- # base_bdevs_pt_uuid=() 00:18:39.123 21:40:59 -- bdev/bdev_raid.sh@342 -- # local base_bdevs_pt_uuid 00:18:39.123 21:40:59 -- bdev/bdev_raid.sh@343 -- # local raid_bdev_name=raid_bdev1 00:18:39.123 21:40:59 -- bdev/bdev_raid.sh@344 -- # local strip_size 00:18:39.123 21:40:59 -- bdev/bdev_raid.sh@345 -- # local strip_size_create_arg 00:18:39.123 21:40:59 -- bdev/bdev_raid.sh@346 -- # local raid_bdev_uuid 00:18:39.123 21:40:59 -- bdev/bdev_raid.sh@347 -- # local raid_bdev 00:18:39.123 21:40:59 -- bdev/bdev_raid.sh@349 -- # '[' raid1 '!=' raid1 ']' 00:18:39.123 21:40:59 -- bdev/bdev_raid.sh@353 -- # strip_size=0 00:18:39.123 21:40:59 -- bdev/bdev_raid.sh@357 -- # raid_pid=77377 00:18:39.123 21:40:59 -- bdev/bdev_raid.sh@356 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:18:39.123 21:40:59 -- bdev/bdev_raid.sh@358 -- # waitforlisten 77377 /var/tmp/spdk-raid.sock 00:18:39.123 21:40:59 -- common/autotest_common.sh@829 -- # '[' -z 77377 ']' 00:18:39.123 21:40:59 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:18:39.123 21:40:59 -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:39.123 21:40:59 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:18:39.123 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 
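(Both state-function tests above finish by hot-removing base bdevs one at a time and re-reading the array state. Condensed to bare RPC calls, that redundancy check looks roughly like the sketch below; it assumes the same RPC socket and an online four-way raid1 named Existed_Raid, and the jq format string is our own summary, not the test's exact filter:)

    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    $rpc bdev_malloc_delete BaseBdev1   # drop one mirror leg out from under the raid
    $rpc bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid") | "\(.state) \(.num_base_bdevs_discovered)/\(.num_base_bdevs)"'
    # raid1 has redundancy (has_redundancy returns 0), so the expected
    # answer here is "online 3/4"; only once the last base bdev is deleted
    # does the array deconfigure to "offline", as the traces above show.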
00:18:39.123 21:40:59 -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:39.123 21:40:59 -- common/autotest_common.sh@10 -- # set +x 00:18:39.123 [2024-12-06 21:40:59.618598] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:18:39.123 [2024-12-06 21:40:59.618973] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77377 ] 00:18:39.383 [2024-12-06 21:40:59.790931] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:39.642 [2024-12-06 21:40:59.959282] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:39.642 [2024-12-06 21:41:00.124332] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:40.209 21:41:00 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:40.209 21:41:00 -- common/autotest_common.sh@862 -- # return 0 00:18:40.209 21:41:00 -- bdev/bdev_raid.sh@361 -- # (( i = 1 )) 00:18:40.209 21:41:00 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:18:40.210 21:41:00 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc1 00:18:40.210 21:41:00 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt1 00:18:40.210 21:41:00 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:18:40.210 21:41:00 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:18:40.210 21:41:00 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:18:40.210 21:41:00 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:18:40.210 21:41:00 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:18:40.468 malloc1 00:18:40.468 21:41:00 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:18:40.728 [2024-12-06 21:41:01.010155] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:18:40.728 [2024-12-06 21:41:01.010244] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:40.728 [2024-12-06 21:41:01.010283] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000006980 00:18:40.728 [2024-12-06 21:41:01.010298] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:40.728 [2024-12-06 21:41:01.012828] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:40.728 [2024-12-06 21:41:01.013030] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:18:40.728 pt1 00:18:40.728 21:41:01 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:18:40.728 21:41:01 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:18:40.728 21:41:01 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc2 00:18:40.728 21:41:01 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt2 00:18:40.728 21:41:01 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:18:40.728 21:41:01 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:18:40.728 21:41:01 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:18:40.728 21:41:01 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:18:40.728 21:41:01 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:18:40.987 malloc2 00:18:40.987 21:41:01 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:18:40.987 [2024-12-06 21:41:01.454422] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:18:40.987 [2024-12-06 21:41:01.454551] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:40.987 [2024-12-06 21:41:01.454586] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000007580 00:18:40.987 [2024-12-06 21:41:01.454617] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:40.987 [2024-12-06 21:41:01.457155] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:40.987 [2024-12-06 21:41:01.457364] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:18:40.987 pt2 00:18:40.987 21:41:01 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:18:40.987 21:41:01 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:18:40.987 21:41:01 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc3 00:18:40.987 21:41:01 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt3 00:18:40.987 21:41:01 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:18:40.987 21:41:01 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:18:40.987 21:41:01 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:18:40.987 21:41:01 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:18:40.987 21:41:01 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:18:41.247 malloc3 00:18:41.247 21:41:01 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:18:41.506 [2024-12-06 21:41:01.902397] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:18:41.506 [2024-12-06 21:41:01.902507] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:41.506 [2024-12-06 21:41:01.902544] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000008180 00:18:41.506 [2024-12-06 21:41:01.902575] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:41.506 [2024-12-06 21:41:01.905225] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:41.506 [2024-12-06 21:41:01.905470] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:18:41.506 pt3 00:18:41.506 21:41:01 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:18:41.506 21:41:01 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:18:41.506 21:41:01 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc4 00:18:41.506 21:41:01 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt4 00:18:41.506 21:41:01 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:18:41.506 21:41:01 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:18:41.506 21:41:01 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:18:41.506 21:41:01 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:18:41.506 21:41:01 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc4 00:18:41.772 malloc4 00:18:41.773 21:41:02 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:18:42.034 [2024-12-06 21:41:02.422344] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:18:42.034 [2024-12-06 21:41:02.422643] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:42.034 [2024-12-06 21:41:02.422696] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000008d80 00:18:42.034 [2024-12-06 21:41:02.422715] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:42.034 [2024-12-06 21:41:02.425324] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:42.034 pt4 00:18:42.034 [2024-12-06 21:41:02.425533] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:18:42.034 21:41:02 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:18:42.034 21:41:02 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:18:42.034 21:41:02 -- bdev/bdev_raid.sh@375 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'pt1 pt2 pt3 pt4' -n raid_bdev1 -s 00:18:42.292 [2024-12-06 21:41:02.630476] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:18:42.292 [2024-12-06 21:41:02.632616] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:42.292 [2024-12-06 21:41:02.632921] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:18:42.292 [2024-12-06 21:41:02.633028] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:18:42.292 [2024-12-06 21:41:02.633285] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x516000009380 00:18:42.292 [2024-12-06 21:41:02.633302] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:18:42.292 [2024-12-06 21:41:02.633493] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000005790 00:18:42.292 [2024-12-06 21:41:02.633938] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x516000009380 00:18:42.292 [2024-12-06 21:41:02.633965] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x516000009380 00:18:42.292 [2024-12-06 21:41:02.634159] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:42.292 21:41:02 -- bdev/bdev_raid.sh@376 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:18:42.292 21:41:02 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:18:42.292 21:41:02 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:18:42.292 21:41:02 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:18:42.292 21:41:02 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:18:42.292 21:41:02 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:18:42.292 21:41:02 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:42.292 21:41:02 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:42.292 21:41:02 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:42.292 21:41:02 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:42.292 21:41:02 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 
00:18:42.292 21:41:02 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:42.550 21:41:02 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:42.550 "name": "raid_bdev1", 00:18:42.550 "uuid": "1c95bf3e-6ed2-4ece-8884-8ac49bd7ab3e", 00:18:42.550 "strip_size_kb": 0, 00:18:42.550 "state": "online", 00:18:42.550 "raid_level": "raid1", 00:18:42.550 "superblock": true, 00:18:42.550 "num_base_bdevs": 4, 00:18:42.550 "num_base_bdevs_discovered": 4, 00:18:42.550 "num_base_bdevs_operational": 4, 00:18:42.550 "base_bdevs_list": [ 00:18:42.550 { 00:18:42.550 "name": "pt1", 00:18:42.550 "uuid": "dc0b31ae-f3b0-55fb-8e02-728f2f48796e", 00:18:42.550 "is_configured": true, 00:18:42.550 "data_offset": 2048, 00:18:42.550 "data_size": 63488 00:18:42.550 }, 00:18:42.550 { 00:18:42.550 "name": "pt2", 00:18:42.550 "uuid": "3af9e846-e71a-57d0-9f35-bcd4566a9da2", 00:18:42.550 "is_configured": true, 00:18:42.550 "data_offset": 2048, 00:18:42.550 "data_size": 63488 00:18:42.550 }, 00:18:42.551 { 00:18:42.551 "name": "pt3", 00:18:42.551 "uuid": "0666e232-eb62-5e50-9b1e-6262f5d6d95c", 00:18:42.551 "is_configured": true, 00:18:42.551 "data_offset": 2048, 00:18:42.551 "data_size": 63488 00:18:42.551 }, 00:18:42.551 { 00:18:42.551 "name": "pt4", 00:18:42.551 "uuid": "ebb1e7fe-968a-5b7b-b1f6-1cf99c03cfed", 00:18:42.551 "is_configured": true, 00:18:42.551 "data_offset": 2048, 00:18:42.551 "data_size": 63488 00:18:42.551 } 00:18:42.551 ] 00:18:42.551 }' 00:18:42.551 21:41:02 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:42.551 21:41:02 -- common/autotest_common.sh@10 -- # set +x 00:18:42.810 21:41:03 -- bdev/bdev_raid.sh@379 -- # jq -r '.[] | .uuid' 00:18:42.810 21:41:03 -- bdev/bdev_raid.sh@379 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:18:43.074 [2024-12-06 21:41:03.326906] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:43.074 21:41:03 -- bdev/bdev_raid.sh@379 -- # raid_bdev_uuid=1c95bf3e-6ed2-4ece-8884-8ac49bd7ab3e 00:18:43.074 21:41:03 -- bdev/bdev_raid.sh@380 -- # '[' -z 1c95bf3e-6ed2-4ece-8884-8ac49bd7ab3e ']' 00:18:43.074 21:41:03 -- bdev/bdev_raid.sh@385 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:18:43.333 [2024-12-06 21:41:03.578709] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:43.333 [2024-12-06 21:41:03.578741] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:43.333 [2024-12-06 21:41:03.578812] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:43.333 [2024-12-06 21:41:03.578902] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:43.333 [2024-12-06 21:41:03.578915] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000009380 name raid_bdev1, state offline 00:18:43.333 21:41:03 -- bdev/bdev_raid.sh@386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:43.333 21:41:03 -- bdev/bdev_raid.sh@386 -- # jq -r '.[]' 00:18:43.592 21:41:03 -- bdev/bdev_raid.sh@386 -- # raid_bdev= 00:18:43.592 21:41:03 -- bdev/bdev_raid.sh@387 -- # '[' -n '' ']' 00:18:43.592 21:41:03 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:18:43.592 21:41:03 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 
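Condensed, the build-and-verify pass that the trace above walks through is four malloc+passthru pairs, one bdev_raid_create, and a jq check on the resulting state; a sketch assuming the daemon from the startup sketch is still listening (all flags as captured in the trace):

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
sock=/var/tmp/spdk-raid.sock
for i in 1 2 3 4; do
    # 32 MiB malloc base with 512-byte blocks, wrapped in a passthru bdev
    # carrying a fixed UUID so it can be recognized across delete/re-create.
    "$rpc" -s "$sock" bdev_malloc_create 32 512 -b "malloc$i"
    "$rpc" -s "$sock" bdev_passthru_create -b "malloc$i" -p "pt$i" \
        -u "00000000-0000-0000-0000-00000000000$i"
done
# -s requests an on-disk superblock, so membership survives member deletion.
"$rpc" -s "$sock" bdev_raid_create -r raid1 -b 'pt1 pt2 pt3 pt4' -n raid_bdev1 -s
# Expect "state": "online" with num_base_bdevs_discovered == 4.
"$rpc" -s "$sock" bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1")'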
00:18:43.592 21:41:04 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:18:43.592 21:41:04 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:18:43.850 21:41:04 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:18:43.850 21:41:04 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:18:44.168 21:41:04 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:18:44.168 21:41:04 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt4 00:18:44.443 21:41:04 -- bdev/bdev_raid.sh@395 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:18:44.443 21:41:04 -- bdev/bdev_raid.sh@395 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:18:44.443 21:41:04 -- bdev/bdev_raid.sh@395 -- # '[' false == true ']' 00:18:44.443 21:41:04 -- bdev/bdev_raid.sh@401 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:18:44.443 21:41:04 -- common/autotest_common.sh@650 -- # local es=0 00:18:44.443 21:41:04 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:18:44.443 21:41:04 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:44.443 21:41:04 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:44.443 21:41:04 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:44.443 21:41:04 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:44.443 21:41:04 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:44.443 21:41:04 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:44.443 21:41:04 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:44.443 21:41:04 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:18:44.443 21:41:04 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:18:44.702 [2024-12-06 21:41:05.099399] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:18:44.702 [2024-12-06 21:41:05.101534] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:18:44.702 [2024-12-06 21:41:05.101598] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:18:44.702 [2024-12-06 21:41:05.101645] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:18:44.702 [2024-12-06 21:41:05.101720] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc1 00:18:44.702 [2024-12-06 21:41:05.101795] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc2 00:18:44.702 [2024-12-06 21:41:05.101826] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc3 00:18:44.702 [2024-12-06 21:41:05.101851] 
bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc4 00:18:44.702 [2024-12-06 21:41:05.101872] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:44.702 [2024-12-06 21:41:05.101884] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000009980 name raid_bdev1, state configuring 00:18:44.702 request: 00:18:44.702 { 00:18:44.702 "name": "raid_bdev1", 00:18:44.702 "raid_level": "raid1", 00:18:44.702 "base_bdevs": [ 00:18:44.702 "malloc1", 00:18:44.702 "malloc2", 00:18:44.702 "malloc3", 00:18:44.702 "malloc4" 00:18:44.702 ], 00:18:44.702 "superblock": false, 00:18:44.702 "method": "bdev_raid_create", 00:18:44.702 "req_id": 1 00:18:44.702 } 00:18:44.702 Got JSON-RPC error response 00:18:44.702 response: 00:18:44.702 { 00:18:44.702 "code": -17, 00:18:44.702 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:18:44.702 } 00:18:44.702 21:41:05 -- common/autotest_common.sh@653 -- # es=1 00:18:44.702 21:41:05 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:44.702 21:41:05 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:44.702 21:41:05 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:44.702 21:41:05 -- bdev/bdev_raid.sh@403 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:44.702 21:41:05 -- bdev/bdev_raid.sh@403 -- # jq -r '.[]' 00:18:44.959 21:41:05 -- bdev/bdev_raid.sh@403 -- # raid_bdev= 00:18:44.959 21:41:05 -- bdev/bdev_raid.sh@404 -- # '[' -n '' ']' 00:18:44.959 21:41:05 -- bdev/bdev_raid.sh@409 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:18:45.218 [2024-12-06 21:41:05.571421] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:18:45.218 [2024-12-06 21:41:05.571541] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:45.218 [2024-12-06 21:41:05.571576] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000009f80 00:18:45.218 [2024-12-06 21:41:05.571591] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:45.218 [2024-12-06 21:41:05.574055] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:45.218 [2024-12-06 21:41:05.574095] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:18:45.218 [2024-12-06 21:41:05.574208] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1 00:18:45.218 [2024-12-06 21:41:05.574263] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:18:45.218 pt1 00:18:45.218 21:41:05 -- bdev/bdev_raid.sh@412 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 4 00:18:45.218 21:41:05 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:18:45.218 21:41:05 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:18:45.218 21:41:05 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:18:45.218 21:41:05 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:18:45.218 21:41:05 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:18:45.218 21:41:05 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:45.218 21:41:05 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:45.218 21:41:05 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:45.218 21:41:05 -- 
bdev/bdev_raid.sh@125 -- # local tmp 00:18:45.218 21:41:05 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:45.218 21:41:05 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:45.477 21:41:05 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:45.477 "name": "raid_bdev1", 00:18:45.477 "uuid": "1c95bf3e-6ed2-4ece-8884-8ac49bd7ab3e", 00:18:45.477 "strip_size_kb": 0, 00:18:45.477 "state": "configuring", 00:18:45.477 "raid_level": "raid1", 00:18:45.477 "superblock": true, 00:18:45.477 "num_base_bdevs": 4, 00:18:45.477 "num_base_bdevs_discovered": 1, 00:18:45.477 "num_base_bdevs_operational": 4, 00:18:45.477 "base_bdevs_list": [ 00:18:45.477 { 00:18:45.477 "name": "pt1", 00:18:45.477 "uuid": "dc0b31ae-f3b0-55fb-8e02-728f2f48796e", 00:18:45.477 "is_configured": true, 00:18:45.477 "data_offset": 2048, 00:18:45.477 "data_size": 63488 00:18:45.477 }, 00:18:45.477 { 00:18:45.477 "name": null, 00:18:45.477 "uuid": "3af9e846-e71a-57d0-9f35-bcd4566a9da2", 00:18:45.477 "is_configured": false, 00:18:45.477 "data_offset": 2048, 00:18:45.477 "data_size": 63488 00:18:45.477 }, 00:18:45.477 { 00:18:45.477 "name": null, 00:18:45.477 "uuid": "0666e232-eb62-5e50-9b1e-6262f5d6d95c", 00:18:45.477 "is_configured": false, 00:18:45.477 "data_offset": 2048, 00:18:45.477 "data_size": 63488 00:18:45.477 }, 00:18:45.477 { 00:18:45.477 "name": null, 00:18:45.477 "uuid": "ebb1e7fe-968a-5b7b-b1f6-1cf99c03cfed", 00:18:45.477 "is_configured": false, 00:18:45.477 "data_offset": 2048, 00:18:45.477 "data_size": 63488 00:18:45.477 } 00:18:45.477 ] 00:18:45.477 }' 00:18:45.477 21:41:05 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:45.477 21:41:05 -- common/autotest_common.sh@10 -- # set +x 00:18:45.735 21:41:06 -- bdev/bdev_raid.sh@414 -- # '[' 4 -gt 2 ']' 00:18:45.735 21:41:06 -- bdev/bdev_raid.sh@416 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:18:45.995 [2024-12-06 21:41:06.343641] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:18:45.995 [2024-12-06 21:41:06.343746] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:45.995 [2024-12-06 21:41:06.343783] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x51600000a880 00:18:45.995 [2024-12-06 21:41:06.343812] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:45.995 [2024-12-06 21:41:06.344267] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:45.995 [2024-12-06 21:41:06.344292] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:18:45.995 [2024-12-06 21:41:06.344395] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:18:45.995 [2024-12-06 21:41:06.344424] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:45.995 pt2 00:18:45.995 21:41:06 -- bdev/bdev_raid.sh@417 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:18:46.254 [2024-12-06 21:41:06.559778] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:18:46.254 21:41:06 -- bdev/bdev_raid.sh@418 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 4 00:18:46.254 21:41:06 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:18:46.254 21:41:06 -- bdev/bdev_raid.sh@118 -- # 
local expected_state=configuring 00:18:46.254 21:41:06 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:18:46.254 21:41:06 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:18:46.254 21:41:06 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:18:46.254 21:41:06 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:46.254 21:41:06 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:46.254 21:41:06 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:46.254 21:41:06 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:46.254 21:41:06 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:46.254 21:41:06 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:46.513 21:41:06 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:46.513 "name": "raid_bdev1", 00:18:46.513 "uuid": "1c95bf3e-6ed2-4ece-8884-8ac49bd7ab3e", 00:18:46.513 "strip_size_kb": 0, 00:18:46.513 "state": "configuring", 00:18:46.513 "raid_level": "raid1", 00:18:46.513 "superblock": true, 00:18:46.513 "num_base_bdevs": 4, 00:18:46.513 "num_base_bdevs_discovered": 1, 00:18:46.513 "num_base_bdevs_operational": 4, 00:18:46.513 "base_bdevs_list": [ 00:18:46.513 { 00:18:46.514 "name": "pt1", 00:18:46.514 "uuid": "dc0b31ae-f3b0-55fb-8e02-728f2f48796e", 00:18:46.514 "is_configured": true, 00:18:46.514 "data_offset": 2048, 00:18:46.514 "data_size": 63488 00:18:46.514 }, 00:18:46.514 { 00:18:46.514 "name": null, 00:18:46.514 "uuid": "3af9e846-e71a-57d0-9f35-bcd4566a9da2", 00:18:46.514 "is_configured": false, 00:18:46.514 "data_offset": 2048, 00:18:46.514 "data_size": 63488 00:18:46.514 }, 00:18:46.514 { 00:18:46.514 "name": null, 00:18:46.514 "uuid": "0666e232-eb62-5e50-9b1e-6262f5d6d95c", 00:18:46.514 "is_configured": false, 00:18:46.514 "data_offset": 2048, 00:18:46.514 "data_size": 63488 00:18:46.514 }, 00:18:46.514 { 00:18:46.514 "name": null, 00:18:46.514 "uuid": "ebb1e7fe-968a-5b7b-b1f6-1cf99c03cfed", 00:18:46.514 "is_configured": false, 00:18:46.514 "data_offset": 2048, 00:18:46.514 "data_size": 63488 00:18:46.514 } 00:18:46.514 ] 00:18:46.514 }' 00:18:46.514 21:41:06 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:46.514 21:41:06 -- common/autotest_common.sh@10 -- # set +x 00:18:46.773 21:41:07 -- bdev/bdev_raid.sh@422 -- # (( i = 1 )) 00:18:46.773 21:41:07 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:18:46.773 21:41:07 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:18:47.033 [2024-12-06 21:41:07.356009] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:18:47.033 [2024-12-06 21:41:07.356096] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:47.033 [2024-12-06 21:41:07.356125] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x51600000ab80 00:18:47.033 [2024-12-06 21:41:07.356140] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:47.033 [2024-12-06 21:41:07.356676] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:47.033 [2024-12-06 21:41:07.356707] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:18:47.033 [2024-12-06 21:41:07.356805] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:18:47.033 [2024-12-06 
21:41:07.356847] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:47.033 pt2 00:18:47.033 21:41:07 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:18:47.033 21:41:07 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:18:47.033 21:41:07 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:18:47.292 [2024-12-06 21:41:07.604150] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:18:47.292 [2024-12-06 21:41:07.604257] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:47.292 [2024-12-06 21:41:07.604291] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x51600000ae80 00:18:47.292 [2024-12-06 21:41:07.604307] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:47.292 [2024-12-06 21:41:07.604797] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:47.292 [2024-12-06 21:41:07.605015] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:18:47.292 [2024-12-06 21:41:07.605129] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3 00:18:47.292 [2024-12-06 21:41:07.605164] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:18:47.292 pt3 00:18:47.292 21:41:07 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:18:47.292 21:41:07 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:18:47.292 21:41:07 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:18:47.552 [2024-12-06 21:41:07.824140] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:18:47.552 [2024-12-06 21:41:07.824226] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:47.552 [2024-12-06 21:41:07.824284] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x51600000b180 00:18:47.552 [2024-12-06 21:41:07.824302] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:47.552 [2024-12-06 21:41:07.824871] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:47.552 [2024-12-06 21:41:07.825050] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:18:47.552 [2024-12-06 21:41:07.825178] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt4 00:18:47.552 [2024-12-06 21:41:07.825221] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:18:47.552 [2024-12-06 21:41:07.825412] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x51600000a580 00:18:47.552 [2024-12-06 21:41:07.825433] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:18:47.552 [2024-12-06 21:41:07.825568] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000005860 00:18:47.552 [2024-12-06 21:41:07.825948] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x51600000a580 00:18:47.552 [2024-12-06 21:41:07.825963] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x51600000a580 00:18:47.552 [2024-12-06 21:41:07.826100] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:47.552 pt4 
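The delete/re-create cycle traced above is the superblock behavior under test: removing a passthru drops raid_bdev1 from online to configuring, and re-creating the same passthru lets the examine path ("raid superblock found on bdev ptN") re-claim it until the array comes back online. A sketch of one such round trip using the same RPCs, not a verbatim excerpt of the test:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
sock=/var/tmp/spdk-raid.sock
# Drop one member; the array should fall back to "configuring".
"$rpc" -s "$sock" bdev_passthru_delete pt2
"$rpc" -s "$sock" bdev_raid_get_bdevs all \
    | jq -r '.[] | select(.name == "raid_bdev1") | .state'
# Re-create the passthru with the same name and UUID; bdev_raid examines the
# superblock it finds on the base bdev and re-claims it automatically.
"$rpc" -s "$sock" bdev_passthru_create -b malloc2 -p pt2 \
    -u 00000000-0000-0000-0000-000000000002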
00:18:47.552 21:41:07 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:18:47.552 21:41:07 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:18:47.552 21:41:07 -- bdev/bdev_raid.sh@427 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:18:47.552 21:41:07 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:18:47.552 21:41:07 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:18:47.552 21:41:07 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:18:47.552 21:41:07 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:18:47.552 21:41:07 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:18:47.552 21:41:07 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:47.552 21:41:07 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:47.552 21:41:07 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:47.552 21:41:07 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:47.553 21:41:07 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:47.553 21:41:07 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:47.812 21:41:08 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:47.812 "name": "raid_bdev1", 00:18:47.812 "uuid": "1c95bf3e-6ed2-4ece-8884-8ac49bd7ab3e", 00:18:47.812 "strip_size_kb": 0, 00:18:47.812 "state": "online", 00:18:47.812 "raid_level": "raid1", 00:18:47.812 "superblock": true, 00:18:47.812 "num_base_bdevs": 4, 00:18:47.812 "num_base_bdevs_discovered": 4, 00:18:47.812 "num_base_bdevs_operational": 4, 00:18:47.812 "base_bdevs_list": [ 00:18:47.812 { 00:18:47.812 "name": "pt1", 00:18:47.812 "uuid": "dc0b31ae-f3b0-55fb-8e02-728f2f48796e", 00:18:47.812 "is_configured": true, 00:18:47.812 "data_offset": 2048, 00:18:47.812 "data_size": 63488 00:18:47.812 }, 00:18:47.812 { 00:18:47.812 "name": "pt2", 00:18:47.812 "uuid": "3af9e846-e71a-57d0-9f35-bcd4566a9da2", 00:18:47.812 "is_configured": true, 00:18:47.812 "data_offset": 2048, 00:18:47.812 "data_size": 63488 00:18:47.812 }, 00:18:47.812 { 00:18:47.812 "name": "pt3", 00:18:47.812 "uuid": "0666e232-eb62-5e50-9b1e-6262f5d6d95c", 00:18:47.812 "is_configured": true, 00:18:47.812 "data_offset": 2048, 00:18:47.812 "data_size": 63488 00:18:47.812 }, 00:18:47.812 { 00:18:47.812 "name": "pt4", 00:18:47.812 "uuid": "ebb1e7fe-968a-5b7b-b1f6-1cf99c03cfed", 00:18:47.812 "is_configured": true, 00:18:47.812 "data_offset": 2048, 00:18:47.812 "data_size": 63488 00:18:47.812 } 00:18:47.812 ] 00:18:47.812 }' 00:18:47.812 21:41:08 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:47.812 21:41:08 -- common/autotest_common.sh@10 -- # set +x 00:18:48.071 21:41:08 -- bdev/bdev_raid.sh@430 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:18:48.071 21:41:08 -- bdev/bdev_raid.sh@430 -- # jq -r '.[] | .uuid' 00:18:48.071 [2024-12-06 21:41:08.544707] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:48.071 21:41:08 -- bdev/bdev_raid.sh@430 -- # '[' 1c95bf3e-6ed2-4ece-8884-8ac49bd7ab3e '!=' 1c95bf3e-6ed2-4ece-8884-8ac49bd7ab3e ']' 00:18:48.071 21:41:08 -- bdev/bdev_raid.sh@434 -- # has_redundancy raid1 00:18:48.071 21:41:08 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:18:48.071 21:41:08 -- bdev/bdev_raid.sh@196 -- # return 0 00:18:48.071 21:41:08 -- bdev/bdev_raid.sh@436 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:18:48.329 [2024-12-06 21:41:08.800479] 
bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:18:48.329 21:41:08 -- bdev/bdev_raid.sh@439 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:18:48.329 21:41:08 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:18:48.329 21:41:08 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:18:48.329 21:41:08 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:18:48.329 21:41:08 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:18:48.329 21:41:08 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:18:48.329 21:41:08 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:48.329 21:41:08 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:48.329 21:41:08 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:48.329 21:41:08 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:48.329 21:41:08 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:48.329 21:41:08 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:48.587 21:41:09 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:48.587 "name": "raid_bdev1", 00:18:48.587 "uuid": "1c95bf3e-6ed2-4ece-8884-8ac49bd7ab3e", 00:18:48.587 "strip_size_kb": 0, 00:18:48.587 "state": "online", 00:18:48.587 "raid_level": "raid1", 00:18:48.587 "superblock": true, 00:18:48.587 "num_base_bdevs": 4, 00:18:48.587 "num_base_bdevs_discovered": 3, 00:18:48.587 "num_base_bdevs_operational": 3, 00:18:48.587 "base_bdevs_list": [ 00:18:48.587 { 00:18:48.587 "name": null, 00:18:48.587 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:48.587 "is_configured": false, 00:18:48.587 "data_offset": 2048, 00:18:48.587 "data_size": 63488 00:18:48.587 }, 00:18:48.587 { 00:18:48.587 "name": "pt2", 00:18:48.587 "uuid": "3af9e846-e71a-57d0-9f35-bcd4566a9da2", 00:18:48.587 "is_configured": true, 00:18:48.587 "data_offset": 2048, 00:18:48.587 "data_size": 63488 00:18:48.587 }, 00:18:48.587 { 00:18:48.587 "name": "pt3", 00:18:48.587 "uuid": "0666e232-eb62-5e50-9b1e-6262f5d6d95c", 00:18:48.587 "is_configured": true, 00:18:48.587 "data_offset": 2048, 00:18:48.587 "data_size": 63488 00:18:48.587 }, 00:18:48.587 { 00:18:48.587 "name": "pt4", 00:18:48.587 "uuid": "ebb1e7fe-968a-5b7b-b1f6-1cf99c03cfed", 00:18:48.587 "is_configured": true, 00:18:48.587 "data_offset": 2048, 00:18:48.587 "data_size": 63488 00:18:48.587 } 00:18:48.587 ] 00:18:48.587 }' 00:18:48.587 21:41:09 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:48.587 21:41:09 -- common/autotest_common.sh@10 -- # set +x 00:18:49.153 21:41:09 -- bdev/bdev_raid.sh@442 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:18:49.153 [2024-12-06 21:41:09.624667] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:49.153 [2024-12-06 21:41:09.624871] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:49.153 [2024-12-06 21:41:09.624961] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:49.153 [2024-12-06 21:41:09.625051] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:49.153 [2024-12-06 21:41:09.625066] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x51600000a580 name raid_bdev1, state offline 00:18:49.153 21:41:09 -- bdev/bdev_raid.sh@443 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_raid_get_bdevs all 00:18:49.153 21:41:09 -- bdev/bdev_raid.sh@443 -- # jq -r '.[]' 00:18:49.411 21:41:09 -- bdev/bdev_raid.sh@443 -- # raid_bdev= 00:18:49.411 21:41:09 -- bdev/bdev_raid.sh@444 -- # '[' -n '' ']' 00:18:49.411 21:41:09 -- bdev/bdev_raid.sh@449 -- # (( i = 1 )) 00:18:49.411 21:41:09 -- bdev/bdev_raid.sh@449 -- # (( i < num_base_bdevs )) 00:18:49.411 21:41:09 -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:18:49.669 21:41:10 -- bdev/bdev_raid.sh@449 -- # (( i++ )) 00:18:49.669 21:41:10 -- bdev/bdev_raid.sh@449 -- # (( i < num_base_bdevs )) 00:18:49.669 21:41:10 -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:18:49.928 21:41:10 -- bdev/bdev_raid.sh@449 -- # (( i++ )) 00:18:49.928 21:41:10 -- bdev/bdev_raid.sh@449 -- # (( i < num_base_bdevs )) 00:18:49.928 21:41:10 -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt4 00:18:50.186 21:41:10 -- bdev/bdev_raid.sh@449 -- # (( i++ )) 00:18:50.186 21:41:10 -- bdev/bdev_raid.sh@449 -- # (( i < num_base_bdevs )) 00:18:50.186 21:41:10 -- bdev/bdev_raid.sh@454 -- # (( i = 1 )) 00:18:50.186 21:41:10 -- bdev/bdev_raid.sh@454 -- # (( i < num_base_bdevs - 1 )) 00:18:50.186 21:41:10 -- bdev/bdev_raid.sh@455 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:18:50.445 [2024-12-06 21:41:10.745047] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:18:50.445 [2024-12-06 21:41:10.745171] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:50.445 [2024-12-06 21:41:10.745210] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x51600000b480 00:18:50.445 [2024-12-06 21:41:10.745223] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:50.445 [2024-12-06 21:41:10.748019] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:50.445 [2024-12-06 21:41:10.748214] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:18:50.445 [2024-12-06 21:41:10.748360] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:18:50.445 [2024-12-06 21:41:10.748412] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:50.445 pt2 00:18:50.445 21:41:10 -- bdev/bdev_raid.sh@458 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:18:50.445 21:41:10 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:18:50.445 21:41:10 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:18:50.445 21:41:10 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:18:50.445 21:41:10 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:18:50.445 21:41:10 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:18:50.445 21:41:10 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:50.445 21:41:10 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:50.445 21:41:10 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:50.445 21:41:10 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:50.445 21:41:10 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:50.445 21:41:10 -- bdev/bdev_raid.sh@127 
-- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:50.704 21:41:11 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:50.704 "name": "raid_bdev1", 00:18:50.704 "uuid": "1c95bf3e-6ed2-4ece-8884-8ac49bd7ab3e", 00:18:50.704 "strip_size_kb": 0, 00:18:50.704 "state": "configuring", 00:18:50.704 "raid_level": "raid1", 00:18:50.704 "superblock": true, 00:18:50.704 "num_base_bdevs": 4, 00:18:50.704 "num_base_bdevs_discovered": 1, 00:18:50.704 "num_base_bdevs_operational": 3, 00:18:50.704 "base_bdevs_list": [ 00:18:50.704 { 00:18:50.704 "name": null, 00:18:50.704 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:50.704 "is_configured": false, 00:18:50.704 "data_offset": 2048, 00:18:50.704 "data_size": 63488 00:18:50.704 }, 00:18:50.704 { 00:18:50.704 "name": "pt2", 00:18:50.704 "uuid": "3af9e846-e71a-57d0-9f35-bcd4566a9da2", 00:18:50.704 "is_configured": true, 00:18:50.704 "data_offset": 2048, 00:18:50.704 "data_size": 63488 00:18:50.704 }, 00:18:50.704 { 00:18:50.704 "name": null, 00:18:50.704 "uuid": "0666e232-eb62-5e50-9b1e-6262f5d6d95c", 00:18:50.704 "is_configured": false, 00:18:50.704 "data_offset": 2048, 00:18:50.704 "data_size": 63488 00:18:50.704 }, 00:18:50.704 { 00:18:50.704 "name": null, 00:18:50.704 "uuid": "ebb1e7fe-968a-5b7b-b1f6-1cf99c03cfed", 00:18:50.704 "is_configured": false, 00:18:50.704 "data_offset": 2048, 00:18:50.704 "data_size": 63488 00:18:50.704 } 00:18:50.704 ] 00:18:50.704 }' 00:18:50.704 21:41:11 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:50.704 21:41:11 -- common/autotest_common.sh@10 -- # set +x 00:18:50.962 21:41:11 -- bdev/bdev_raid.sh@454 -- # (( i++ )) 00:18:50.962 21:41:11 -- bdev/bdev_raid.sh@454 -- # (( i < num_base_bdevs - 1 )) 00:18:50.962 21:41:11 -- bdev/bdev_raid.sh@455 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:18:51.220 [2024-12-06 21:41:11.537219] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:18:51.220 [2024-12-06 21:41:11.537297] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:51.220 [2024-12-06 21:41:11.537333] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x51600000bd80 00:18:51.220 [2024-12-06 21:41:11.537348] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:51.220 [2024-12-06 21:41:11.537845] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:51.220 [2024-12-06 21:41:11.537881] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:18:51.220 [2024-12-06 21:41:11.537992] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3 00:18:51.220 [2024-12-06 21:41:11.538020] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:18:51.220 pt3 00:18:51.220 21:41:11 -- bdev/bdev_raid.sh@458 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:18:51.220 21:41:11 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:18:51.220 21:41:11 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:18:51.220 21:41:11 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:18:51.220 21:41:11 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:18:51.220 21:41:11 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:18:51.220 21:41:11 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:51.220 21:41:11 -- bdev/bdev_raid.sh@123 -- # local 
num_base_bdevs 00:18:51.220 21:41:11 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:51.220 21:41:11 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:51.220 21:41:11 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:51.220 21:41:11 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:51.478 21:41:11 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:51.478 "name": "raid_bdev1", 00:18:51.478 "uuid": "1c95bf3e-6ed2-4ece-8884-8ac49bd7ab3e", 00:18:51.478 "strip_size_kb": 0, 00:18:51.478 "state": "configuring", 00:18:51.478 "raid_level": "raid1", 00:18:51.478 "superblock": true, 00:18:51.478 "num_base_bdevs": 4, 00:18:51.478 "num_base_bdevs_discovered": 2, 00:18:51.478 "num_base_bdevs_operational": 3, 00:18:51.478 "base_bdevs_list": [ 00:18:51.478 { 00:18:51.478 "name": null, 00:18:51.478 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:51.478 "is_configured": false, 00:18:51.478 "data_offset": 2048, 00:18:51.478 "data_size": 63488 00:18:51.478 }, 00:18:51.478 { 00:18:51.478 "name": "pt2", 00:18:51.478 "uuid": "3af9e846-e71a-57d0-9f35-bcd4566a9da2", 00:18:51.478 "is_configured": true, 00:18:51.478 "data_offset": 2048, 00:18:51.478 "data_size": 63488 00:18:51.478 }, 00:18:51.478 { 00:18:51.478 "name": "pt3", 00:18:51.478 "uuid": "0666e232-eb62-5e50-9b1e-6262f5d6d95c", 00:18:51.478 "is_configured": true, 00:18:51.478 "data_offset": 2048, 00:18:51.478 "data_size": 63488 00:18:51.478 }, 00:18:51.478 { 00:18:51.478 "name": null, 00:18:51.478 "uuid": "ebb1e7fe-968a-5b7b-b1f6-1cf99c03cfed", 00:18:51.478 "is_configured": false, 00:18:51.478 "data_offset": 2048, 00:18:51.478 "data_size": 63488 00:18:51.478 } 00:18:51.478 ] 00:18:51.478 }' 00:18:51.478 21:41:11 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:51.478 21:41:11 -- common/autotest_common.sh@10 -- # set +x 00:18:51.737 21:41:12 -- bdev/bdev_raid.sh@454 -- # (( i++ )) 00:18:51.737 21:41:12 -- bdev/bdev_raid.sh@454 -- # (( i < num_base_bdevs - 1 )) 00:18:51.737 21:41:12 -- bdev/bdev_raid.sh@462 -- # i=3 00:18:51.737 21:41:12 -- bdev/bdev_raid.sh@463 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:18:51.995 [2024-12-06 21:41:12.293403] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:18:51.995 [2024-12-06 21:41:12.293512] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:51.995 [2024-12-06 21:41:12.293554] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x51600000c080 00:18:51.995 [2024-12-06 21:41:12.293569] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:51.995 [2024-12-06 21:41:12.294067] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:51.995 [2024-12-06 21:41:12.294098] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:18:51.995 [2024-12-06 21:41:12.294199] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt4 00:18:51.995 [2024-12-06 21:41:12.294253] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:18:51.995 [2024-12-06 21:41:12.294411] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x51600000ba80 00:18:51.995 [2024-12-06 21:41:12.294426] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 
00:18:51.995 [2024-12-06 21:41:12.294547] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000005930 00:18:51.995 [2024-12-06 21:41:12.294928] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x51600000ba80 00:18:51.995 [2024-12-06 21:41:12.294956] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x51600000ba80 00:18:51.995 [2024-12-06 21:41:12.295107] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:51.995 pt4 00:18:51.995 21:41:12 -- bdev/bdev_raid.sh@466 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:18:51.995 21:41:12 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:18:51.995 21:41:12 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:18:51.995 21:41:12 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:18:51.995 21:41:12 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:18:51.995 21:41:12 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:18:51.995 21:41:12 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:51.996 21:41:12 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:51.996 21:41:12 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:51.996 21:41:12 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:51.996 21:41:12 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:51.996 21:41:12 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:52.254 21:41:12 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:52.254 "name": "raid_bdev1", 00:18:52.254 "uuid": "1c95bf3e-6ed2-4ece-8884-8ac49bd7ab3e", 00:18:52.254 "strip_size_kb": 0, 00:18:52.254 "state": "online", 00:18:52.254 "raid_level": "raid1", 00:18:52.254 "superblock": true, 00:18:52.254 "num_base_bdevs": 4, 00:18:52.254 "num_base_bdevs_discovered": 3, 00:18:52.254 "num_base_bdevs_operational": 3, 00:18:52.254 "base_bdevs_list": [ 00:18:52.254 { 00:18:52.254 "name": null, 00:18:52.254 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:52.254 "is_configured": false, 00:18:52.254 "data_offset": 2048, 00:18:52.254 "data_size": 63488 00:18:52.254 }, 00:18:52.254 { 00:18:52.254 "name": "pt2", 00:18:52.254 "uuid": "3af9e846-e71a-57d0-9f35-bcd4566a9da2", 00:18:52.254 "is_configured": true, 00:18:52.254 "data_offset": 2048, 00:18:52.254 "data_size": 63488 00:18:52.254 }, 00:18:52.254 { 00:18:52.254 "name": "pt3", 00:18:52.254 "uuid": "0666e232-eb62-5e50-9b1e-6262f5d6d95c", 00:18:52.254 "is_configured": true, 00:18:52.254 "data_offset": 2048, 00:18:52.254 "data_size": 63488 00:18:52.254 }, 00:18:52.254 { 00:18:52.254 "name": "pt4", 00:18:52.254 "uuid": "ebb1e7fe-968a-5b7b-b1f6-1cf99c03cfed", 00:18:52.254 "is_configured": true, 00:18:52.254 "data_offset": 2048, 00:18:52.254 "data_size": 63488 00:18:52.254 } 00:18:52.254 ] 00:18:52.254 }' 00:18:52.254 21:41:12 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:52.254 21:41:12 -- common/autotest_common.sh@10 -- # set +x 00:18:52.513 21:41:12 -- bdev/bdev_raid.sh@468 -- # '[' 4 -gt 2 ']' 00:18:52.513 21:41:12 -- bdev/bdev_raid.sh@470 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:18:52.771 [2024-12-06 21:41:13.053582] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:52.771 [2024-12-06 21:41:13.053620] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to 
offline 00:18:52.771 [2024-12-06 21:41:13.053705] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:52.771 [2024-12-06 21:41:13.053791] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:52.771 [2024-12-06 21:41:13.053812] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x51600000ba80 name raid_bdev1, state offline 00:18:52.771 21:41:13 -- bdev/bdev_raid.sh@471 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:52.771 21:41:13 -- bdev/bdev_raid.sh@471 -- # jq -r '.[]' 00:18:53.030 21:41:13 -- bdev/bdev_raid.sh@471 -- # raid_bdev= 00:18:53.030 21:41:13 -- bdev/bdev_raid.sh@472 -- # '[' -n '' ']' 00:18:53.030 21:41:13 -- bdev/bdev_raid.sh@478 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:18:53.288 [2024-12-06 21:41:13.533693] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:18:53.288 [2024-12-06 21:41:13.533916] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:53.288 [2024-12-06 21:41:13.534067] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x51600000c380 00:18:53.288 [2024-12-06 21:41:13.534098] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:53.288 [2024-12-06 21:41:13.536598] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:53.288 [2024-12-06 21:41:13.536647] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:18:53.288 [2024-12-06 21:41:13.536750] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1 00:18:53.288 [2024-12-06 21:41:13.536817] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:18:53.288 pt1 00:18:53.288 21:41:13 -- bdev/bdev_raid.sh@481 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 4 00:18:53.288 21:41:13 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:18:53.288 21:41:13 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:18:53.288 21:41:13 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:18:53.288 21:41:13 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:18:53.288 21:41:13 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:18:53.288 21:41:13 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:53.288 21:41:13 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:53.288 21:41:13 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:53.288 21:41:13 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:53.288 21:41:13 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:53.288 21:41:13 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:53.547 21:41:13 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:53.547 "name": "raid_bdev1", 00:18:53.547 "uuid": "1c95bf3e-6ed2-4ece-8884-8ac49bd7ab3e", 00:18:53.547 "strip_size_kb": 0, 00:18:53.547 "state": "configuring", 00:18:53.547 "raid_level": "raid1", 00:18:53.547 "superblock": true, 00:18:53.547 "num_base_bdevs": 4, 00:18:53.547 "num_base_bdevs_discovered": 1, 00:18:53.547 "num_base_bdevs_operational": 4, 00:18:53.547 "base_bdevs_list": [ 00:18:53.547 { 00:18:53.547 "name": "pt1", 00:18:53.547 "uuid": 
"dc0b31ae-f3b0-55fb-8e02-728f2f48796e", 00:18:53.547 "is_configured": true, 00:18:53.547 "data_offset": 2048, 00:18:53.547 "data_size": 63488 00:18:53.547 }, 00:18:53.547 { 00:18:53.547 "name": null, 00:18:53.547 "uuid": "3af9e846-e71a-57d0-9f35-bcd4566a9da2", 00:18:53.547 "is_configured": false, 00:18:53.547 "data_offset": 2048, 00:18:53.547 "data_size": 63488 00:18:53.547 }, 00:18:53.547 { 00:18:53.547 "name": null, 00:18:53.547 "uuid": "0666e232-eb62-5e50-9b1e-6262f5d6d95c", 00:18:53.547 "is_configured": false, 00:18:53.547 "data_offset": 2048, 00:18:53.547 "data_size": 63488 00:18:53.547 }, 00:18:53.547 { 00:18:53.547 "name": null, 00:18:53.547 "uuid": "ebb1e7fe-968a-5b7b-b1f6-1cf99c03cfed", 00:18:53.547 "is_configured": false, 00:18:53.547 "data_offset": 2048, 00:18:53.547 "data_size": 63488 00:18:53.547 } 00:18:53.547 ] 00:18:53.547 }' 00:18:53.547 21:41:13 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:53.547 21:41:13 -- common/autotest_common.sh@10 -- # set +x 00:18:53.805 21:41:14 -- bdev/bdev_raid.sh@484 -- # (( i = 1 )) 00:18:53.805 21:41:14 -- bdev/bdev_raid.sh@484 -- # (( i < num_base_bdevs )) 00:18:53.805 21:41:14 -- bdev/bdev_raid.sh@485 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:18:54.063 21:41:14 -- bdev/bdev_raid.sh@484 -- # (( i++ )) 00:18:54.063 21:41:14 -- bdev/bdev_raid.sh@484 -- # (( i < num_base_bdevs )) 00:18:54.063 21:41:14 -- bdev/bdev_raid.sh@485 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:18:54.321 21:41:14 -- bdev/bdev_raid.sh@484 -- # (( i++ )) 00:18:54.321 21:41:14 -- bdev/bdev_raid.sh@484 -- # (( i < num_base_bdevs )) 00:18:54.321 21:41:14 -- bdev/bdev_raid.sh@485 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt4 00:18:54.321 21:41:14 -- bdev/bdev_raid.sh@484 -- # (( i++ )) 00:18:54.321 21:41:14 -- bdev/bdev_raid.sh@484 -- # (( i < num_base_bdevs )) 00:18:54.321 21:41:14 -- bdev/bdev_raid.sh@489 -- # i=3 00:18:54.321 21:41:14 -- bdev/bdev_raid.sh@490 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:18:54.580 [2024-12-06 21:41:14.990069] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:18:54.580 [2024-12-06 21:41:14.990321] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:54.580 [2024-12-06 21:41:14.990369] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x51600000cc80 00:18:54.580 [2024-12-06 21:41:14.990391] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:54.580 [2024-12-06 21:41:14.990900] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:54.580 [2024-12-06 21:41:14.990937] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:18:54.580 [2024-12-06 21:41:14.991040] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt4 00:18:54.580 [2024-12-06 21:41:14.991066] bdev_raid.c:3237:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt4 (4) greater than existing raid bdev raid_bdev1 (2) 00:18:54.580 [2024-12-06 21:41:14.991079] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:54.580 [2024-12-06 21:41:14.991107] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x51600000c980 name raid_bdev1, state configuring 
00:18:54.580 [2024-12-06 21:41:14.991181] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:18:54.580 pt4 00:18:54.580 21:41:15 -- bdev/bdev_raid.sh@494 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:18:54.580 21:41:15 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:18:54.580 21:41:15 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:18:54.580 21:41:15 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:18:54.580 21:41:15 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:18:54.580 21:41:15 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:18:54.580 21:41:15 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:54.580 21:41:15 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:54.580 21:41:15 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:54.580 21:41:15 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:54.580 21:41:15 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:54.580 21:41:15 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:54.837 21:41:15 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:54.837 "name": "raid_bdev1", 00:18:54.837 "uuid": "1c95bf3e-6ed2-4ece-8884-8ac49bd7ab3e", 00:18:54.837 "strip_size_kb": 0, 00:18:54.837 "state": "configuring", 00:18:54.837 "raid_level": "raid1", 00:18:54.837 "superblock": true, 00:18:54.837 "num_base_bdevs": 4, 00:18:54.837 "num_base_bdevs_discovered": 1, 00:18:54.837 "num_base_bdevs_operational": 3, 00:18:54.837 "base_bdevs_list": [ 00:18:54.837 { 00:18:54.837 "name": null, 00:18:54.837 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:54.837 "is_configured": false, 00:18:54.837 "data_offset": 2048, 00:18:54.837 "data_size": 63488 00:18:54.837 }, 00:18:54.837 { 00:18:54.837 "name": null, 00:18:54.837 "uuid": "3af9e846-e71a-57d0-9f35-bcd4566a9da2", 00:18:54.837 "is_configured": false, 00:18:54.837 "data_offset": 2048, 00:18:54.837 "data_size": 63488 00:18:54.837 }, 00:18:54.837 { 00:18:54.837 "name": null, 00:18:54.837 "uuid": "0666e232-eb62-5e50-9b1e-6262f5d6d95c", 00:18:54.837 "is_configured": false, 00:18:54.837 "data_offset": 2048, 00:18:54.837 "data_size": 63488 00:18:54.837 }, 00:18:54.837 { 00:18:54.837 "name": "pt4", 00:18:54.837 "uuid": "ebb1e7fe-968a-5b7b-b1f6-1cf99c03cfed", 00:18:54.837 "is_configured": true, 00:18:54.837 "data_offset": 2048, 00:18:54.837 "data_size": 63488 00:18:54.837 } 00:18:54.837 ] 00:18:54.837 }' 00:18:54.837 21:41:15 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:54.837 21:41:15 -- common/autotest_common.sh@10 -- # set +x 00:18:55.094 21:41:15 -- bdev/bdev_raid.sh@497 -- # (( i = 1 )) 00:18:55.094 21:41:15 -- bdev/bdev_raid.sh@497 -- # (( i < num_base_bdevs - 1 )) 00:18:55.094 21:41:15 -- bdev/bdev_raid.sh@498 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:18:55.353 [2024-12-06 21:41:15.722208] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:18:55.353 [2024-12-06 21:41:15.722315] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:55.353 [2024-12-06 21:41:15.722356] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x51600000d280 00:18:55.353 [2024-12-06 21:41:15.722372] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:55.353 [2024-12-06 
21:41:15.722880] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:55.353 [2024-12-06 21:41:15.722915] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:18:55.353 [2024-12-06 21:41:15.723018] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:18:55.353 [2024-12-06 21:41:15.723045] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:55.353 pt2 00:18:55.353 21:41:15 -- bdev/bdev_raid.sh@497 -- # (( i++ )) 00:18:55.353 21:41:15 -- bdev/bdev_raid.sh@497 -- # (( i < num_base_bdevs - 1 )) 00:18:55.353 21:41:15 -- bdev/bdev_raid.sh@498 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:18:55.610 [2024-12-06 21:41:15.935873] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:18:55.610 [2024-12-06 21:41:15.936122] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:55.610 [2024-12-06 21:41:15.936179] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x51600000d580 00:18:55.610 [2024-12-06 21:41:15.936199] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:55.610 [2024-12-06 21:41:15.936825] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:55.610 [2024-12-06 21:41:15.936874] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:18:55.610 [2024-12-06 21:41:15.937008] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3 00:18:55.610 [2024-12-06 21:41:15.937053] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:18:55.610 [2024-12-06 21:41:15.937234] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x51600000cf80 00:18:55.610 [2024-12-06 21:41:15.937253] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:18:55.610 [2024-12-06 21:41:15.937387] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000005a00 00:18:55.610 [2024-12-06 21:41:15.937829] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x51600000cf80 00:18:55.610 [2024-12-06 21:41:15.937863] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x51600000cf80 00:18:55.610 [2024-12-06 21:41:15.938050] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:55.610 pt3 00:18:55.610 21:41:15 -- bdev/bdev_raid.sh@497 -- # (( i++ )) 00:18:55.610 21:41:15 -- bdev/bdev_raid.sh@497 -- # (( i < num_base_bdevs - 1 )) 00:18:55.610 21:41:15 -- bdev/bdev_raid.sh@502 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:18:55.610 21:41:15 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:18:55.610 21:41:15 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:18:55.610 21:41:15 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:18:55.610 21:41:15 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:18:55.610 21:41:15 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:18:55.610 21:41:15 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:55.610 21:41:15 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:55.610 21:41:15 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:55.610 21:41:15 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:55.610 21:41:15 
-- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:55.610 21:41:15 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:55.868 21:41:16 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:55.868 "name": "raid_bdev1", 00:18:55.868 "uuid": "1c95bf3e-6ed2-4ece-8884-8ac49bd7ab3e", 00:18:55.868 "strip_size_kb": 0, 00:18:55.868 "state": "online", 00:18:55.868 "raid_level": "raid1", 00:18:55.868 "superblock": true, 00:18:55.868 "num_base_bdevs": 4, 00:18:55.868 "num_base_bdevs_discovered": 3, 00:18:55.868 "num_base_bdevs_operational": 3, 00:18:55.868 "base_bdevs_list": [ 00:18:55.868 { 00:18:55.868 "name": null, 00:18:55.868 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:55.868 "is_configured": false, 00:18:55.868 "data_offset": 2048, 00:18:55.868 "data_size": 63488 00:18:55.868 }, 00:18:55.868 { 00:18:55.868 "name": "pt2", 00:18:55.868 "uuid": "3af9e846-e71a-57d0-9f35-bcd4566a9da2", 00:18:55.868 "is_configured": true, 00:18:55.868 "data_offset": 2048, 00:18:55.868 "data_size": 63488 00:18:55.868 }, 00:18:55.868 { 00:18:55.868 "name": "pt3", 00:18:55.868 "uuid": "0666e232-eb62-5e50-9b1e-6262f5d6d95c", 00:18:55.868 "is_configured": true, 00:18:55.868 "data_offset": 2048, 00:18:55.868 "data_size": 63488 00:18:55.868 }, 00:18:55.868 { 00:18:55.868 "name": "pt4", 00:18:55.868 "uuid": "ebb1e7fe-968a-5b7b-b1f6-1cf99c03cfed", 00:18:55.868 "is_configured": true, 00:18:55.868 "data_offset": 2048, 00:18:55.868 "data_size": 63488 00:18:55.868 } 00:18:55.868 ] 00:18:55.868 }' 00:18:55.869 21:41:16 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:55.869 21:41:16 -- common/autotest_common.sh@10 -- # set +x 00:18:56.127 21:41:16 -- bdev/bdev_raid.sh@506 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:18:56.127 21:41:16 -- bdev/bdev_raid.sh@506 -- # jq -r '.[] | .uuid' 00:18:56.386 [2024-12-06 21:41:16.688316] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:56.386 21:41:16 -- bdev/bdev_raid.sh@506 -- # '[' 1c95bf3e-6ed2-4ece-8884-8ac49bd7ab3e '!=' 1c95bf3e-6ed2-4ece-8884-8ac49bd7ab3e ']' 00:18:56.386 21:41:16 -- bdev/bdev_raid.sh@511 -- # killprocess 77377 00:18:56.386 21:41:16 -- common/autotest_common.sh@936 -- # '[' -z 77377 ']' 00:18:56.386 21:41:16 -- common/autotest_common.sh@940 -- # kill -0 77377 00:18:56.386 21:41:16 -- common/autotest_common.sh@941 -- # uname 00:18:56.386 21:41:16 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:56.386 21:41:16 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 77377 00:18:56.386 killing process with pid 77377 00:18:56.386 21:41:16 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:18:56.386 21:41:16 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:18:56.386 21:41:16 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 77377' 00:18:56.386 21:41:16 -- common/autotest_common.sh@955 -- # kill 77377 00:18:56.386 21:41:16 -- common/autotest_common.sh@960 -- # wait 77377 00:18:56.386 [2024-12-06 21:41:16.736273] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:56.386 [2024-12-06 21:41:16.736371] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:56.386 [2024-12-06 21:41:16.736756] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:56.386 [2024-12-06 21:41:16.736791] 
bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x51600000cf80 name raid_bdev1, state offline 00:18:56.644 [2024-12-06 21:41:17.061093] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:58.021 21:41:18 -- bdev/bdev_raid.sh@513 -- # return 0 00:18:58.021 00:18:58.021 real 0m18.650s 00:18:58.021 user 0m32.331s 00:18:58.021 sys 0m2.666s 00:18:58.021 ************************************ 00:18:58.021 END TEST raid_superblock_test 00:18:58.021 ************************************ 00:18:58.021 21:41:18 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:18:58.021 21:41:18 -- common/autotest_common.sh@10 -- # set +x 00:18:58.021 21:41:18 -- bdev/bdev_raid.sh@733 -- # '[' true = true ']' 00:18:58.021 21:41:18 -- bdev/bdev_raid.sh@734 -- # for n in 2 4 00:18:58.021 21:41:18 -- bdev/bdev_raid.sh@735 -- # run_test raid_rebuild_test raid_rebuild_test raid1 2 false false 00:18:58.021 21:41:18 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:18:58.021 21:41:18 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:18:58.021 21:41:18 -- common/autotest_common.sh@10 -- # set +x 00:18:58.021 ************************************ 00:18:58.021 START TEST raid_rebuild_test 00:18:58.021 ************************************ 00:18:58.021 21:41:18 -- common/autotest_common.sh@1114 -- # raid_rebuild_test raid1 2 false false 00:18:58.021 21:41:18 -- bdev/bdev_raid.sh@517 -- # local raid_level=raid1 00:18:58.021 21:41:18 -- bdev/bdev_raid.sh@518 -- # local num_base_bdevs=2 00:18:58.021 21:41:18 -- bdev/bdev_raid.sh@519 -- # local superblock=false 00:18:58.021 21:41:18 -- bdev/bdev_raid.sh@520 -- # local background_io=false 00:18:58.021 21:41:18 -- bdev/bdev_raid.sh@521 -- # (( i = 1 )) 00:18:58.021 21:41:18 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:18:58.021 21:41:18 -- bdev/bdev_raid.sh@523 -- # echo BaseBdev1 00:18:58.021 21:41:18 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:18:58.021 21:41:18 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:18:58.021 21:41:18 -- bdev/bdev_raid.sh@523 -- # echo BaseBdev2 00:18:58.021 21:41:18 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:18:58.021 21:41:18 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:18:58.021 21:41:18 -- bdev/bdev_raid.sh@521 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:18:58.021 21:41:18 -- bdev/bdev_raid.sh@521 -- # local base_bdevs 00:18:58.021 21:41:18 -- bdev/bdev_raid.sh@522 -- # local raid_bdev_name=raid_bdev1 00:18:58.021 21:41:18 -- bdev/bdev_raid.sh@523 -- # local strip_size 00:18:58.021 21:41:18 -- bdev/bdev_raid.sh@524 -- # local create_arg 00:18:58.021 21:41:18 -- bdev/bdev_raid.sh@525 -- # local raid_bdev_size 00:18:58.021 21:41:18 -- bdev/bdev_raid.sh@526 -- # local data_offset 00:18:58.021 21:41:18 -- bdev/bdev_raid.sh@528 -- # '[' raid1 '!=' raid1 ']' 00:18:58.021 21:41:18 -- bdev/bdev_raid.sh@536 -- # strip_size=0 00:18:58.021 21:41:18 -- bdev/bdev_raid.sh@539 -- # '[' false = true ']' 00:18:58.021 21:41:18 -- bdev/bdev_raid.sh@544 -- # raid_pid=77985 00:18:58.021 21:41:18 -- bdev/bdev_raid.sh@543 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:18:58.021 21:41:18 -- bdev/bdev_raid.sh@545 -- # waitforlisten 77985 /var/tmp/spdk-raid.sock 00:18:58.021 21:41:18 -- common/autotest_common.sh@829 -- # '[' -z 77985 ']' 00:18:58.021 21:41:18 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:18:58.021 21:41:18 -- common/autotest_common.sh@834 
-- # local max_retries=100 00:18:58.021 21:41:18 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:18:58.021 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:18:58.021 21:41:18 -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:58.021 21:41:18 -- common/autotest_common.sh@10 -- # set +x 00:18:58.021 [2024-12-06 21:41:18.326305] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:18:58.021 [2024-12-06 21:41:18.326489] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77985 ] 00:18:58.021 I/O size of 3145728 is greater than zero copy threshold (65536). 00:18:58.021 Zero copy mechanism will not be used. 00:18:58.021 [2024-12-06 21:41:18.494687] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:58.280 [2024-12-06 21:41:18.684373] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:58.538 [2024-12-06 21:41:18.865276] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:58.796 21:41:19 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:58.796 21:41:19 -- common/autotest_common.sh@862 -- # return 0 00:18:58.796 21:41:19 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:18:58.796 21:41:19 -- bdev/bdev_raid.sh@549 -- # '[' false = true ']' 00:18:58.796 21:41:19 -- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:18:59.055 BaseBdev1 00:18:59.055 21:41:19 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:18:59.055 21:41:19 -- bdev/bdev_raid.sh@549 -- # '[' false = true ']' 00:18:59.055 21:41:19 -- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:18:59.313 BaseBdev2 00:18:59.313 21:41:19 -- bdev/bdev_raid.sh@558 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:18:59.572 spare_malloc 00:18:59.572 21:41:19 -- bdev/bdev_raid.sh@559 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:18:59.831 spare_delay 00:18:59.831 21:41:20 -- bdev/bdev_raid.sh@560 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:19:00.090 [2024-12-06 21:41:20.397560] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:19:00.090 [2024-12-06 21:41:20.397648] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:00.090 [2024-12-06 21:41:20.397683] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000007b80 00:19:00.090 [2024-12-06 21:41:20.397701] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:00.090 [2024-12-06 21:41:20.400293] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:00.090 [2024-12-06 21:41:20.400345] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:19:00.090 spare 00:19:00.090 21:41:20 -- bdev/bdev_raid.sh@563 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 00:19:00.349 [2024-12-06 21:41:20.605681] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:00.349 [2024-12-06 21:41:20.607797] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:00.349 [2024-12-06 21:41:20.607901] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x516000008180 00:19:00.349 [2024-12-06 21:41:20.607924] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:19:00.349 [2024-12-06 21:41:20.608077] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d0000056c0 00:19:00.349 [2024-12-06 21:41:20.608511] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x516000008180 00:19:00.349 [2024-12-06 21:41:20.608529] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x516000008180 00:19:00.349 [2024-12-06 21:41:20.608724] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:00.349 21:41:20 -- bdev/bdev_raid.sh@564 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:19:00.349 21:41:20 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:19:00.349 21:41:20 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:19:00.349 21:41:20 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:19:00.349 21:41:20 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:19:00.349 21:41:20 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:19:00.349 21:41:20 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:00.349 21:41:20 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:00.349 21:41:20 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:00.349 21:41:20 -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:00.349 21:41:20 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:00.349 21:41:20 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:00.607 21:41:20 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:00.607 "name": "raid_bdev1", 00:19:00.607 "uuid": "b2579854-ff72-4b1a-92b9-b52609e9f8f6", 00:19:00.607 "strip_size_kb": 0, 00:19:00.607 "state": "online", 00:19:00.607 "raid_level": "raid1", 00:19:00.607 "superblock": false, 00:19:00.607 "num_base_bdevs": 2, 00:19:00.607 "num_base_bdevs_discovered": 2, 00:19:00.607 "num_base_bdevs_operational": 2, 00:19:00.607 "base_bdevs_list": [ 00:19:00.607 { 00:19:00.607 "name": "BaseBdev1", 00:19:00.607 "uuid": "ae5f4079-f15b-4f63-a830-84da6845cbae", 00:19:00.607 "is_configured": true, 00:19:00.607 "data_offset": 0, 00:19:00.607 "data_size": 65536 00:19:00.607 }, 00:19:00.607 { 00:19:00.607 "name": "BaseBdev2", 00:19:00.607 "uuid": "fff25757-416e-4bbe-b286-b43191ba3bb3", 00:19:00.607 "is_configured": true, 00:19:00.607 "data_offset": 0, 00:19:00.607 "data_size": 65536 00:19:00.607 } 00:19:00.607 ] 00:19:00.607 }' 00:19:00.607 21:41:20 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:00.607 21:41:20 -- common/autotest_common.sh@10 -- # set +x 00:19:00.867 21:41:21 -- bdev/bdev_raid.sh@567 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:19:00.867 21:41:21 -- bdev/bdev_raid.sh@567 -- # jq -r '.[].num_blocks' 00:19:01.126 [2024-12-06 21:41:21.378044] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: 
raid_bdev_dump_config_json 00:19:01.126 21:41:21 -- bdev/bdev_raid.sh@567 -- # raid_bdev_size=65536 00:19:01.126 21:41:21 -- bdev/bdev_raid.sh@570 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:01.126 21:41:21 -- bdev/bdev_raid.sh@570 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:19:01.126 21:41:21 -- bdev/bdev_raid.sh@570 -- # data_offset=0 00:19:01.126 21:41:21 -- bdev/bdev_raid.sh@572 -- # '[' false = true ']' 00:19:01.126 21:41:21 -- bdev/bdev_raid.sh@576 -- # local write_unit_size 00:19:01.126 21:41:21 -- bdev/bdev_raid.sh@579 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid_bdev1 /dev/nbd0 00:19:01.126 21:41:21 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:19:01.126 21:41:21 -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:19:01.126 21:41:21 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:19:01.126 21:41:21 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:19:01.126 21:41:21 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:19:01.126 21:41:21 -- bdev/nbd_common.sh@12 -- # local i 00:19:01.126 21:41:21 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:19:01.126 21:41:21 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:19:01.126 21:41:21 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:19:01.386 [2024-12-06 21:41:21.798037] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000005860 00:19:01.386 /dev/nbd0 00:19:01.386 21:41:21 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:19:01.386 21:41:21 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:19:01.386 21:41:21 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:19:01.386 21:41:21 -- common/autotest_common.sh@867 -- # local i 00:19:01.386 21:41:21 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:19:01.386 21:41:21 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:19:01.386 21:41:21 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:19:01.386 21:41:21 -- common/autotest_common.sh@871 -- # break 00:19:01.386 21:41:21 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:19:01.386 21:41:21 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:19:01.386 21:41:21 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:01.386 1+0 records in 00:19:01.386 1+0 records out 00:19:01.386 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00023821 s, 17.2 MB/s 00:19:01.386 21:41:21 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:01.386 21:41:21 -- common/autotest_common.sh@884 -- # size=4096 00:19:01.386 21:41:21 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:01.386 21:41:21 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:19:01.386 21:41:21 -- common/autotest_common.sh@887 -- # return 0 00:19:01.386 21:41:21 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:19:01.386 21:41:21 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:19:01.386 21:41:21 -- bdev/bdev_raid.sh@580 -- # '[' raid1 = raid5f ']' 00:19:01.386 21:41:21 -- bdev/bdev_raid.sh@584 -- # write_unit_size=1 00:19:01.386 21:41:21 -- bdev/bdev_raid.sh@586 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=65536 oflag=direct 00:19:07.971 65536+0 records in 00:19:07.971 65536+0 records out 00:19:07.971 33554432 bytes (34 MB, 32 MiB) copied, 5.43124 s, 6.2 MB/s 00:19:07.971 
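The dd figures above are internally consistent: 65536 records x 512 bytes = 33,554,432 bytes = 32 MiB, i.e. the full raid1 device (blockcnt 65536, blocklen 512 per the configure log earlier), written in 5.43 s for roughly 6.2 MB/s. A minimal sketch of this export-and-fill step, assuming raid_bdev1 already exists on the same socket:

# Export the raid bdev as a kernel NBD device, then fill it with random data
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid_bdev1 /dev/nbd0
dd if=/dev/urandom of=/dev/nbd0 bs=512 count=65536 oflag=direct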
21:41:27 -- bdev/bdev_raid.sh@587 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:19:07.971 21:41:27 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:19:07.971 21:41:27 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:19:07.971 21:41:27 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:19:07.971 21:41:27 -- bdev/nbd_common.sh@51 -- # local i 00:19:07.971 21:41:27 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:07.971 21:41:27 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:19:07.971 21:41:27 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:19:07.971 [2024-12-06 21:41:27.526944] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:07.971 21:41:27 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:19:07.971 21:41:27 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:19:07.971 21:41:27 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:07.971 21:41:27 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:07.971 21:41:27 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:19:07.971 21:41:27 -- bdev/nbd_common.sh@41 -- # break 00:19:07.971 21:41:27 -- bdev/nbd_common.sh@45 -- # return 0 00:19:07.971 21:41:27 -- bdev/bdev_raid.sh@591 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:19:07.971 [2024-12-06 21:41:27.764045] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:19:07.971 21:41:27 -- bdev/bdev_raid.sh@594 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:07.971 21:41:27 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:19:07.971 21:41:27 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:19:07.971 21:41:27 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:19:07.971 21:41:27 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:19:07.971 21:41:27 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:19:07.971 21:41:27 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:07.971 21:41:27 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:07.971 21:41:27 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:07.971 21:41:27 -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:07.971 21:41:27 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:07.971 21:41:27 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:07.971 21:41:28 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:07.971 "name": "raid_bdev1", 00:19:07.971 "uuid": "b2579854-ff72-4b1a-92b9-b52609e9f8f6", 00:19:07.971 "strip_size_kb": 0, 00:19:07.971 "state": "online", 00:19:07.971 "raid_level": "raid1", 00:19:07.971 "superblock": false, 00:19:07.971 "num_base_bdevs": 2, 00:19:07.971 "num_base_bdevs_discovered": 1, 00:19:07.971 "num_base_bdevs_operational": 1, 00:19:07.971 "base_bdevs_list": [ 00:19:07.971 { 00:19:07.971 "name": null, 00:19:07.971 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:07.972 "is_configured": false, 00:19:07.972 "data_offset": 0, 00:19:07.972 "data_size": 65536 00:19:07.972 }, 00:19:07.972 { 00:19:07.972 "name": "BaseBdev2", 00:19:07.972 "uuid": "fff25757-416e-4bbe-b286-b43191ba3bb3", 00:19:07.972 "is_configured": true, 00:19:07.972 "data_offset": 0, 00:19:07.972 "data_size": 65536 00:19:07.972 } 00:19:07.972 ] 00:19:07.972 }' 00:19:07.972 21:41:28 -- bdev/bdev_raid.sh@129 -- 
# xtrace_disable 00:19:07.972 21:41:28 -- common/autotest_common.sh@10 -- # set +x 00:19:07.972 21:41:28 -- bdev/bdev_raid.sh@597 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:19:08.230 [2024-12-06 21:41:28.576296] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:19:08.230 [2024-12-06 21:41:28.576351] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:08.230 [2024-12-06 21:41:28.589786] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000d09480 00:19:08.230 [2024-12-06 21:41:28.591805] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:19:08.230 21:41:28 -- bdev/bdev_raid.sh@598 -- # sleep 1 00:19:09.164 21:41:29 -- bdev/bdev_raid.sh@601 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:09.164 21:41:29 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:19:09.164 21:41:29 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:19:09.164 21:41:29 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:19:09.164 21:41:29 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:19:09.164 21:41:29 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:09.164 21:41:29 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:09.423 21:41:29 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:19:09.424 "name": "raid_bdev1", 00:19:09.424 "uuid": "b2579854-ff72-4b1a-92b9-b52609e9f8f6", 00:19:09.424 "strip_size_kb": 0, 00:19:09.424 "state": "online", 00:19:09.424 "raid_level": "raid1", 00:19:09.424 "superblock": false, 00:19:09.424 "num_base_bdevs": 2, 00:19:09.424 "num_base_bdevs_discovered": 2, 00:19:09.424 "num_base_bdevs_operational": 2, 00:19:09.424 "process": { 00:19:09.424 "type": "rebuild", 00:19:09.424 "target": "spare", 00:19:09.424 "progress": { 00:19:09.424 "blocks": 24576, 00:19:09.424 "percent": 37 00:19:09.424 } 00:19:09.424 }, 00:19:09.424 "base_bdevs_list": [ 00:19:09.424 { 00:19:09.424 "name": "spare", 00:19:09.424 "uuid": "61df8573-abbb-5d49-a115-ab24c6e0603f", 00:19:09.424 "is_configured": true, 00:19:09.424 "data_offset": 0, 00:19:09.424 "data_size": 65536 00:19:09.424 }, 00:19:09.424 { 00:19:09.424 "name": "BaseBdev2", 00:19:09.424 "uuid": "fff25757-416e-4bbe-b286-b43191ba3bb3", 00:19:09.424 "is_configured": true, 00:19:09.424 "data_offset": 0, 00:19:09.424 "data_size": 65536 00:19:09.424 } 00:19:09.424 ] 00:19:09.424 }' 00:19:09.424 21:41:29 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:19:09.424 21:41:29 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:09.424 21:41:29 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:19:09.424 21:41:29 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:19:09.424 21:41:29 -- bdev/bdev_raid.sh@604 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:19:09.683 [2024-12-06 21:41:30.081928] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:09.683 [2024-12-06 21:41:30.099038] bdev_raid.c:2294:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:19:09.683 [2024-12-06 21:41:30.099123] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:09.683 21:41:30 -- bdev/bdev_raid.sh@607 -- # verify_raid_bdev_state raid_bdev1 
online raid1 0 1 00:19:09.683 21:41:30 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:19:09.683 21:41:30 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:19:09.683 21:41:30 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:19:09.683 21:41:30 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:19:09.683 21:41:30 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:19:09.683 21:41:30 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:09.683 21:41:30 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:09.683 21:41:30 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:09.683 21:41:30 -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:09.683 21:41:30 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:09.683 21:41:30 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:09.941 21:41:30 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:09.941 "name": "raid_bdev1", 00:19:09.941 "uuid": "b2579854-ff72-4b1a-92b9-b52609e9f8f6", 00:19:09.941 "strip_size_kb": 0, 00:19:09.941 "state": "online", 00:19:09.941 "raid_level": "raid1", 00:19:09.941 "superblock": false, 00:19:09.941 "num_base_bdevs": 2, 00:19:09.941 "num_base_bdevs_discovered": 1, 00:19:09.941 "num_base_bdevs_operational": 1, 00:19:09.941 "base_bdevs_list": [ 00:19:09.941 { 00:19:09.941 "name": null, 00:19:09.941 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:09.941 "is_configured": false, 00:19:09.941 "data_offset": 0, 00:19:09.942 "data_size": 65536 00:19:09.942 }, 00:19:09.942 { 00:19:09.942 "name": "BaseBdev2", 00:19:09.942 "uuid": "fff25757-416e-4bbe-b286-b43191ba3bb3", 00:19:09.942 "is_configured": true, 00:19:09.942 "data_offset": 0, 00:19:09.942 "data_size": 65536 00:19:09.942 } 00:19:09.942 ] 00:19:09.942 }' 00:19:09.942 21:41:30 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:09.942 21:41:30 -- common/autotest_common.sh@10 -- # set +x 00:19:10.201 21:41:30 -- bdev/bdev_raid.sh@610 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:10.201 21:41:30 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:19:10.201 21:41:30 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:19:10.201 21:41:30 -- bdev/bdev_raid.sh@185 -- # local target=none 00:19:10.201 21:41:30 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:19:10.201 21:41:30 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:10.201 21:41:30 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:10.460 21:41:30 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:19:10.460 "name": "raid_bdev1", 00:19:10.460 "uuid": "b2579854-ff72-4b1a-92b9-b52609e9f8f6", 00:19:10.460 "strip_size_kb": 0, 00:19:10.460 "state": "online", 00:19:10.460 "raid_level": "raid1", 00:19:10.460 "superblock": false, 00:19:10.460 "num_base_bdevs": 2, 00:19:10.460 "num_base_bdevs_discovered": 1, 00:19:10.460 "num_base_bdevs_operational": 1, 00:19:10.460 "base_bdevs_list": [ 00:19:10.460 { 00:19:10.460 "name": null, 00:19:10.460 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:10.460 "is_configured": false, 00:19:10.460 "data_offset": 0, 00:19:10.460 "data_size": 65536 00:19:10.460 }, 00:19:10.460 { 00:19:10.460 "name": "BaseBdev2", 00:19:10.460 "uuid": "fff25757-416e-4bbe-b286-b43191ba3bb3", 00:19:10.460 "is_configured": true, 00:19:10.460 "data_offset": 0, 00:19:10.460 "data_size": 65536 
00:19:10.460 } 00:19:10.460 ] 00:19:10.460 }' 00:19:10.460 21:41:30 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:19:10.460 21:41:30 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:19:10.460 21:41:30 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:19:10.460 21:41:30 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:19:10.460 21:41:30 -- bdev/bdev_raid.sh@613 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:19:10.719 [2024-12-06 21:41:31.151919] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:19:10.719 [2024-12-06 21:41:31.151981] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:10.719 [2024-12-06 21:41:31.164554] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000d09550 00:19:10.719 [2024-12-06 21:41:31.166723] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:19:10.719 21:41:31 -- bdev/bdev_raid.sh@614 -- # sleep 1 00:19:12.097 21:41:32 -- bdev/bdev_raid.sh@615 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:12.097 21:41:32 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:19:12.097 21:41:32 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:19:12.097 21:41:32 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:19:12.097 21:41:32 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:19:12.097 21:41:32 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:12.097 21:41:32 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:12.097 21:41:32 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:19:12.097 "name": "raid_bdev1", 00:19:12.097 "uuid": "b2579854-ff72-4b1a-92b9-b52609e9f8f6", 00:19:12.097 "strip_size_kb": 0, 00:19:12.097 "state": "online", 00:19:12.097 "raid_level": "raid1", 00:19:12.097 "superblock": false, 00:19:12.097 "num_base_bdevs": 2, 00:19:12.098 "num_base_bdevs_discovered": 2, 00:19:12.098 "num_base_bdevs_operational": 2, 00:19:12.098 "process": { 00:19:12.098 "type": "rebuild", 00:19:12.098 "target": "spare", 00:19:12.098 "progress": { 00:19:12.098 "blocks": 24576, 00:19:12.098 "percent": 37 00:19:12.098 } 00:19:12.098 }, 00:19:12.098 "base_bdevs_list": [ 00:19:12.098 { 00:19:12.098 "name": "spare", 00:19:12.098 "uuid": "61df8573-abbb-5d49-a115-ab24c6e0603f", 00:19:12.098 "is_configured": true, 00:19:12.098 "data_offset": 0, 00:19:12.098 "data_size": 65536 00:19:12.098 }, 00:19:12.098 { 00:19:12.098 "name": "BaseBdev2", 00:19:12.098 "uuid": "fff25757-416e-4bbe-b286-b43191ba3bb3", 00:19:12.098 "is_configured": true, 00:19:12.098 "data_offset": 0, 00:19:12.098 "data_size": 65536 00:19:12.098 } 00:19:12.098 ] 00:19:12.098 }' 00:19:12.098 21:41:32 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:19:12.098 21:41:32 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:12.098 21:41:32 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:19:12.098 21:41:32 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:19:12.098 21:41:32 -- bdev/bdev_raid.sh@617 -- # '[' false = true ']' 00:19:12.098 21:41:32 -- bdev/bdev_raid.sh@642 -- # local num_base_bdevs_operational=2 00:19:12.098 21:41:32 -- bdev/bdev_raid.sh@644 -- # '[' raid1 = raid1 ']' 00:19:12.098 21:41:32 -- bdev/bdev_raid.sh@644 -- # '[' 2 -gt 2 ']' 00:19:12.098 21:41:32 -- 
bdev/bdev_raid.sh@657 -- # local timeout=349 00:19:12.098 21:41:32 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:19:12.098 21:41:32 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:12.098 21:41:32 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:19:12.098 21:41:32 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:19:12.098 21:41:32 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:19:12.098 21:41:32 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:19:12.098 21:41:32 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:12.098 21:41:32 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:12.357 21:41:32 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:19:12.357 "name": "raid_bdev1", 00:19:12.357 "uuid": "b2579854-ff72-4b1a-92b9-b52609e9f8f6", 00:19:12.357 "strip_size_kb": 0, 00:19:12.357 "state": "online", 00:19:12.357 "raid_level": "raid1", 00:19:12.357 "superblock": false, 00:19:12.357 "num_base_bdevs": 2, 00:19:12.357 "num_base_bdevs_discovered": 2, 00:19:12.357 "num_base_bdevs_operational": 2, 00:19:12.357 "process": { 00:19:12.357 "type": "rebuild", 00:19:12.357 "target": "spare", 00:19:12.357 "progress": { 00:19:12.357 "blocks": 28672, 00:19:12.357 "percent": 43 00:19:12.357 } 00:19:12.357 }, 00:19:12.357 "base_bdevs_list": [ 00:19:12.357 { 00:19:12.357 "name": "spare", 00:19:12.357 "uuid": "61df8573-abbb-5d49-a115-ab24c6e0603f", 00:19:12.357 "is_configured": true, 00:19:12.357 "data_offset": 0, 00:19:12.357 "data_size": 65536 00:19:12.357 }, 00:19:12.357 { 00:19:12.357 "name": "BaseBdev2", 00:19:12.357 "uuid": "fff25757-416e-4bbe-b286-b43191ba3bb3", 00:19:12.357 "is_configured": true, 00:19:12.357 "data_offset": 0, 00:19:12.357 "data_size": 65536 00:19:12.357 } 00:19:12.357 ] 00:19:12.357 }' 00:19:12.357 21:41:32 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:19:12.357 21:41:32 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:12.357 21:41:32 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:19:12.357 21:41:32 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:19:12.357 21:41:32 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:19:13.294 21:41:33 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:19:13.294 21:41:33 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:13.294 21:41:33 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:19:13.294 21:41:33 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:19:13.294 21:41:33 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:19:13.294 21:41:33 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:19:13.295 21:41:33 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:13.295 21:41:33 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:13.554 21:41:33 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:19:13.554 "name": "raid_bdev1", 00:19:13.554 "uuid": "b2579854-ff72-4b1a-92b9-b52609e9f8f6", 00:19:13.554 "strip_size_kb": 0, 00:19:13.554 "state": "online", 00:19:13.554 "raid_level": "raid1", 00:19:13.554 "superblock": false, 00:19:13.554 "num_base_bdevs": 2, 00:19:13.554 "num_base_bdevs_discovered": 2, 00:19:13.554 "num_base_bdevs_operational": 2, 00:19:13.554 "process": { 00:19:13.554 "type": "rebuild", 00:19:13.554 "target": "spare", 
00:19:13.554 "progress": { 00:19:13.554 "blocks": 55296, 00:19:13.554 "percent": 84 00:19:13.554 } 00:19:13.554 }, 00:19:13.554 "base_bdevs_list": [ 00:19:13.554 { 00:19:13.554 "name": "spare", 00:19:13.554 "uuid": "61df8573-abbb-5d49-a115-ab24c6e0603f", 00:19:13.554 "is_configured": true, 00:19:13.554 "data_offset": 0, 00:19:13.554 "data_size": 65536 00:19:13.554 }, 00:19:13.554 { 00:19:13.554 "name": "BaseBdev2", 00:19:13.554 "uuid": "fff25757-416e-4bbe-b286-b43191ba3bb3", 00:19:13.554 "is_configured": true, 00:19:13.554 "data_offset": 0, 00:19:13.554 "data_size": 65536 00:19:13.554 } 00:19:13.554 ] 00:19:13.554 }' 00:19:13.554 21:41:33 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:19:13.554 21:41:33 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:13.554 21:41:33 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:19:13.554 21:41:33 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:19:13.554 21:41:33 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:19:14.122 [2024-12-06 21:41:34.381703] bdev_raid.c:2568:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:19:14.122 [2024-12-06 21:41:34.381816] bdev_raid.c:2285:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:19:14.122 [2024-12-06 21:41:34.381889] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:14.690 21:41:34 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:19:14.690 21:41:34 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:14.690 21:41:34 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:19:14.690 21:41:34 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:19:14.690 21:41:34 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:19:14.690 21:41:34 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:19:14.690 21:41:34 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:14.690 21:41:34 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:14.949 21:41:35 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:19:14.949 "name": "raid_bdev1", 00:19:14.949 "uuid": "b2579854-ff72-4b1a-92b9-b52609e9f8f6", 00:19:14.949 "strip_size_kb": 0, 00:19:14.949 "state": "online", 00:19:14.949 "raid_level": "raid1", 00:19:14.949 "superblock": false, 00:19:14.949 "num_base_bdevs": 2, 00:19:14.949 "num_base_bdevs_discovered": 2, 00:19:14.949 "num_base_bdevs_operational": 2, 00:19:14.949 "base_bdevs_list": [ 00:19:14.949 { 00:19:14.949 "name": "spare", 00:19:14.949 "uuid": "61df8573-abbb-5d49-a115-ab24c6e0603f", 00:19:14.949 "is_configured": true, 00:19:14.949 "data_offset": 0, 00:19:14.949 "data_size": 65536 00:19:14.949 }, 00:19:14.949 { 00:19:14.949 "name": "BaseBdev2", 00:19:14.949 "uuid": "fff25757-416e-4bbe-b286-b43191ba3bb3", 00:19:14.949 "is_configured": true, 00:19:14.949 "data_offset": 0, 00:19:14.949 "data_size": 65536 00:19:14.949 } 00:19:14.949 ] 00:19:14.949 }' 00:19:14.949 21:41:35 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:19:14.949 21:41:35 -- bdev/bdev_raid.sh@190 -- # [[ none == \r\e\b\u\i\l\d ]] 00:19:14.949 21:41:35 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:19:14.949 21:41:35 -- bdev/bdev_raid.sh@191 -- # [[ none == \s\p\a\r\e ]] 00:19:14.949 21:41:35 -- bdev/bdev_raid.sh@660 -- # break 00:19:14.949 21:41:35 -- bdev/bdev_raid.sh@666 -- # verify_raid_bdev_process raid_bdev1 none none 
00:19:14.949 21:41:35 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:19:14.949 21:41:35 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:19:14.949 21:41:35 -- bdev/bdev_raid.sh@185 -- # local target=none 00:19:14.949 21:41:35 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:19:14.949 21:41:35 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:14.949 21:41:35 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:14.949 21:41:35 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:19:14.949 "name": "raid_bdev1", 00:19:14.949 "uuid": "b2579854-ff72-4b1a-92b9-b52609e9f8f6", 00:19:14.949 "strip_size_kb": 0, 00:19:14.949 "state": "online", 00:19:14.949 "raid_level": "raid1", 00:19:14.949 "superblock": false, 00:19:14.949 "num_base_bdevs": 2, 00:19:14.949 "num_base_bdevs_discovered": 2, 00:19:14.949 "num_base_bdevs_operational": 2, 00:19:14.949 "base_bdevs_list": [ 00:19:14.949 { 00:19:14.949 "name": "spare", 00:19:14.949 "uuid": "61df8573-abbb-5d49-a115-ab24c6e0603f", 00:19:14.949 "is_configured": true, 00:19:14.949 "data_offset": 0, 00:19:14.949 "data_size": 65536 00:19:14.949 }, 00:19:14.949 { 00:19:14.949 "name": "BaseBdev2", 00:19:14.949 "uuid": "fff25757-416e-4bbe-b286-b43191ba3bb3", 00:19:14.949 "is_configured": true, 00:19:14.949 "data_offset": 0, 00:19:14.949 "data_size": 65536 00:19:14.949 } 00:19:14.949 ] 00:19:14.949 }' 00:19:14.949 21:41:35 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:19:14.949 21:41:35 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:19:14.949 21:41:35 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:19:14.949 21:41:35 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:19:14.949 21:41:35 -- bdev/bdev_raid.sh@667 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:19:14.949 21:41:35 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:19:14.949 21:41:35 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:19:14.949 21:41:35 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:19:14.949 21:41:35 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:19:14.949 21:41:35 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:19:14.949 21:41:35 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:14.949 21:41:35 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:14.949 21:41:35 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:14.949 21:41:35 -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:14.949 21:41:35 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:14.949 21:41:35 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:15.208 21:41:35 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:15.208 "name": "raid_bdev1", 00:19:15.208 "uuid": "b2579854-ff72-4b1a-92b9-b52609e9f8f6", 00:19:15.208 "strip_size_kb": 0, 00:19:15.208 "state": "online", 00:19:15.208 "raid_level": "raid1", 00:19:15.208 "superblock": false, 00:19:15.208 "num_base_bdevs": 2, 00:19:15.208 "num_base_bdevs_discovered": 2, 00:19:15.208 "num_base_bdevs_operational": 2, 00:19:15.208 "base_bdevs_list": [ 00:19:15.208 { 00:19:15.208 "name": "spare", 00:19:15.208 "uuid": "61df8573-abbb-5d49-a115-ab24c6e0603f", 00:19:15.208 "is_configured": true, 00:19:15.208 "data_offset": 0, 00:19:15.208 "data_size": 65536 00:19:15.208 }, 00:19:15.208 { 00:19:15.208 "name": 
"BaseBdev2", 00:19:15.208 "uuid": "fff25757-416e-4bbe-b286-b43191ba3bb3", 00:19:15.208 "is_configured": true, 00:19:15.208 "data_offset": 0, 00:19:15.208 "data_size": 65536 00:19:15.208 } 00:19:15.208 ] 00:19:15.208 }' 00:19:15.208 21:41:35 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:15.208 21:41:35 -- common/autotest_common.sh@10 -- # set +x 00:19:15.466 21:41:35 -- bdev/bdev_raid.sh@670 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:19:15.724 [2024-12-06 21:41:36.153449] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:15.724 [2024-12-06 21:41:36.153507] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:15.724 [2024-12-06 21:41:36.153592] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:15.724 [2024-12-06 21:41:36.153688] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:15.724 [2024-12-06 21:41:36.153720] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000008180 name raid_bdev1, state offline 00:19:15.724 21:41:36 -- bdev/bdev_raid.sh@671 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:15.724 21:41:36 -- bdev/bdev_raid.sh@671 -- # jq length 00:19:15.983 21:41:36 -- bdev/bdev_raid.sh@671 -- # [[ 0 == 0 ]] 00:19:15.983 21:41:36 -- bdev/bdev_raid.sh@673 -- # '[' false = true ']' 00:19:15.983 21:41:36 -- bdev/bdev_raid.sh@687 -- # nbd_start_disks /var/tmp/spdk-raid.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:19:15.983 21:41:36 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:19:15.983 21:41:36 -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:19:15.983 21:41:36 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:19:15.983 21:41:36 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:19:15.983 21:41:36 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:19:15.983 21:41:36 -- bdev/nbd_common.sh@12 -- # local i 00:19:15.983 21:41:36 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:19:15.983 21:41:36 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:19:15.983 21:41:36 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:19:16.242 /dev/nbd0 00:19:16.242 21:41:36 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:19:16.242 21:41:36 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:19:16.242 21:41:36 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:19:16.242 21:41:36 -- common/autotest_common.sh@867 -- # local i 00:19:16.242 21:41:36 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:19:16.242 21:41:36 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:19:16.242 21:41:36 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:19:16.242 21:41:36 -- common/autotest_common.sh@871 -- # break 00:19:16.242 21:41:36 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:19:16.242 21:41:36 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:19:16.242 21:41:36 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:16.242 1+0 records in 00:19:16.242 1+0 records out 00:19:16.242 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000281553 s, 14.5 MB/s 00:19:16.242 21:41:36 -- common/autotest_common.sh@884 -- # stat -c %s 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:16.242 21:41:36 -- common/autotest_common.sh@884 -- # size=4096 00:19:16.242 21:41:36 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:16.242 21:41:36 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:19:16.242 21:41:36 -- common/autotest_common.sh@887 -- # return 0 00:19:16.242 21:41:36 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:19:16.242 21:41:36 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:19:16.242 21:41:36 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd1 00:19:16.500 /dev/nbd1 00:19:16.500 21:41:36 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:19:16.500 21:41:36 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:19:16.500 21:41:36 -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:19:16.500 21:41:36 -- common/autotest_common.sh@867 -- # local i 00:19:16.500 21:41:36 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:19:16.500 21:41:36 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:19:16.500 21:41:36 -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:19:16.500 21:41:36 -- common/autotest_common.sh@871 -- # break 00:19:16.500 21:41:36 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:19:16.500 21:41:36 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:19:16.500 21:41:36 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:16.500 1+0 records in 00:19:16.500 1+0 records out 00:19:16.500 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000410426 s, 10.0 MB/s 00:19:16.500 21:41:36 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:16.763 21:41:36 -- common/autotest_common.sh@884 -- # size=4096 00:19:16.763 21:41:36 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:16.763 21:41:37 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:19:16.763 21:41:37 -- common/autotest_common.sh@887 -- # return 0 00:19:16.763 21:41:37 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:19:16.763 21:41:37 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:19:16.763 21:41:37 -- bdev/bdev_raid.sh@688 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:19:16.763 21:41:37 -- bdev/bdev_raid.sh@689 -- # nbd_stop_disks /var/tmp/spdk-raid.sock '/dev/nbd0 /dev/nbd1' 00:19:16.763 21:41:37 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:19:16.763 21:41:37 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:19:16.763 21:41:37 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:19:16.763 21:41:37 -- bdev/nbd_common.sh@51 -- # local i 00:19:16.763 21:41:37 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:16.763 21:41:37 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:19:17.022 21:41:37 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:19:17.022 21:41:37 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:19:17.022 21:41:37 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:19:17.022 21:41:37 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:17.022 21:41:37 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:17.022 21:41:37 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:19:17.022 21:41:37 -- bdev/nbd_common.sh@41 -- # break 00:19:17.022 21:41:37 -- bdev/nbd_common.sh@45 -- # 
return 0 00:19:17.022 21:41:37 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:17.022 21:41:37 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:19:17.280 21:41:37 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:19:17.281 21:41:37 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:19:17.281 21:41:37 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:19:17.281 21:41:37 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:17.281 21:41:37 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:17.281 21:41:37 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:19:17.281 21:41:37 -- bdev/nbd_common.sh@41 -- # break 00:19:17.281 21:41:37 -- bdev/nbd_common.sh@45 -- # return 0 00:19:17.281 21:41:37 -- bdev/bdev_raid.sh@692 -- # '[' false = true ']' 00:19:17.281 21:41:37 -- bdev/bdev_raid.sh@709 -- # killprocess 77985 00:19:17.281 21:41:37 -- common/autotest_common.sh@936 -- # '[' -z 77985 ']' 00:19:17.281 21:41:37 -- common/autotest_common.sh@940 -- # kill -0 77985 00:19:17.281 21:41:37 -- common/autotest_common.sh@941 -- # uname 00:19:17.281 21:41:37 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:19:17.281 21:41:37 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 77985 00:19:17.281 killing process with pid 77985 00:19:17.281 Received shutdown signal, test time was about 60.000000 seconds 00:19:17.281 00:19:17.281 Latency(us) 00:19:17.281 [2024-12-06T21:41:37.778Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:17.281 [2024-12-06T21:41:37.778Z] =================================================================================================================== 00:19:17.281 [2024-12-06T21:41:37.778Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:17.281 21:41:37 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:19:17.281 21:41:37 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:19:17.281 21:41:37 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 77985' 00:19:17.281 21:41:37 -- common/autotest_common.sh@955 -- # kill 77985 00:19:17.281 [2024-12-06 21:41:37.733965] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:19:17.281 21:41:37 -- common/autotest_common.sh@960 -- # wait 77985 00:19:17.540 [2024-12-06 21:41:37.965048] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:19:18.915 21:41:39 -- bdev/bdev_raid.sh@711 -- # return 0 00:19:18.915 00:19:18.915 real 0m20.776s 00:19:18.915 user 0m26.350s 00:19:18.915 sys 0m3.925s 00:19:18.915 21:41:39 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:19:18.915 ************************************ 00:19:18.915 END TEST raid_rebuild_test 00:19:18.915 21:41:39 -- common/autotest_common.sh@10 -- # set +x 00:19:18.915 ************************************ 00:19:18.915 21:41:39 -- bdev/bdev_raid.sh@736 -- # run_test raid_rebuild_test_sb raid_rebuild_test raid1 2 true false 00:19:18.915 21:41:39 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:19:18.915 21:41:39 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:19:18.915 21:41:39 -- common/autotest_common.sh@10 -- # set +x 00:19:18.915 ************************************ 00:19:18.915 START TEST raid_rebuild_test_sb 00:19:18.915 ************************************ 00:19:18.915 21:41:39 -- common/autotest_common.sh@1114 -- # raid_rebuild_test raid1 2 true false 00:19:18.915 21:41:39 -- bdev/bdev_raid.sh@517 -- # local 
raid_level=raid1 00:19:18.915 21:41:39 -- bdev/bdev_raid.sh@518 -- # local num_base_bdevs=2 00:19:18.915 21:41:39 -- bdev/bdev_raid.sh@519 -- # local superblock=true 00:19:18.915 21:41:39 -- bdev/bdev_raid.sh@520 -- # local background_io=false 00:19:18.915 21:41:39 -- bdev/bdev_raid.sh@521 -- # (( i = 1 )) 00:19:18.915 21:41:39 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:19:18.915 21:41:39 -- bdev/bdev_raid.sh@523 -- # echo BaseBdev1 00:19:18.915 21:41:39 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:19:18.915 21:41:39 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:19:18.915 21:41:39 -- bdev/bdev_raid.sh@523 -- # echo BaseBdev2 00:19:18.915 21:41:39 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:19:18.915 21:41:39 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:19:18.915 21:41:39 -- bdev/bdev_raid.sh@521 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:19:18.915 21:41:39 -- bdev/bdev_raid.sh@521 -- # local base_bdevs 00:19:18.915 21:41:39 -- bdev/bdev_raid.sh@522 -- # local raid_bdev_name=raid_bdev1 00:19:18.915 21:41:39 -- bdev/bdev_raid.sh@523 -- # local strip_size 00:19:18.915 21:41:39 -- bdev/bdev_raid.sh@524 -- # local create_arg 00:19:18.915 21:41:39 -- bdev/bdev_raid.sh@525 -- # local raid_bdev_size 00:19:18.915 21:41:39 -- bdev/bdev_raid.sh@526 -- # local data_offset 00:19:18.915 21:41:39 -- bdev/bdev_raid.sh@528 -- # '[' raid1 '!=' raid1 ']' 00:19:18.915 21:41:39 -- bdev/bdev_raid.sh@536 -- # strip_size=0 00:19:18.915 21:41:39 -- bdev/bdev_raid.sh@539 -- # '[' true = true ']' 00:19:18.915 21:41:39 -- bdev/bdev_raid.sh@540 -- # create_arg+=' -s' 00:19:18.915 21:41:39 -- bdev/bdev_raid.sh@544 -- # raid_pid=78486 00:19:18.915 21:41:39 -- bdev/bdev_raid.sh@545 -- # waitforlisten 78486 /var/tmp/spdk-raid.sock 00:19:18.915 21:41:39 -- common/autotest_common.sh@829 -- # '[' -z 78486 ']' 00:19:18.915 21:41:39 -- bdev/bdev_raid.sh@543 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:19:18.915 21:41:39 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:19:18.915 21:41:39 -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:18.915 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:19:18.915 21:41:39 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:19:18.915 21:41:39 -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:18.915 21:41:39 -- common/autotest_common.sh@10 -- # set +x 00:19:18.915 [2024-12-06 21:41:39.168485] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:19:18.915 I/O size of 3145728 is greater than zero copy threshold (65536). 00:19:18.915 Zero copy mechanism will not be used. 
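A note on the zero-copy notice above: the bdevperf invocation for this test passes -o 3M, i.e. an I/O unit of 3 x 1024 x 1024 = 3,145,728 bytes, which exceeds the 65,536-byte zero-copy threshold the log reports, so the zero-copy path is disabled for this run exactly as the notice states.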
00:19:18.915 [2024-12-06 21:41:39.168722] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78486 ] 00:19:18.915 [2024-12-06 21:41:39.334982] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:19.173 [2024-12-06 21:41:39.519697] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:19.432 [2024-12-06 21:41:39.690934] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:19.691 21:41:40 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:19.691 21:41:40 -- common/autotest_common.sh@862 -- # return 0 00:19:19.691 21:41:40 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:19:19.691 21:41:40 -- bdev/bdev_raid.sh@549 -- # '[' true = true ']' 00:19:19.691 21:41:40 -- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:19:19.950 BaseBdev1_malloc 00:19:19.950 21:41:40 -- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:19:20.209 [2024-12-06 21:41:40.537460] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:19:20.209 [2024-12-06 21:41:40.537629] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:20.209 [2024-12-06 21:41:40.537685] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000006980 00:19:20.209 [2024-12-06 21:41:40.537702] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:20.209 [2024-12-06 21:41:40.540241] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:20.209 [2024-12-06 21:41:40.540292] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:19:20.209 BaseBdev1 00:19:20.209 21:41:40 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:19:20.209 21:41:40 -- bdev/bdev_raid.sh@549 -- # '[' true = true ']' 00:19:20.209 21:41:40 -- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:19:20.467 BaseBdev2_malloc 00:19:20.467 21:41:40 -- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:19:20.724 [2024-12-06 21:41:41.035440] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:19:20.724 [2024-12-06 21:41:41.035558] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:20.724 [2024-12-06 21:41:41.035600] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000007580 00:19:20.724 [2024-12-06 21:41:41.035619] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:20.724 [2024-12-06 21:41:41.037970] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:20.724 [2024-12-06 21:41:41.038044] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:19:20.724 BaseBdev2 00:19:20.724 21:41:41 -- bdev/bdev_raid.sh@558 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:19:20.981 spare_malloc 00:19:20.981 21:41:41 
-- bdev/bdev_raid.sh@559 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:19:21.239 spare_delay 00:19:21.240 21:41:41 -- bdev/bdev_raid.sh@560 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:19:21.504 [2024-12-06 21:41:41.781351] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:19:21.504 [2024-12-06 21:41:41.781479] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:21.504 [2024-12-06 21:41:41.781515] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000008780 00:19:21.504 [2024-12-06 21:41:41.781534] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:21.504 [2024-12-06 21:41:41.784221] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:21.504 [2024-12-06 21:41:41.784272] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:19:21.504 spare 00:19:21.504 21:41:41 -- bdev/bdev_raid.sh@563 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 00:19:21.504 [2024-12-06 21:41:41.997544] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:21.504 [2024-12-06 21:41:41.999705] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:21.504 [2024-12-06 21:41:41.999987] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x516000008d80 00:19:21.505 [2024-12-06 21:41:42.000013] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:19:21.505 [2024-12-06 21:41:42.000155] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d0000056c0 00:19:21.505 [2024-12-06 21:41:42.000615] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x516000008d80 00:19:21.505 [2024-12-06 21:41:42.000644] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x516000008d80 00:19:21.505 [2024-12-06 21:41:42.000820] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:21.763 21:41:42 -- bdev/bdev_raid.sh@564 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:19:21.763 21:41:42 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:19:21.763 21:41:42 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:19:21.763 21:41:42 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:19:21.763 21:41:42 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:19:21.763 21:41:42 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:19:21.763 21:41:42 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:21.763 21:41:42 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:21.763 21:41:42 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:21.763 21:41:42 -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:21.763 21:41:42 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:21.763 21:41:42 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:22.023 21:41:42 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:22.023 "name": "raid_bdev1", 00:19:22.023 "uuid": "35d165ae-82dc-41b4-8fdc-0d4fa7b65854", 00:19:22.023 
"strip_size_kb": 0, 00:19:22.023 "state": "online", 00:19:22.023 "raid_level": "raid1", 00:19:22.023 "superblock": true, 00:19:22.023 "num_base_bdevs": 2, 00:19:22.023 "num_base_bdevs_discovered": 2, 00:19:22.023 "num_base_bdevs_operational": 2, 00:19:22.023 "base_bdevs_list": [ 00:19:22.023 { 00:19:22.023 "name": "BaseBdev1", 00:19:22.023 "uuid": "eadadb8c-8373-54d2-a87b-8524f348cf55", 00:19:22.023 "is_configured": true, 00:19:22.023 "data_offset": 2048, 00:19:22.023 "data_size": 63488 00:19:22.023 }, 00:19:22.023 { 00:19:22.023 "name": "BaseBdev2", 00:19:22.023 "uuid": "eb371e9a-1dfc-585b-aa83-d949f5658d6e", 00:19:22.023 "is_configured": true, 00:19:22.023 "data_offset": 2048, 00:19:22.023 "data_size": 63488 00:19:22.023 } 00:19:22.023 ] 00:19:22.023 }' 00:19:22.023 21:41:42 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:22.023 21:41:42 -- common/autotest_common.sh@10 -- # set +x 00:19:22.282 21:41:42 -- bdev/bdev_raid.sh@567 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:19:22.282 21:41:42 -- bdev/bdev_raid.sh@567 -- # jq -r '.[].num_blocks' 00:19:22.541 [2024-12-06 21:41:42.822028] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:22.541 21:41:42 -- bdev/bdev_raid.sh@567 -- # raid_bdev_size=63488 00:19:22.541 21:41:42 -- bdev/bdev_raid.sh@570 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:19:22.541 21:41:42 -- bdev/bdev_raid.sh@570 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:22.799 21:41:43 -- bdev/bdev_raid.sh@570 -- # data_offset=2048 00:19:22.799 21:41:43 -- bdev/bdev_raid.sh@572 -- # '[' false = true ']' 00:19:22.799 21:41:43 -- bdev/bdev_raid.sh@576 -- # local write_unit_size 00:19:22.799 21:41:43 -- bdev/bdev_raid.sh@579 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid_bdev1 /dev/nbd0 00:19:22.799 21:41:43 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:19:22.799 21:41:43 -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:19:22.799 21:41:43 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:19:22.799 21:41:43 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:19:22.799 21:41:43 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:19:22.799 21:41:43 -- bdev/nbd_common.sh@12 -- # local i 00:19:22.799 21:41:43 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:19:22.799 21:41:43 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:19:22.799 21:41:43 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:19:23.057 [2024-12-06 21:41:43.346058] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000005860 00:19:23.057 /dev/nbd0 00:19:23.057 21:41:43 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:19:23.057 21:41:43 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:19:23.057 21:41:43 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:19:23.057 21:41:43 -- common/autotest_common.sh@867 -- # local i 00:19:23.057 21:41:43 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:19:23.057 21:41:43 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:19:23.057 21:41:43 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:19:23.057 21:41:43 -- common/autotest_common.sh@871 -- # break 00:19:23.057 21:41:43 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:19:23.057 21:41:43 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:19:23.057 21:41:43 -- 
common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:23.057 1+0 records in 00:19:23.057 1+0 records out 00:19:23.057 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000206104 s, 19.9 MB/s 00:19:23.057 21:41:43 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:23.057 21:41:43 -- common/autotest_common.sh@884 -- # size=4096 00:19:23.057 21:41:43 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:23.057 21:41:43 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:19:23.057 21:41:43 -- common/autotest_common.sh@887 -- # return 0 00:19:23.057 21:41:43 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:19:23.057 21:41:43 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:19:23.057 21:41:43 -- bdev/bdev_raid.sh@580 -- # '[' raid1 = raid5f ']' 00:19:23.057 21:41:43 -- bdev/bdev_raid.sh@584 -- # write_unit_size=1 00:19:23.057 21:41:43 -- bdev/bdev_raid.sh@586 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=63488 oflag=direct 00:19:29.614 63488+0 records in 00:19:29.614 63488+0 records out 00:19:29.614 32505856 bytes (33 MB, 31 MiB) copied, 6.02533 s, 5.4 MB/s 00:19:29.614 21:41:49 -- bdev/bdev_raid.sh@587 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:19:29.614 21:41:49 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:19:29.614 21:41:49 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:19:29.614 21:41:49 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:19:29.614 21:41:49 -- bdev/nbd_common.sh@51 -- # local i 00:19:29.614 21:41:49 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:29.614 21:41:49 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:19:29.614 [2024-12-06 21:41:49.641815] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:29.614 21:41:49 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:19:29.614 21:41:49 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:19:29.614 21:41:49 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:19:29.614 21:41:49 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:29.614 21:41:49 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:29.614 21:41:49 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:19:29.614 21:41:49 -- bdev/nbd_common.sh@41 -- # break 00:19:29.614 21:41:49 -- bdev/nbd_common.sh@45 -- # return 0 00:19:29.614 21:41:49 -- bdev/bdev_raid.sh@591 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:19:29.614 [2024-12-06 21:41:49.857411] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:19:29.614 21:41:49 -- bdev/bdev_raid.sh@594 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:29.614 21:41:49 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:19:29.614 21:41:49 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:19:29.614 21:41:49 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:19:29.614 21:41:49 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:19:29.614 21:41:49 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:19:29.614 21:41:49 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:29.614 21:41:49 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:29.614 21:41:49 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:29.614 21:41:49 -- 
bdev/bdev_raid.sh@125 -- # local tmp 00:19:29.614 21:41:49 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:29.614 21:41:49 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:29.614 21:41:50 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:29.614 "name": "raid_bdev1", 00:19:29.614 "uuid": "35d165ae-82dc-41b4-8fdc-0d4fa7b65854", 00:19:29.614 "strip_size_kb": 0, 00:19:29.614 "state": "online", 00:19:29.614 "raid_level": "raid1", 00:19:29.614 "superblock": true, 00:19:29.614 "num_base_bdevs": 2, 00:19:29.614 "num_base_bdevs_discovered": 1, 00:19:29.614 "num_base_bdevs_operational": 1, 00:19:29.614 "base_bdevs_list": [ 00:19:29.614 { 00:19:29.614 "name": null, 00:19:29.614 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:29.614 "is_configured": false, 00:19:29.614 "data_offset": 2048, 00:19:29.614 "data_size": 63488 00:19:29.614 }, 00:19:29.614 { 00:19:29.614 "name": "BaseBdev2", 00:19:29.614 "uuid": "eb371e9a-1dfc-585b-aa83-d949f5658d6e", 00:19:29.614 "is_configured": true, 00:19:29.614 "data_offset": 2048, 00:19:29.614 "data_size": 63488 00:19:29.614 } 00:19:29.614 ] 00:19:29.614 }' 00:19:29.614 21:41:50 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:29.614 21:41:50 -- common/autotest_common.sh@10 -- # set +x 00:19:30.182 21:41:50 -- bdev/bdev_raid.sh@597 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:19:30.182 [2024-12-06 21:41:50.637698] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:19:30.182 [2024-12-06 21:41:50.637760] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:30.182 [2024-12-06 21:41:50.652320] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000ca2c10 00:19:30.182 [2024-12-06 21:41:50.654602] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:19:30.182 21:41:50 -- bdev/bdev_raid.sh@598 -- # sleep 1 00:19:31.567 21:41:51 -- bdev/bdev_raid.sh@601 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:31.567 21:41:51 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:19:31.567 21:41:51 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:19:31.567 21:41:51 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:19:31.567 21:41:51 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:19:31.567 21:41:51 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:31.567 21:41:51 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:31.567 21:41:51 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:19:31.567 "name": "raid_bdev1", 00:19:31.567 "uuid": "35d165ae-82dc-41b4-8fdc-0d4fa7b65854", 00:19:31.567 "strip_size_kb": 0, 00:19:31.567 "state": "online", 00:19:31.567 "raid_level": "raid1", 00:19:31.567 "superblock": true, 00:19:31.567 "num_base_bdevs": 2, 00:19:31.567 "num_base_bdevs_discovered": 2, 00:19:31.567 "num_base_bdevs_operational": 2, 00:19:31.567 "process": { 00:19:31.567 "type": "rebuild", 00:19:31.567 "target": "spare", 00:19:31.567 "progress": { 00:19:31.567 "blocks": 24576, 00:19:31.567 "percent": 38 00:19:31.567 } 00:19:31.567 }, 00:19:31.567 "base_bdevs_list": [ 00:19:31.567 { 00:19:31.567 "name": "spare", 00:19:31.567 "uuid": "b3a8182a-1276-5427-bd35-767433e2b7ae", 00:19:31.567 "is_configured": true, 00:19:31.567 
"data_offset": 2048, 00:19:31.567 "data_size": 63488 00:19:31.567 }, 00:19:31.567 { 00:19:31.567 "name": "BaseBdev2", 00:19:31.567 "uuid": "eb371e9a-1dfc-585b-aa83-d949f5658d6e", 00:19:31.567 "is_configured": true, 00:19:31.567 "data_offset": 2048, 00:19:31.567 "data_size": 63488 00:19:31.567 } 00:19:31.567 ] 00:19:31.567 }' 00:19:31.567 21:41:51 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:19:31.567 21:41:51 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:31.567 21:41:51 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:19:31.568 21:41:51 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:19:31.568 21:41:51 -- bdev/bdev_raid.sh@604 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:19:31.826 [2024-12-06 21:41:52.132594] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:31.826 [2024-12-06 21:41:52.162316] bdev_raid.c:2294:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:19:31.826 [2024-12-06 21:41:52.162417] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:31.826 21:41:52 -- bdev/bdev_raid.sh@607 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:31.826 21:41:52 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:19:31.826 21:41:52 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:19:31.826 21:41:52 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:19:31.826 21:41:52 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:19:31.826 21:41:52 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:19:31.826 21:41:52 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:31.826 21:41:52 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:31.826 21:41:52 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:31.826 21:41:52 -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:31.826 21:41:52 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:31.826 21:41:52 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:32.086 21:41:52 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:32.086 "name": "raid_bdev1", 00:19:32.086 "uuid": "35d165ae-82dc-41b4-8fdc-0d4fa7b65854", 00:19:32.086 "strip_size_kb": 0, 00:19:32.086 "state": "online", 00:19:32.086 "raid_level": "raid1", 00:19:32.086 "superblock": true, 00:19:32.086 "num_base_bdevs": 2, 00:19:32.086 "num_base_bdevs_discovered": 1, 00:19:32.086 "num_base_bdevs_operational": 1, 00:19:32.086 "base_bdevs_list": [ 00:19:32.086 { 00:19:32.086 "name": null, 00:19:32.086 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:32.086 "is_configured": false, 00:19:32.086 "data_offset": 2048, 00:19:32.086 "data_size": 63488 00:19:32.086 }, 00:19:32.086 { 00:19:32.086 "name": "BaseBdev2", 00:19:32.086 "uuid": "eb371e9a-1dfc-585b-aa83-d949f5658d6e", 00:19:32.086 "is_configured": true, 00:19:32.086 "data_offset": 2048, 00:19:32.086 "data_size": 63488 00:19:32.086 } 00:19:32.086 ] 00:19:32.086 }' 00:19:32.086 21:41:52 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:32.086 21:41:52 -- common/autotest_common.sh@10 -- # set +x 00:19:32.346 21:41:52 -- bdev/bdev_raid.sh@610 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:32.346 21:41:52 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:19:32.346 21:41:52 -- bdev/bdev_raid.sh@184 -- # local process_type=none 
00:19:32.346 21:41:52 -- bdev/bdev_raid.sh@185 -- # local target=none 00:19:32.346 21:41:52 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:19:32.346 21:41:52 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:32.346 21:41:52 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:32.606 21:41:53 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:19:32.606 "name": "raid_bdev1", 00:19:32.606 "uuid": "35d165ae-82dc-41b4-8fdc-0d4fa7b65854", 00:19:32.606 "strip_size_kb": 0, 00:19:32.606 "state": "online", 00:19:32.606 "raid_level": "raid1", 00:19:32.606 "superblock": true, 00:19:32.606 "num_base_bdevs": 2, 00:19:32.606 "num_base_bdevs_discovered": 1, 00:19:32.606 "num_base_bdevs_operational": 1, 00:19:32.606 "base_bdevs_list": [ 00:19:32.606 { 00:19:32.606 "name": null, 00:19:32.606 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:32.606 "is_configured": false, 00:19:32.606 "data_offset": 2048, 00:19:32.606 "data_size": 63488 00:19:32.606 }, 00:19:32.606 { 00:19:32.606 "name": "BaseBdev2", 00:19:32.606 "uuid": "eb371e9a-1dfc-585b-aa83-d949f5658d6e", 00:19:32.606 "is_configured": true, 00:19:32.606 "data_offset": 2048, 00:19:32.606 "data_size": 63488 00:19:32.606 } 00:19:32.606 ] 00:19:32.606 }' 00:19:32.606 21:41:53 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:19:32.606 21:41:53 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:19:32.606 21:41:53 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:19:32.866 21:41:53 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:19:32.866 21:41:53 -- bdev/bdev_raid.sh@613 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:19:32.866 [2024-12-06 21:41:53.351536] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:19:32.866 [2024-12-06 21:41:53.351615] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:33.126 [2024-12-06 21:41:53.366223] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000ca2ce0 00:19:33.126 [2024-12-06 21:41:53.368400] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:19:33.126 21:41:53 -- bdev/bdev_raid.sh@614 -- # sleep 1 00:19:34.064 21:41:54 -- bdev/bdev_raid.sh@615 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:34.064 21:41:54 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:19:34.064 21:41:54 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:19:34.064 21:41:54 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:19:34.064 21:41:54 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:19:34.064 21:41:54 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:34.064 21:41:54 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:34.324 21:41:54 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:19:34.324 "name": "raid_bdev1", 00:19:34.324 "uuid": "35d165ae-82dc-41b4-8fdc-0d4fa7b65854", 00:19:34.324 "strip_size_kb": 0, 00:19:34.324 "state": "online", 00:19:34.324 "raid_level": "raid1", 00:19:34.324 "superblock": true, 00:19:34.324 "num_base_bdevs": 2, 00:19:34.324 "num_base_bdevs_discovered": 2, 00:19:34.324 "num_base_bdevs_operational": 2, 00:19:34.324 "process": { 00:19:34.324 "type": "rebuild", 00:19:34.324 "target": "spare", 
00:19:34.324 "progress": { 00:19:34.324 "blocks": 24576, 00:19:34.324 "percent": 38 00:19:34.324 } 00:19:34.324 }, 00:19:34.324 "base_bdevs_list": [ 00:19:34.324 { 00:19:34.324 "name": "spare", 00:19:34.324 "uuid": "b3a8182a-1276-5427-bd35-767433e2b7ae", 00:19:34.324 "is_configured": true, 00:19:34.324 "data_offset": 2048, 00:19:34.324 "data_size": 63488 00:19:34.324 }, 00:19:34.324 { 00:19:34.324 "name": "BaseBdev2", 00:19:34.324 "uuid": "eb371e9a-1dfc-585b-aa83-d949f5658d6e", 00:19:34.324 "is_configured": true, 00:19:34.324 "data_offset": 2048, 00:19:34.324 "data_size": 63488 00:19:34.324 } 00:19:34.324 ] 00:19:34.324 }' 00:19:34.324 21:41:54 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:19:34.324 21:41:54 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:34.324 21:41:54 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:19:34.324 21:41:54 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:19:34.324 21:41:54 -- bdev/bdev_raid.sh@617 -- # '[' true = true ']' 00:19:34.324 21:41:54 -- bdev/bdev_raid.sh@617 -- # '[' = false ']' 00:19:34.324 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 617: [: =: unary operator expected 00:19:34.324 21:41:54 -- bdev/bdev_raid.sh@642 -- # local num_base_bdevs_operational=2 00:19:34.324 21:41:54 -- bdev/bdev_raid.sh@644 -- # '[' raid1 = raid1 ']' 00:19:34.324 21:41:54 -- bdev/bdev_raid.sh@644 -- # '[' 2 -gt 2 ']' 00:19:34.324 21:41:54 -- bdev/bdev_raid.sh@657 -- # local timeout=371 00:19:34.324 21:41:54 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:19:34.324 21:41:54 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:34.324 21:41:54 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:19:34.324 21:41:54 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:19:34.324 21:41:54 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:19:34.324 21:41:54 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:19:34.324 21:41:54 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:34.324 21:41:54 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:34.584 21:41:54 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:19:34.584 "name": "raid_bdev1", 00:19:34.584 "uuid": "35d165ae-82dc-41b4-8fdc-0d4fa7b65854", 00:19:34.584 "strip_size_kb": 0, 00:19:34.584 "state": "online", 00:19:34.584 "raid_level": "raid1", 00:19:34.584 "superblock": true, 00:19:34.584 "num_base_bdevs": 2, 00:19:34.584 "num_base_bdevs_discovered": 2, 00:19:34.584 "num_base_bdevs_operational": 2, 00:19:34.584 "process": { 00:19:34.584 "type": "rebuild", 00:19:34.584 "target": "spare", 00:19:34.584 "progress": { 00:19:34.584 "blocks": 30720, 00:19:34.584 "percent": 48 00:19:34.584 } 00:19:34.584 }, 00:19:34.584 "base_bdevs_list": [ 00:19:34.584 { 00:19:34.584 "name": "spare", 00:19:34.584 "uuid": "b3a8182a-1276-5427-bd35-767433e2b7ae", 00:19:34.584 "is_configured": true, 00:19:34.584 "data_offset": 2048, 00:19:34.584 "data_size": 63488 00:19:34.584 }, 00:19:34.584 { 00:19:34.584 "name": "BaseBdev2", 00:19:34.584 "uuid": "eb371e9a-1dfc-585b-aa83-d949f5658d6e", 00:19:34.584 "is_configured": true, 00:19:34.584 "data_offset": 2048, 00:19:34.584 "data_size": 63488 00:19:34.584 } 00:19:34.584 ] 00:19:34.584 }' 00:19:34.584 21:41:54 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:19:34.584 21:41:54 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 
00:19:34.584 21:41:54 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:19:34.584 21:41:54 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:19:34.584 21:41:54 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:19:35.519 21:41:55 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:19:35.519 21:41:55 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:35.519 21:41:55 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:19:35.519 21:41:55 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:19:35.519 21:41:55 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:19:35.519 21:41:55 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:19:35.519 21:41:55 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:35.519 21:41:55 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:35.778 21:41:56 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:19:35.778 "name": "raid_bdev1", 00:19:35.778 "uuid": "35d165ae-82dc-41b4-8fdc-0d4fa7b65854", 00:19:35.778 "strip_size_kb": 0, 00:19:35.778 "state": "online", 00:19:35.778 "raid_level": "raid1", 00:19:35.778 "superblock": true, 00:19:35.778 "num_base_bdevs": 2, 00:19:35.778 "num_base_bdevs_discovered": 2, 00:19:35.778 "num_base_bdevs_operational": 2, 00:19:35.778 "process": { 00:19:35.778 "type": "rebuild", 00:19:35.778 "target": "spare", 00:19:35.778 "progress": { 00:19:35.778 "blocks": 57344, 00:19:35.778 "percent": 90 00:19:35.778 } 00:19:35.778 }, 00:19:35.778 "base_bdevs_list": [ 00:19:35.778 { 00:19:35.778 "name": "spare", 00:19:35.778 "uuid": "b3a8182a-1276-5427-bd35-767433e2b7ae", 00:19:35.778 "is_configured": true, 00:19:35.778 "data_offset": 2048, 00:19:35.778 "data_size": 63488 00:19:35.778 }, 00:19:35.778 { 00:19:35.778 "name": "BaseBdev2", 00:19:35.778 "uuid": "eb371e9a-1dfc-585b-aa83-d949f5658d6e", 00:19:35.778 "is_configured": true, 00:19:35.778 "data_offset": 2048, 00:19:35.778 "data_size": 63488 00:19:35.778 } 00:19:35.778 ] 00:19:35.778 }' 00:19:35.778 21:41:56 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:19:35.778 21:41:56 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:35.778 21:41:56 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:19:35.778 21:41:56 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:19:35.778 21:41:56 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:19:36.038 [2024-12-06 21:41:56.485177] bdev_raid.c:2568:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:19:36.038 [2024-12-06 21:41:56.485259] bdev_raid.c:2285:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:19:36.038 [2024-12-06 21:41:56.485447] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:36.974 21:41:57 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:19:36.974 21:41:57 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:36.974 21:41:57 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:19:36.974 21:41:57 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:19:36.974 21:41:57 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:19:36.974 21:41:57 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:19:36.974 21:41:57 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:36.974 21:41:57 -- bdev/bdev_raid.sh@188 -- 
# jq -r '.[] | select(.name == "raid_bdev1")' 00:19:37.233 21:41:57 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:19:37.233 "name": "raid_bdev1", 00:19:37.233 "uuid": "35d165ae-82dc-41b4-8fdc-0d4fa7b65854", 00:19:37.233 "strip_size_kb": 0, 00:19:37.233 "state": "online", 00:19:37.233 "raid_level": "raid1", 00:19:37.233 "superblock": true, 00:19:37.233 "num_base_bdevs": 2, 00:19:37.233 "num_base_bdevs_discovered": 2, 00:19:37.233 "num_base_bdevs_operational": 2, 00:19:37.233 "base_bdevs_list": [ 00:19:37.233 { 00:19:37.233 "name": "spare", 00:19:37.233 "uuid": "b3a8182a-1276-5427-bd35-767433e2b7ae", 00:19:37.233 "is_configured": true, 00:19:37.233 "data_offset": 2048, 00:19:37.233 "data_size": 63488 00:19:37.233 }, 00:19:37.233 { 00:19:37.233 "name": "BaseBdev2", 00:19:37.233 "uuid": "eb371e9a-1dfc-585b-aa83-d949f5658d6e", 00:19:37.233 "is_configured": true, 00:19:37.233 "data_offset": 2048, 00:19:37.233 "data_size": 63488 00:19:37.233 } 00:19:37.233 ] 00:19:37.233 }' 00:19:37.233 21:41:57 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:19:37.233 21:41:57 -- bdev/bdev_raid.sh@190 -- # [[ none == \r\e\b\u\i\l\d ]] 00:19:37.233 21:41:57 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:19:37.233 21:41:57 -- bdev/bdev_raid.sh@191 -- # [[ none == \s\p\a\r\e ]] 00:19:37.233 21:41:57 -- bdev/bdev_raid.sh@660 -- # break 00:19:37.233 21:41:57 -- bdev/bdev_raid.sh@666 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:37.233 21:41:57 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:19:37.233 21:41:57 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:19:37.233 21:41:57 -- bdev/bdev_raid.sh@185 -- # local target=none 00:19:37.233 21:41:57 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:19:37.233 21:41:57 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:37.233 21:41:57 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:37.492 21:41:57 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:19:37.492 "name": "raid_bdev1", 00:19:37.492 "uuid": "35d165ae-82dc-41b4-8fdc-0d4fa7b65854", 00:19:37.492 "strip_size_kb": 0, 00:19:37.492 "state": "online", 00:19:37.492 "raid_level": "raid1", 00:19:37.492 "superblock": true, 00:19:37.492 "num_base_bdevs": 2, 00:19:37.492 "num_base_bdevs_discovered": 2, 00:19:37.492 "num_base_bdevs_operational": 2, 00:19:37.492 "base_bdevs_list": [ 00:19:37.492 { 00:19:37.492 "name": "spare", 00:19:37.492 "uuid": "b3a8182a-1276-5427-bd35-767433e2b7ae", 00:19:37.492 "is_configured": true, 00:19:37.492 "data_offset": 2048, 00:19:37.492 "data_size": 63488 00:19:37.492 }, 00:19:37.492 { 00:19:37.492 "name": "BaseBdev2", 00:19:37.492 "uuid": "eb371e9a-1dfc-585b-aa83-d949f5658d6e", 00:19:37.492 "is_configured": true, 00:19:37.492 "data_offset": 2048, 00:19:37.492 "data_size": 63488 00:19:37.492 } 00:19:37.492 ] 00:19:37.492 }' 00:19:37.492 21:41:57 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:19:37.492 21:41:57 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:19:37.492 21:41:57 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:19:37.492 21:41:57 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:19:37.492 21:41:57 -- bdev/bdev_raid.sh@667 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:19:37.492 21:41:57 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:19:37.492 21:41:57 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 
00:19:37.492 21:41:57 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:19:37.492 21:41:57 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:19:37.492 21:41:57 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:19:37.492 21:41:57 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:37.492 21:41:57 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:37.492 21:41:57 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:37.492 21:41:57 -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:37.492 21:41:57 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:37.492 21:41:57 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:37.751 21:41:58 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:37.751 "name": "raid_bdev1", 00:19:37.751 "uuid": "35d165ae-82dc-41b4-8fdc-0d4fa7b65854", 00:19:37.751 "strip_size_kb": 0, 00:19:37.751 "state": "online", 00:19:37.751 "raid_level": "raid1", 00:19:37.751 "superblock": true, 00:19:37.751 "num_base_bdevs": 2, 00:19:37.751 "num_base_bdevs_discovered": 2, 00:19:37.751 "num_base_bdevs_operational": 2, 00:19:37.751 "base_bdevs_list": [ 00:19:37.751 { 00:19:37.751 "name": "spare", 00:19:37.751 "uuid": "b3a8182a-1276-5427-bd35-767433e2b7ae", 00:19:37.751 "is_configured": true, 00:19:37.751 "data_offset": 2048, 00:19:37.751 "data_size": 63488 00:19:37.751 }, 00:19:37.751 { 00:19:37.751 "name": "BaseBdev2", 00:19:37.751 "uuid": "eb371e9a-1dfc-585b-aa83-d949f5658d6e", 00:19:37.751 "is_configured": true, 00:19:37.751 "data_offset": 2048, 00:19:37.751 "data_size": 63488 00:19:37.751 } 00:19:37.751 ] 00:19:37.751 }' 00:19:37.751 21:41:58 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:37.751 21:41:58 -- common/autotest_common.sh@10 -- # set +x 00:19:38.010 21:41:58 -- bdev/bdev_raid.sh@670 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:19:38.270 [2024-12-06 21:41:58.749997] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:38.270 [2024-12-06 21:41:58.750042] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:38.270 [2024-12-06 21:41:58.750134] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:38.270 [2024-12-06 21:41:58.750230] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:38.270 [2024-12-06 21:41:58.750249] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000008d80 name raid_bdev1, state offline 00:19:38.529 21:41:58 -- bdev/bdev_raid.sh@671 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:38.529 21:41:58 -- bdev/bdev_raid.sh@671 -- # jq length 00:19:38.801 21:41:59 -- bdev/bdev_raid.sh@671 -- # [[ 0 == 0 ]] 00:19:38.801 21:41:59 -- bdev/bdev_raid.sh@673 -- # '[' false = true ']' 00:19:38.801 21:41:59 -- bdev/bdev_raid.sh@687 -- # nbd_start_disks /var/tmp/spdk-raid.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:19:38.801 21:41:59 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:19:38.801 21:41:59 -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:19:38.801 21:41:59 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:19:38.801 21:41:59 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:19:38.801 21:41:59 -- bdev/nbd_common.sh@11 -- # local nbd_list 
00:19:38.801 21:41:59 -- bdev/nbd_common.sh@12 -- # local i 00:19:38.801 21:41:59 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:19:38.801 21:41:59 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:19:38.801 21:41:59 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:19:39.096 /dev/nbd0 00:19:39.096 21:41:59 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:19:39.096 21:41:59 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:19:39.096 21:41:59 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:19:39.096 21:41:59 -- common/autotest_common.sh@867 -- # local i 00:19:39.096 21:41:59 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:19:39.096 21:41:59 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:19:39.096 21:41:59 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:19:39.096 21:41:59 -- common/autotest_common.sh@871 -- # break 00:19:39.096 21:41:59 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:19:39.096 21:41:59 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:19:39.096 21:41:59 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:39.096 1+0 records in 00:19:39.096 1+0 records out 00:19:39.096 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000206515 s, 19.8 MB/s 00:19:39.096 21:41:59 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:39.096 21:41:59 -- common/autotest_common.sh@884 -- # size=4096 00:19:39.096 21:41:59 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:39.096 21:41:59 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:19:39.096 21:41:59 -- common/autotest_common.sh@887 -- # return 0 00:19:39.096 21:41:59 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:19:39.096 21:41:59 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:19:39.096 21:41:59 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd1 00:19:39.371 /dev/nbd1 00:19:39.371 21:41:59 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:19:39.371 21:41:59 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:19:39.371 21:41:59 -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:19:39.371 21:41:59 -- common/autotest_common.sh@867 -- # local i 00:19:39.371 21:41:59 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:19:39.371 21:41:59 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:19:39.371 21:41:59 -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:19:39.371 21:41:59 -- common/autotest_common.sh@871 -- # break 00:19:39.371 21:41:59 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:19:39.371 21:41:59 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:19:39.371 21:41:59 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:39.371 1+0 records in 00:19:39.371 1+0 records out 00:19:39.371 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000303742 s, 13.5 MB/s 00:19:39.371 21:41:59 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:39.371 21:41:59 -- common/autotest_common.sh@884 -- # size=4096 00:19:39.371 21:41:59 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:39.371 21:41:59 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 
00:19:39.371 21:41:59 -- common/autotest_common.sh@887 -- # return 0 00:19:39.371 21:41:59 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:19:39.371 21:41:59 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:19:39.371 21:41:59 -- bdev/bdev_raid.sh@688 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:19:39.371 21:41:59 -- bdev/bdev_raid.sh@689 -- # nbd_stop_disks /var/tmp/spdk-raid.sock '/dev/nbd0 /dev/nbd1' 00:19:39.371 21:41:59 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:19:39.371 21:41:59 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:19:39.371 21:41:59 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:19:39.371 21:41:59 -- bdev/nbd_common.sh@51 -- # local i 00:19:39.371 21:41:59 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:39.371 21:41:59 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:19:39.938 21:42:00 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:19:39.939 21:42:00 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:19:39.939 21:42:00 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:19:39.939 21:42:00 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:39.939 21:42:00 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:39.939 21:42:00 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:19:39.939 21:42:00 -- bdev/nbd_common.sh@41 -- # break 00:19:39.939 21:42:00 -- bdev/nbd_common.sh@45 -- # return 0 00:19:39.939 21:42:00 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:39.939 21:42:00 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:19:39.939 21:42:00 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:19:39.939 21:42:00 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:19:39.939 21:42:00 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:19:39.939 21:42:00 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:39.939 21:42:00 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:39.939 21:42:00 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:19:39.939 21:42:00 -- bdev/nbd_common.sh@41 -- # break 00:19:39.939 21:42:00 -- bdev/nbd_common.sh@45 -- # return 0 00:19:39.939 21:42:00 -- bdev/bdev_raid.sh@692 -- # '[' true = true ']' 00:19:39.939 21:42:00 -- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}" 00:19:39.939 21:42:00 -- bdev/bdev_raid.sh@695 -- # '[' -z BaseBdev1 ']' 00:19:39.939 21:42:00 -- bdev/bdev_raid.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev1 00:19:40.197 21:42:00 -- bdev/bdev_raid.sh@699 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:19:40.456 [2024-12-06 21:42:00.915131] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:19:40.456 [2024-12-06 21:42:00.915240] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:40.456 [2024-12-06 21:42:00.915279] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000009c80 00:19:40.456 [2024-12-06 21:42:00.915295] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:40.456 [2024-12-06 21:42:00.918001] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:40.456 [2024-12-06 21:42:00.918047] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 
00:19:40.456 [2024-12-06 21:42:00.918190] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev BaseBdev1 00:19:40.456 [2024-12-06 21:42:00.918278] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:40.456 BaseBdev1 00:19:40.456 21:42:00 -- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}" 00:19:40.456 21:42:00 -- bdev/bdev_raid.sh@695 -- # '[' -z BaseBdev2 ']' 00:19:40.456 21:42:00 -- bdev/bdev_raid.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev2 00:19:40.714 21:42:01 -- bdev/bdev_raid.sh@699 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:19:40.971 [2024-12-06 21:42:01.419554] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:19:40.971 [2024-12-06 21:42:01.419682] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:40.971 [2024-12-06 21:42:01.419719] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x51600000a580 00:19:40.971 [2024-12-06 21:42:01.419750] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:40.971 [2024-12-06 21:42:01.420291] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:40.971 [2024-12-06 21:42:01.420334] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:19:40.971 [2024-12-06 21:42:01.420459] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev BaseBdev2 00:19:40.971 [2024-12-06 21:42:01.420479] bdev_raid.c:3237:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev2 (3) greater than existing raid bdev raid_bdev1 (1) 00:19:40.971 [2024-12-06 21:42:01.420493] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:40.971 [2024-12-06 21:42:01.420517] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x51600000a280 name raid_bdev1, state configuring 00:19:40.971 [2024-12-06 21:42:01.420595] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:40.971 BaseBdev2 00:19:40.971 21:42:01 -- bdev/bdev_raid.sh@701 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:19:41.229 21:42:01 -- bdev/bdev_raid.sh@702 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:19:41.487 [2024-12-06 21:42:01.919747] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:19:41.487 [2024-12-06 21:42:01.919860] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:41.487 [2024-12-06 21:42:01.919896] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x51600000ab80 00:19:41.487 [2024-12-06 21:42:01.919914] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:41.487 [2024-12-06 21:42:01.920476] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:41.487 [2024-12-06 21:42:01.920510] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:19:41.487 [2024-12-06 21:42:01.920624] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev spare 00:19:41.487 [2024-12-06 21:42:01.920661] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is 
claimed 00:19:41.487 spare 00:19:41.487 21:42:01 -- bdev/bdev_raid.sh@704 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:19:41.487 21:42:01 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:19:41.487 21:42:01 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:19:41.487 21:42:01 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:19:41.487 21:42:01 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:19:41.487 21:42:01 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:19:41.487 21:42:01 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:41.487 21:42:01 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:41.487 21:42:01 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:41.487 21:42:01 -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:41.487 21:42:01 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:41.487 21:42:01 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:41.746 [2024-12-06 21:42:02.020789] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x51600000a880 00:19:41.746 [2024-12-06 21:42:02.020857] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:19:41.746 [2024-12-06 21:42:02.020994] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000cc1390 00:19:41.746 [2024-12-06 21:42:02.021523] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x51600000a880 00:19:41.746 [2024-12-06 21:42:02.021569] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x51600000a880 00:19:41.746 [2024-12-06 21:42:02.021741] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:41.746 21:42:02 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:41.746 "name": "raid_bdev1", 00:19:41.746 "uuid": "35d165ae-82dc-41b4-8fdc-0d4fa7b65854", 00:19:41.746 "strip_size_kb": 0, 00:19:41.746 "state": "online", 00:19:41.746 "raid_level": "raid1", 00:19:41.746 "superblock": true, 00:19:41.746 "num_base_bdevs": 2, 00:19:41.746 "num_base_bdevs_discovered": 2, 00:19:41.746 "num_base_bdevs_operational": 2, 00:19:41.746 "base_bdevs_list": [ 00:19:41.746 { 00:19:41.746 "name": "spare", 00:19:41.746 "uuid": "b3a8182a-1276-5427-bd35-767433e2b7ae", 00:19:41.746 "is_configured": true, 00:19:41.746 "data_offset": 2048, 00:19:41.746 "data_size": 63488 00:19:41.746 }, 00:19:41.746 { 00:19:41.746 "name": "BaseBdev2", 00:19:41.746 "uuid": "eb371e9a-1dfc-585b-aa83-d949f5658d6e", 00:19:41.746 "is_configured": true, 00:19:41.746 "data_offset": 2048, 00:19:41.746 "data_size": 63488 00:19:41.746 } 00:19:41.746 ] 00:19:41.746 }' 00:19:41.746 21:42:02 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:41.746 21:42:02 -- common/autotest_common.sh@10 -- # set +x 00:19:42.312 21:42:02 -- bdev/bdev_raid.sh@705 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:42.312 21:42:02 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:19:42.312 21:42:02 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:19:42.312 21:42:02 -- bdev/bdev_raid.sh@185 -- # local target=none 00:19:42.312 21:42:02 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:19:42.312 21:42:02 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:42.312 21:42:02 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == 
"raid_bdev1")' 00:19:42.570 21:42:02 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:19:42.570 "name": "raid_bdev1", 00:19:42.570 "uuid": "35d165ae-82dc-41b4-8fdc-0d4fa7b65854", 00:19:42.570 "strip_size_kb": 0, 00:19:42.570 "state": "online", 00:19:42.570 "raid_level": "raid1", 00:19:42.570 "superblock": true, 00:19:42.570 "num_base_bdevs": 2, 00:19:42.570 "num_base_bdevs_discovered": 2, 00:19:42.570 "num_base_bdevs_operational": 2, 00:19:42.570 "base_bdevs_list": [ 00:19:42.570 { 00:19:42.570 "name": "spare", 00:19:42.570 "uuid": "b3a8182a-1276-5427-bd35-767433e2b7ae", 00:19:42.570 "is_configured": true, 00:19:42.570 "data_offset": 2048, 00:19:42.570 "data_size": 63488 00:19:42.570 }, 00:19:42.570 { 00:19:42.570 "name": "BaseBdev2", 00:19:42.570 "uuid": "eb371e9a-1dfc-585b-aa83-d949f5658d6e", 00:19:42.570 "is_configured": true, 00:19:42.570 "data_offset": 2048, 00:19:42.570 "data_size": 63488 00:19:42.570 } 00:19:42.570 ] 00:19:42.570 }' 00:19:42.570 21:42:02 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:19:42.570 21:42:02 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:19:42.570 21:42:02 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:19:42.570 21:42:02 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:19:42.570 21:42:02 -- bdev/bdev_raid.sh@706 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:42.570 21:42:02 -- bdev/bdev_raid.sh@706 -- # jq -r '.[].base_bdevs_list[0].name' 00:19:42.828 21:42:03 -- bdev/bdev_raid.sh@706 -- # [[ spare == \s\p\a\r\e ]] 00:19:42.828 21:42:03 -- bdev/bdev_raid.sh@709 -- # killprocess 78486 00:19:42.828 21:42:03 -- common/autotest_common.sh@936 -- # '[' -z 78486 ']' 00:19:42.828 21:42:03 -- common/autotest_common.sh@940 -- # kill -0 78486 00:19:42.828 21:42:03 -- common/autotest_common.sh@941 -- # uname 00:19:42.828 21:42:03 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:19:42.828 21:42:03 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 78486 00:19:42.828 killing process with pid 78486 00:19:42.828 Received shutdown signal, test time was about 60.000000 seconds 00:19:42.828 00:19:42.828 Latency(us) 00:19:42.828 [2024-12-06T21:42:03.325Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:42.828 [2024-12-06T21:42:03.325Z] =================================================================================================================== 00:19:42.828 [2024-12-06T21:42:03.325Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:42.828 21:42:03 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:19:42.828 21:42:03 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:19:42.828 21:42:03 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 78486' 00:19:42.828 21:42:03 -- common/autotest_common.sh@955 -- # kill 78486 00:19:42.828 21:42:03 -- common/autotest_common.sh@960 -- # wait 78486 00:19:42.828 [2024-12-06 21:42:03.173783] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:19:42.828 [2024-12-06 21:42:03.173897] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:42.828 [2024-12-06 21:42:03.174013] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:42.828 [2024-12-06 21:42:03.174059] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x51600000a880 name raid_bdev1, state offline 00:19:43.086 [2024-12-06 
21:42:03.403021] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:19:44.021 ************************************ 00:19:44.021 END TEST raid_rebuild_test_sb 00:19:44.021 ************************************ 00:19:44.021 21:42:04 -- bdev/bdev_raid.sh@711 -- # return 0 00:19:44.021 00:19:44.021 real 0m25.400s 00:19:44.021 user 0m34.262s 00:19:44.021 sys 0m4.605s 00:19:44.021 21:42:04 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:19:44.021 21:42:04 -- common/autotest_common.sh@10 -- # set +x 00:19:44.280 21:42:04 -- bdev/bdev_raid.sh@737 -- # run_test raid_rebuild_test_io raid_rebuild_test raid1 2 false true 00:19:44.280 21:42:04 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:19:44.280 21:42:04 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:19:44.280 21:42:04 -- common/autotest_common.sh@10 -- # set +x 00:19:44.280 ************************************ 00:19:44.280 START TEST raid_rebuild_test_io 00:19:44.280 ************************************ 00:19:44.280 21:42:04 -- common/autotest_common.sh@1114 -- # raid_rebuild_test raid1 2 false true 00:19:44.280 21:42:04 -- bdev/bdev_raid.sh@517 -- # local raid_level=raid1 00:19:44.280 21:42:04 -- bdev/bdev_raid.sh@518 -- # local num_base_bdevs=2 00:19:44.280 21:42:04 -- bdev/bdev_raid.sh@519 -- # local superblock=false 00:19:44.280 21:42:04 -- bdev/bdev_raid.sh@520 -- # local background_io=true 00:19:44.280 21:42:04 -- bdev/bdev_raid.sh@521 -- # (( i = 1 )) 00:19:44.280 21:42:04 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:19:44.280 21:42:04 -- bdev/bdev_raid.sh@523 -- # echo BaseBdev1 00:19:44.280 21:42:04 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:19:44.280 21:42:04 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:19:44.280 21:42:04 -- bdev/bdev_raid.sh@523 -- # echo BaseBdev2 00:19:44.280 21:42:04 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:19:44.280 21:42:04 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:19:44.280 21:42:04 -- bdev/bdev_raid.sh@521 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:19:44.280 21:42:04 -- bdev/bdev_raid.sh@521 -- # local base_bdevs 00:19:44.280 21:42:04 -- bdev/bdev_raid.sh@522 -- # local raid_bdev_name=raid_bdev1 00:19:44.280 21:42:04 -- bdev/bdev_raid.sh@523 -- # local strip_size 00:19:44.280 21:42:04 -- bdev/bdev_raid.sh@524 -- # local create_arg 00:19:44.280 21:42:04 -- bdev/bdev_raid.sh@525 -- # local raid_bdev_size 00:19:44.280 21:42:04 -- bdev/bdev_raid.sh@526 -- # local data_offset 00:19:44.280 21:42:04 -- bdev/bdev_raid.sh@528 -- # '[' raid1 '!=' raid1 ']' 00:19:44.280 21:42:04 -- bdev/bdev_raid.sh@536 -- # strip_size=0 00:19:44.280 21:42:04 -- bdev/bdev_raid.sh@539 -- # '[' false = true ']' 00:19:44.280 21:42:04 -- bdev/bdev_raid.sh@544 -- # raid_pid=79077 00:19:44.280 21:42:04 -- bdev/bdev_raid.sh@545 -- # waitforlisten 79077 /var/tmp/spdk-raid.sock 00:19:44.280 21:42:04 -- common/autotest_common.sh@829 -- # '[' -z 79077 ']' 00:19:44.280 21:42:04 -- bdev/bdev_raid.sh@543 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:19:44.280 21:42:04 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:19:44.280 21:42:04 -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:44.280 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 
00:19:44.280 21:42:04 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:19:44.280 21:42:04 -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:44.280 21:42:04 -- common/autotest_common.sh@10 -- # set +x 00:19:44.280 [2024-12-06 21:42:04.622750] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:19:44.280 I/O size of 3145728 is greater than zero copy threshold (65536). 00:19:44.280 Zero copy mechanism will not be used. 00:19:44.280 [2024-12-06 21:42:04.622948] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79077 ] 00:19:44.539 [2024-12-06 21:42:04.791170] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:44.539 [2024-12-06 21:42:04.977328] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:44.797 [2024-12-06 21:42:05.154908] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:45.055 21:42:05 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:45.055 21:42:05 -- common/autotest_common.sh@862 -- # return 0 00:19:45.055 21:42:05 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:19:45.055 21:42:05 -- bdev/bdev_raid.sh@549 -- # '[' false = true ']' 00:19:45.055 21:42:05 -- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:19:45.313 BaseBdev1 00:19:45.313 21:42:05 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:19:45.313 21:42:05 -- bdev/bdev_raid.sh@549 -- # '[' false = true ']' 00:19:45.313 21:42:05 -- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:19:45.572 BaseBdev2 00:19:45.572 21:42:06 -- bdev/bdev_raid.sh@558 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:19:45.830 spare_malloc 00:19:45.830 21:42:06 -- bdev/bdev_raid.sh@559 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:19:46.094 spare_delay 00:19:46.094 21:42:06 -- bdev/bdev_raid.sh@560 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:19:46.364 [2024-12-06 21:42:06.784790] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:19:46.364 [2024-12-06 21:42:06.784919] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:46.364 [2024-12-06 21:42:06.784969] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000007b80 00:19:46.364 [2024-12-06 21:42:06.784986] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:46.364 [2024-12-06 21:42:06.787630] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:46.364 [2024-12-06 21:42:06.787689] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:19:46.364 spare 00:19:46.364 21:42:06 -- bdev/bdev_raid.sh@563 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 00:19:46.622 [2024-12-06 21:42:06.992859] 
bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:46.622 [2024-12-06 21:42:06.994833] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:46.622 [2024-12-06 21:42:06.994946] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x516000008180 00:19:46.622 [2024-12-06 21:42:06.994966] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:19:46.622 [2024-12-06 21:42:06.995088] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d0000056c0 00:19:46.622 [2024-12-06 21:42:06.995612] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x516000008180 00:19:46.622 [2024-12-06 21:42:06.995653] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x516000008180 00:19:46.622 [2024-12-06 21:42:06.995865] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:46.622 21:42:07 -- bdev/bdev_raid.sh@564 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:19:46.622 21:42:07 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:19:46.622 21:42:07 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:19:46.622 21:42:07 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:19:46.622 21:42:07 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:19:46.622 21:42:07 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:19:46.622 21:42:07 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:46.622 21:42:07 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:46.622 21:42:07 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:46.622 21:42:07 -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:46.622 21:42:07 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:46.622 21:42:07 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:46.881 21:42:07 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:46.881 "name": "raid_bdev1", 00:19:46.881 "uuid": "30131713-7833-4ef2-b0ce-26d6a18adcd2", 00:19:46.881 "strip_size_kb": 0, 00:19:46.881 "state": "online", 00:19:46.881 "raid_level": "raid1", 00:19:46.881 "superblock": false, 00:19:46.881 "num_base_bdevs": 2, 00:19:46.881 "num_base_bdevs_discovered": 2, 00:19:46.881 "num_base_bdevs_operational": 2, 00:19:46.881 "base_bdevs_list": [ 00:19:46.881 { 00:19:46.881 "name": "BaseBdev1", 00:19:46.881 "uuid": "e826d45a-94bd-49dd-8a34-56aab1c3db13", 00:19:46.881 "is_configured": true, 00:19:46.881 "data_offset": 0, 00:19:46.881 "data_size": 65536 00:19:46.881 }, 00:19:46.881 { 00:19:46.881 "name": "BaseBdev2", 00:19:46.881 "uuid": "2ecd08a2-c2c5-4256-9706-95e262775fac", 00:19:46.881 "is_configured": true, 00:19:46.881 "data_offset": 0, 00:19:46.881 "data_size": 65536 00:19:46.881 } 00:19:46.881 ] 00:19:46.881 }' 00:19:46.881 21:42:07 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:46.881 21:42:07 -- common/autotest_common.sh@10 -- # set +x 00:19:47.139 21:42:07 -- bdev/bdev_raid.sh@567 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:19:47.139 21:42:07 -- bdev/bdev_raid.sh@567 -- # jq -r '.[].num_blocks' 00:19:47.399 [2024-12-06 21:42:07.805394] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:47.399 21:42:07 -- bdev/bdev_raid.sh@567 -- # raid_bdev_size=65536 00:19:47.399 21:42:07 -- 
bdev/bdev_raid.sh@570 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:47.399 21:42:07 -- bdev/bdev_raid.sh@570 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:19:47.658 21:42:08 -- bdev/bdev_raid.sh@570 -- # data_offset=0 00:19:47.658 21:42:08 -- bdev/bdev_raid.sh@572 -- # '[' true = true ']' 00:19:47.658 21:42:08 -- bdev/bdev_raid.sh@591 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:19:47.658 21:42:08 -- bdev/bdev_raid.sh@574 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:19:47.917 [2024-12-06 21:42:08.211734] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000005790 00:19:47.917 I/O size of 3145728 is greater than zero copy threshold (65536). 00:19:47.917 Zero copy mechanism will not be used. 00:19:47.917 Running I/O for 60 seconds... 00:19:47.917 [2024-12-06 21:42:08.269685] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:19:47.917 [2024-12-06 21:42:08.269898] bdev_raid.c:1835:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x50d000005790 00:19:47.917 21:42:08 -- bdev/bdev_raid.sh@594 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:47.917 21:42:08 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:19:47.917 21:42:08 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:19:47.917 21:42:08 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:19:47.917 21:42:08 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:19:47.917 21:42:08 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:19:47.917 21:42:08 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:47.917 21:42:08 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:47.917 21:42:08 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:47.917 21:42:08 -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:47.917 21:42:08 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:47.917 21:42:08 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:48.175 21:42:08 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:48.175 "name": "raid_bdev1", 00:19:48.175 "uuid": "30131713-7833-4ef2-b0ce-26d6a18adcd2", 00:19:48.175 "strip_size_kb": 0, 00:19:48.175 "state": "online", 00:19:48.175 "raid_level": "raid1", 00:19:48.175 "superblock": false, 00:19:48.175 "num_base_bdevs": 2, 00:19:48.175 "num_base_bdevs_discovered": 1, 00:19:48.175 "num_base_bdevs_operational": 1, 00:19:48.175 "base_bdevs_list": [ 00:19:48.175 { 00:19:48.175 "name": null, 00:19:48.175 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:48.175 "is_configured": false, 00:19:48.175 "data_offset": 0, 00:19:48.175 "data_size": 65536 00:19:48.175 }, 00:19:48.175 { 00:19:48.175 "name": "BaseBdev2", 00:19:48.175 "uuid": "2ecd08a2-c2c5-4256-9706-95e262775fac", 00:19:48.175 "is_configured": true, 00:19:48.175 "data_offset": 0, 00:19:48.175 "data_size": 65536 00:19:48.175 } 00:19:48.175 ] 00:19:48.175 }' 00:19:48.175 21:42:08 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:48.175 21:42:08 -- common/autotest_common.sh@10 -- # set +x 00:19:48.434 21:42:08 -- bdev/bdev_raid.sh@597 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:19:48.693 [2024-12-06 21:42:09.094016] 
bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:19:48.693 [2024-12-06 21:42:09.094085] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:48.693 [2024-12-06 21:42:09.150307] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000005860 00:19:48.693 21:42:09 -- bdev/bdev_raid.sh@598 -- # sleep 1 00:19:48.693 [2024-12-06 21:42:09.152757] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:19:48.952 [2024-12-06 21:42:09.267513] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:19:48.952 [2024-12-06 21:42:09.267937] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:19:48.952 [2024-12-06 21:42:09.389243] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:19:48.952 [2024-12-06 21:42:09.389444] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:19:49.521 [2024-12-06 21:42:09.873229] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:19:49.521 [2024-12-06 21:42:09.873487] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:19:49.780 21:42:10 -- bdev/bdev_raid.sh@601 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:49.780 21:42:10 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:19:49.780 21:42:10 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:19:49.780 21:42:10 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:19:49.780 21:42:10 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:19:49.780 21:42:10 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:49.780 21:42:10 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:50.040 [2024-12-06 21:42:10.391714] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:19:50.040 21:42:10 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:19:50.040 "name": "raid_bdev1", 00:19:50.040 "uuid": "30131713-7833-4ef2-b0ce-26d6a18adcd2", 00:19:50.040 "strip_size_kb": 0, 00:19:50.040 "state": "online", 00:19:50.040 "raid_level": "raid1", 00:19:50.040 "superblock": false, 00:19:50.040 "num_base_bdevs": 2, 00:19:50.040 "num_base_bdevs_discovered": 2, 00:19:50.040 "num_base_bdevs_operational": 2, 00:19:50.040 "process": { 00:19:50.040 "type": "rebuild", 00:19:50.040 "target": "spare", 00:19:50.040 "progress": { 00:19:50.040 "blocks": 14336, 00:19:50.040 "percent": 21 00:19:50.040 } 00:19:50.040 }, 00:19:50.040 "base_bdevs_list": [ 00:19:50.040 { 00:19:50.040 "name": "spare", 00:19:50.040 "uuid": "c409c2ca-1965-5b21-b1e1-3546dd5aa1f6", 00:19:50.040 "is_configured": true, 00:19:50.040 "data_offset": 0, 00:19:50.040 "data_size": 65536 00:19:50.040 }, 00:19:50.040 { 00:19:50.040 "name": "BaseBdev2", 00:19:50.040 "uuid": "2ecd08a2-c2c5-4256-9706-95e262775fac", 00:19:50.040 "is_configured": true, 00:19:50.040 "data_offset": 0, 00:19:50.040 "data_size": 65536 00:19:50.040 } 00:19:50.040 ] 00:19:50.040 }' 00:19:50.040 21:42:10 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:19:50.040 21:42:10 -- 
bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:50.040 21:42:10 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:19:50.040 21:42:10 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:19:50.040 21:42:10 -- bdev/bdev_raid.sh@604 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:19:50.298 [2024-12-06 21:42:10.658179] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:50.557 [2024-12-06 21:42:10.850870] bdev_raid.c:2294:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:19:50.557 [2024-12-06 21:42:10.859117] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:50.557 [2024-12-06 21:42:10.898195] bdev_raid.c:1835:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x50d000005790 00:19:50.557 21:42:10 -- bdev/bdev_raid.sh@607 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:50.557 21:42:10 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:19:50.557 21:42:10 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:19:50.557 21:42:10 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:19:50.557 21:42:10 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:19:50.557 21:42:10 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:19:50.557 21:42:10 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:50.557 21:42:10 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:50.557 21:42:10 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:50.557 21:42:10 -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:50.557 21:42:10 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:50.557 21:42:10 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:50.817 21:42:11 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:50.817 "name": "raid_bdev1", 00:19:50.817 "uuid": "30131713-7833-4ef2-b0ce-26d6a18adcd2", 00:19:50.817 "strip_size_kb": 0, 00:19:50.817 "state": "online", 00:19:50.817 "raid_level": "raid1", 00:19:50.817 "superblock": false, 00:19:50.817 "num_base_bdevs": 2, 00:19:50.817 "num_base_bdevs_discovered": 1, 00:19:50.817 "num_base_bdevs_operational": 1, 00:19:50.817 "base_bdevs_list": [ 00:19:50.817 { 00:19:50.817 "name": null, 00:19:50.817 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:50.817 "is_configured": false, 00:19:50.817 "data_offset": 0, 00:19:50.817 "data_size": 65536 00:19:50.817 }, 00:19:50.817 { 00:19:50.817 "name": "BaseBdev2", 00:19:50.817 "uuid": "2ecd08a2-c2c5-4256-9706-95e262775fac", 00:19:50.817 "is_configured": true, 00:19:50.817 "data_offset": 0, 00:19:50.817 "data_size": 65536 00:19:50.817 } 00:19:50.817 ] 00:19:50.817 }' 00:19:50.817 21:42:11 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:50.817 21:42:11 -- common/autotest_common.sh@10 -- # set +x 00:19:51.077 21:42:11 -- bdev/bdev_raid.sh@610 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:51.077 21:42:11 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:19:51.077 21:42:11 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:19:51.077 21:42:11 -- bdev/bdev_raid.sh@185 -- # local target=none 00:19:51.077 21:42:11 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:19:51.077 21:42:11 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:51.077 
21:42:11 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:51.336 21:42:11 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:19:51.336 "name": "raid_bdev1", 00:19:51.336 "uuid": "30131713-7833-4ef2-b0ce-26d6a18adcd2", 00:19:51.336 "strip_size_kb": 0, 00:19:51.336 "state": "online", 00:19:51.336 "raid_level": "raid1", 00:19:51.336 "superblock": false, 00:19:51.336 "num_base_bdevs": 2, 00:19:51.336 "num_base_bdevs_discovered": 1, 00:19:51.336 "num_base_bdevs_operational": 1, 00:19:51.336 "base_bdevs_list": [ 00:19:51.336 { 00:19:51.336 "name": null, 00:19:51.336 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:51.336 "is_configured": false, 00:19:51.336 "data_offset": 0, 00:19:51.336 "data_size": 65536 00:19:51.336 }, 00:19:51.336 { 00:19:51.336 "name": "BaseBdev2", 00:19:51.336 "uuid": "2ecd08a2-c2c5-4256-9706-95e262775fac", 00:19:51.336 "is_configured": true, 00:19:51.336 "data_offset": 0, 00:19:51.336 "data_size": 65536 00:19:51.336 } 00:19:51.336 ] 00:19:51.336 }' 00:19:51.336 21:42:11 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:19:51.336 21:42:11 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:19:51.336 21:42:11 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:19:51.336 21:42:11 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:19:51.336 21:42:11 -- bdev/bdev_raid.sh@613 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:19:51.596 [2024-12-06 21:42:11.867237] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:19:51.596 [2024-12-06 21:42:11.867302] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:51.596 21:42:11 -- bdev/bdev_raid.sh@614 -- # sleep 1 00:19:51.596 [2024-12-06 21:42:11.930251] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000005930 00:19:51.596 [2024-12-06 21:42:11.932216] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:19:51.596 [2024-12-06 21:42:12.059123] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:19:51.855 [2024-12-06 21:42:12.274144] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:19:51.855 [2024-12-06 21:42:12.274391] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:19:52.424 [2024-12-06 21:42:12.615226] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:19:52.424 [2024-12-06 21:42:12.622234] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:19:52.424 21:42:12 -- bdev/bdev_raid.sh@615 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:52.424 21:42:12 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:19:52.424 21:42:12 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:19:52.424 21:42:12 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:19:52.424 21:42:12 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:19:52.424 21:42:12 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:52.424 21:42:12 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:52.683 [2024-12-06 
21:42:13.131450] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:19:52.683 [2024-12-06 21:42:13.131778] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:19:52.683 21:42:13 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:19:52.683 "name": "raid_bdev1", 00:19:52.683 "uuid": "30131713-7833-4ef2-b0ce-26d6a18adcd2", 00:19:52.683 "strip_size_kb": 0, 00:19:52.683 "state": "online", 00:19:52.683 "raid_level": "raid1", 00:19:52.683 "superblock": false, 00:19:52.683 "num_base_bdevs": 2, 00:19:52.683 "num_base_bdevs_discovered": 2, 00:19:52.683 "num_base_bdevs_operational": 2, 00:19:52.683 "process": { 00:19:52.683 "type": "rebuild", 00:19:52.683 "target": "spare", 00:19:52.683 "progress": { 00:19:52.683 "blocks": 14336, 00:19:52.683 "percent": 21 00:19:52.683 } 00:19:52.683 }, 00:19:52.683 "base_bdevs_list": [ 00:19:52.683 { 00:19:52.683 "name": "spare", 00:19:52.683 "uuid": "c409c2ca-1965-5b21-b1e1-3546dd5aa1f6", 00:19:52.683 "is_configured": true, 00:19:52.683 "data_offset": 0, 00:19:52.683 "data_size": 65536 00:19:52.683 }, 00:19:52.683 { 00:19:52.683 "name": "BaseBdev2", 00:19:52.683 "uuid": "2ecd08a2-c2c5-4256-9706-95e262775fac", 00:19:52.683 "is_configured": true, 00:19:52.683 "data_offset": 0, 00:19:52.683 "data_size": 65536 00:19:52.683 } 00:19:52.683 ] 00:19:52.683 }' 00:19:52.683 21:42:13 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:19:52.683 21:42:13 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:52.684 21:42:13 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:19:52.684 21:42:13 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:19:52.684 21:42:13 -- bdev/bdev_raid.sh@617 -- # '[' false = true ']' 00:19:52.684 21:42:13 -- bdev/bdev_raid.sh@642 -- # local num_base_bdevs_operational=2 00:19:52.684 21:42:13 -- bdev/bdev_raid.sh@644 -- # '[' raid1 = raid1 ']' 00:19:52.684 21:42:13 -- bdev/bdev_raid.sh@644 -- # '[' 2 -gt 2 ']' 00:19:52.684 21:42:13 -- bdev/bdev_raid.sh@657 -- # local timeout=390 00:19:52.684 21:42:13 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:19:52.684 21:42:13 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:52.684 21:42:13 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:19:52.684 21:42:13 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:19:52.684 21:42:13 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:19:52.684 21:42:13 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:19:52.684 21:42:13 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:52.684 21:42:13 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:52.943 21:42:13 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:19:52.943 "name": "raid_bdev1", 00:19:52.943 "uuid": "30131713-7833-4ef2-b0ce-26d6a18adcd2", 00:19:52.943 "strip_size_kb": 0, 00:19:52.943 "state": "online", 00:19:52.943 "raid_level": "raid1", 00:19:52.943 "superblock": false, 00:19:52.943 "num_base_bdevs": 2, 00:19:52.943 "num_base_bdevs_discovered": 2, 00:19:52.943 "num_base_bdevs_operational": 2, 00:19:52.943 "process": { 00:19:52.943 "type": "rebuild", 00:19:52.943 "target": "spare", 00:19:52.943 "progress": { 00:19:52.943 "blocks": 20480, 00:19:52.943 "percent": 31 00:19:52.943 } 00:19:52.943 }, 00:19:52.943 "base_bdevs_list": [ 00:19:52.943 
{ 00:19:52.943 "name": "spare", 00:19:52.943 "uuid": "c409c2ca-1965-5b21-b1e1-3546dd5aa1f6", 00:19:52.943 "is_configured": true, 00:19:52.943 "data_offset": 0, 00:19:52.943 "data_size": 65536 00:19:52.943 }, 00:19:52.943 { 00:19:52.943 "name": "BaseBdev2", 00:19:52.943 "uuid": "2ecd08a2-c2c5-4256-9706-95e262775fac", 00:19:52.943 "is_configured": true, 00:19:52.943 "data_offset": 0, 00:19:52.943 "data_size": 65536 00:19:52.943 } 00:19:52.943 ] 00:19:52.943 }' 00:19:52.943 21:42:13 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:19:52.943 21:42:13 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:52.943 21:42:13 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:19:52.943 21:42:13 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:19:52.943 21:42:13 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:19:53.204 [2024-12-06 21:42:13.472612] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:19:53.204 [2024-12-06 21:42:13.700553] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720 00:19:53.481 [2024-12-06 21:42:13.811120] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:19:53.757 [2024-12-06 21:42:14.047453] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 32768 offset_begin: 30720 offset_end: 36864 00:19:53.757 [2024-12-06 21:42:14.154668] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864 00:19:54.016 21:42:14 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:19:54.016 21:42:14 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:54.016 21:42:14 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:19:54.016 21:42:14 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:19:54.016 21:42:14 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:19:54.016 21:42:14 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:19:54.016 21:42:14 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:54.016 21:42:14 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:54.016 [2024-12-06 21:42:14.468854] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 38912 offset_begin: 36864 offset_end: 43008 00:19:54.276 [2024-12-06 21:42:14.585584] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 40960 offset_begin: 36864 offset_end: 43008 00:19:54.276 21:42:14 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:19:54.276 "name": "raid_bdev1", 00:19:54.276 "uuid": "30131713-7833-4ef2-b0ce-26d6a18adcd2", 00:19:54.276 "strip_size_kb": 0, 00:19:54.276 "state": "online", 00:19:54.276 "raid_level": "raid1", 00:19:54.276 "superblock": false, 00:19:54.276 "num_base_bdevs": 2, 00:19:54.276 "num_base_bdevs_discovered": 2, 00:19:54.276 "num_base_bdevs_operational": 2, 00:19:54.276 "process": { 00:19:54.276 "type": "rebuild", 00:19:54.276 "target": "spare", 00:19:54.276 "progress": { 00:19:54.276 "blocks": 40960, 00:19:54.276 "percent": 62 00:19:54.276 } 00:19:54.276 }, 00:19:54.276 "base_bdevs_list": [ 00:19:54.276 { 00:19:54.276 "name": "spare", 00:19:54.276 "uuid": "c409c2ca-1965-5b21-b1e1-3546dd5aa1f6", 00:19:54.276 "is_configured": true, 00:19:54.276 
"data_offset": 0, 00:19:54.276 "data_size": 65536 00:19:54.276 }, 00:19:54.276 { 00:19:54.276 "name": "BaseBdev2", 00:19:54.276 "uuid": "2ecd08a2-c2c5-4256-9706-95e262775fac", 00:19:54.276 "is_configured": true, 00:19:54.276 "data_offset": 0, 00:19:54.276 "data_size": 65536 00:19:54.276 } 00:19:54.276 ] 00:19:54.276 }' 00:19:54.276 21:42:14 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:19:54.276 21:42:14 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:54.276 21:42:14 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:19:54.276 21:42:14 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:19:54.276 21:42:14 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:19:54.535 [2024-12-06 21:42:14.920520] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 45056 offset_begin: 43008 offset_end: 49152 00:19:54.535 [2024-12-06 21:42:14.920856] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 45056 offset_begin: 43008 offset_end: 49152 00:19:55.102 [2024-12-06 21:42:15.467993] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 53248 offset_begin: 49152 offset_end: 55296 00:19:55.361 21:42:15 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:19:55.361 21:42:15 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:55.361 21:42:15 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:19:55.361 21:42:15 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:19:55.361 21:42:15 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:19:55.361 21:42:15 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:19:55.361 21:42:15 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:55.361 21:42:15 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:55.361 [2024-12-06 21:42:15.790648] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 59392 offset_begin: 55296 offset_end: 61440 00:19:55.620 21:42:15 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:19:55.620 "name": "raid_bdev1", 00:19:55.620 "uuid": "30131713-7833-4ef2-b0ce-26d6a18adcd2", 00:19:55.620 "strip_size_kb": 0, 00:19:55.620 "state": "online", 00:19:55.620 "raid_level": "raid1", 00:19:55.620 "superblock": false, 00:19:55.620 "num_base_bdevs": 2, 00:19:55.620 "num_base_bdevs_discovered": 2, 00:19:55.620 "num_base_bdevs_operational": 2, 00:19:55.620 "process": { 00:19:55.620 "type": "rebuild", 00:19:55.620 "target": "spare", 00:19:55.620 "progress": { 00:19:55.620 "blocks": 59392, 00:19:55.620 "percent": 90 00:19:55.620 } 00:19:55.620 }, 00:19:55.620 "base_bdevs_list": [ 00:19:55.620 { 00:19:55.620 "name": "spare", 00:19:55.620 "uuid": "c409c2ca-1965-5b21-b1e1-3546dd5aa1f6", 00:19:55.620 "is_configured": true, 00:19:55.620 "data_offset": 0, 00:19:55.620 "data_size": 65536 00:19:55.620 }, 00:19:55.620 { 00:19:55.620 "name": "BaseBdev2", 00:19:55.620 "uuid": "2ecd08a2-c2c5-4256-9706-95e262775fac", 00:19:55.620 "is_configured": true, 00:19:55.620 "data_offset": 0, 00:19:55.620 "data_size": 65536 00:19:55.620 } 00:19:55.620 ] 00:19:55.620 }' 00:19:55.620 21:42:15 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:19:55.620 21:42:15 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:55.620 21:42:15 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:19:55.621 21:42:15 -- bdev/bdev_raid.sh@191 -- # [[ spare 
== \s\p\a\r\e ]] 00:19:55.621 21:42:15 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:19:55.880 [2024-12-06 21:42:16.231013] bdev_raid.c:2568:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:19:55.880 [2024-12-06 21:42:16.331065] bdev_raid.c:2285:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:19:55.880 [2024-12-06 21:42:16.332892] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:56.817 21:42:16 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:19:56.817 21:42:16 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:56.817 21:42:16 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:19:56.817 21:42:16 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:19:56.817 21:42:16 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:19:56.817 21:42:16 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:19:56.817 21:42:16 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:56.817 21:42:17 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:56.817 21:42:17 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:19:56.817 "name": "raid_bdev1", 00:19:56.817 "uuid": "30131713-7833-4ef2-b0ce-26d6a18adcd2", 00:19:56.817 "strip_size_kb": 0, 00:19:56.817 "state": "online", 00:19:56.817 "raid_level": "raid1", 00:19:56.817 "superblock": false, 00:19:56.817 "num_base_bdevs": 2, 00:19:56.817 "num_base_bdevs_discovered": 2, 00:19:56.817 "num_base_bdevs_operational": 2, 00:19:56.817 "base_bdevs_list": [ 00:19:56.817 { 00:19:56.817 "name": "spare", 00:19:56.817 "uuid": "c409c2ca-1965-5b21-b1e1-3546dd5aa1f6", 00:19:56.817 "is_configured": true, 00:19:56.817 "data_offset": 0, 00:19:56.817 "data_size": 65536 00:19:56.817 }, 00:19:56.817 { 00:19:56.817 "name": "BaseBdev2", 00:19:56.817 "uuid": "2ecd08a2-c2c5-4256-9706-95e262775fac", 00:19:56.817 "is_configured": true, 00:19:56.817 "data_offset": 0, 00:19:56.817 "data_size": 65536 00:19:56.817 } 00:19:56.817 ] 00:19:56.817 }' 00:19:56.817 21:42:17 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:19:56.817 21:42:17 -- bdev/bdev_raid.sh@190 -- # [[ none == \r\e\b\u\i\l\d ]] 00:19:56.817 21:42:17 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:19:56.817 21:42:17 -- bdev/bdev_raid.sh@191 -- # [[ none == \s\p\a\r\e ]] 00:19:56.817 21:42:17 -- bdev/bdev_raid.sh@660 -- # break 00:19:56.817 21:42:17 -- bdev/bdev_raid.sh@666 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:56.817 21:42:17 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:19:56.817 21:42:17 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:19:56.817 21:42:17 -- bdev/bdev_raid.sh@185 -- # local target=none 00:19:56.817 21:42:17 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:19:56.817 21:42:17 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:56.817 21:42:17 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:57.076 21:42:17 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:19:57.076 "name": "raid_bdev1", 00:19:57.076 "uuid": "30131713-7833-4ef2-b0ce-26d6a18adcd2", 00:19:57.076 "strip_size_kb": 0, 00:19:57.076 "state": "online", 00:19:57.076 "raid_level": "raid1", 00:19:57.076 "superblock": false, 00:19:57.077 "num_base_bdevs": 2, 00:19:57.077 "num_base_bdevs_discovered": 2, 00:19:57.077 
"num_base_bdevs_operational": 2, 00:19:57.077 "base_bdevs_list": [ 00:19:57.077 { 00:19:57.077 "name": "spare", 00:19:57.077 "uuid": "c409c2ca-1965-5b21-b1e1-3546dd5aa1f6", 00:19:57.077 "is_configured": true, 00:19:57.077 "data_offset": 0, 00:19:57.077 "data_size": 65536 00:19:57.077 }, 00:19:57.077 { 00:19:57.077 "name": "BaseBdev2", 00:19:57.077 "uuid": "2ecd08a2-c2c5-4256-9706-95e262775fac", 00:19:57.077 "is_configured": true, 00:19:57.077 "data_offset": 0, 00:19:57.077 "data_size": 65536 00:19:57.077 } 00:19:57.077 ] 00:19:57.077 }' 00:19:57.077 21:42:17 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:19:57.077 21:42:17 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:19:57.077 21:42:17 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:19:57.077 21:42:17 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:19:57.077 21:42:17 -- bdev/bdev_raid.sh@667 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:19:57.077 21:42:17 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:19:57.077 21:42:17 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:19:57.077 21:42:17 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:19:57.077 21:42:17 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:19:57.077 21:42:17 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:19:57.077 21:42:17 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:57.077 21:42:17 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:57.077 21:42:17 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:57.077 21:42:17 -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:57.077 21:42:17 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:57.077 21:42:17 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:57.335 21:42:17 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:57.335 "name": "raid_bdev1", 00:19:57.335 "uuid": "30131713-7833-4ef2-b0ce-26d6a18adcd2", 00:19:57.335 "strip_size_kb": 0, 00:19:57.335 "state": "online", 00:19:57.335 "raid_level": "raid1", 00:19:57.335 "superblock": false, 00:19:57.335 "num_base_bdevs": 2, 00:19:57.335 "num_base_bdevs_discovered": 2, 00:19:57.335 "num_base_bdevs_operational": 2, 00:19:57.335 "base_bdevs_list": [ 00:19:57.335 { 00:19:57.335 "name": "spare", 00:19:57.335 "uuid": "c409c2ca-1965-5b21-b1e1-3546dd5aa1f6", 00:19:57.335 "is_configured": true, 00:19:57.335 "data_offset": 0, 00:19:57.335 "data_size": 65536 00:19:57.335 }, 00:19:57.335 { 00:19:57.335 "name": "BaseBdev2", 00:19:57.335 "uuid": "2ecd08a2-c2c5-4256-9706-95e262775fac", 00:19:57.335 "is_configured": true, 00:19:57.335 "data_offset": 0, 00:19:57.335 "data_size": 65536 00:19:57.335 } 00:19:57.335 ] 00:19:57.335 }' 00:19:57.335 21:42:17 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:57.335 21:42:17 -- common/autotest_common.sh@10 -- # set +x 00:19:57.900 21:42:18 -- bdev/bdev_raid.sh@670 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:19:57.900 [2024-12-06 21:42:18.292236] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:57.900 [2024-12-06 21:42:18.292278] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:57.900 00:19:57.900 Latency(us) 00:19:57.900 [2024-12-06T21:42:18.397Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:57.900 
[2024-12-06T21:42:18.397Z] Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:19:57.901 raid_bdev1 : 10.11 99.25 297.74 0.00 0.00 13470.15 256.93 126782.37 00:19:57.901 [2024-12-06T21:42:18.398Z] =================================================================================================================== 00:19:57.901 [2024-12-06T21:42:18.398Z] Total : 99.25 297.74 0.00 0.00 13470.15 256.93 126782.37 00:19:57.901 0 00:19:57.901 [2024-12-06 21:42:18.336885] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:57.901 [2024-12-06 21:42:18.336924] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:57.901 [2024-12-06 21:42:18.337003] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:57.901 [2024-12-06 21:42:18.337018] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000008180 name raid_bdev1, state offline 00:19:57.901 21:42:18 -- bdev/bdev_raid.sh@671 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:57.901 21:42:18 -- bdev/bdev_raid.sh@671 -- # jq length 00:19:58.158 21:42:18 -- bdev/bdev_raid.sh@671 -- # [[ 0 == 0 ]] 00:19:58.158 21:42:18 -- bdev/bdev_raid.sh@673 -- # '[' true = true ']' 00:19:58.158 21:42:18 -- bdev/bdev_raid.sh@675 -- # nbd_start_disks /var/tmp/spdk-raid.sock spare /dev/nbd0 00:19:58.158 21:42:18 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:19:58.158 21:42:18 -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:19:58.158 21:42:18 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:19:58.158 21:42:18 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:19:58.158 21:42:18 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:19:58.159 21:42:18 -- bdev/nbd_common.sh@12 -- # local i 00:19:58.159 21:42:18 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:19:58.159 21:42:18 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:19:58.159 21:42:18 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd0 00:19:58.418 /dev/nbd0 00:19:58.418 21:42:18 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:19:58.418 21:42:18 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:19:58.418 21:42:18 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:19:58.418 21:42:18 -- common/autotest_common.sh@867 -- # local i 00:19:58.418 21:42:18 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:19:58.418 21:42:18 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:19:58.418 21:42:18 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:19:58.418 21:42:18 -- common/autotest_common.sh@871 -- # break 00:19:58.418 21:42:18 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:19:58.418 21:42:18 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:19:58.418 21:42:18 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:58.418 1+0 records in 00:19:58.418 1+0 records out 00:19:58.418 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000288028 s, 14.2 MB/s 00:19:58.418 21:42:18 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:58.418 21:42:18 -- common/autotest_common.sh@884 -- # size=4096 00:19:58.418 21:42:18 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:58.418 21:42:18 -- 
common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:19:58.418 21:42:18 -- common/autotest_common.sh@887 -- # return 0 00:19:58.418 21:42:18 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:19:58.418 21:42:18 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:19:58.418 21:42:18 -- bdev/bdev_raid.sh@676 -- # for bdev in "${base_bdevs[@]:1}" 00:19:58.418 21:42:18 -- bdev/bdev_raid.sh@677 -- # '[' -z BaseBdev2 ']' 00:19:58.418 21:42:18 -- bdev/bdev_raid.sh@680 -- # nbd_start_disks /var/tmp/spdk-raid.sock BaseBdev2 /dev/nbd1 00:19:58.418 21:42:18 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:19:58.418 21:42:18 -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev2') 00:19:58.418 21:42:18 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:19:58.418 21:42:18 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:19:58.418 21:42:18 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:19:58.418 21:42:18 -- bdev/nbd_common.sh@12 -- # local i 00:19:58.418 21:42:18 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:19:58.418 21:42:18 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:19:58.418 21:42:18 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev2 /dev/nbd1 00:19:58.677 /dev/nbd1 00:19:58.677 21:42:19 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:19:58.677 21:42:19 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:19:58.677 21:42:19 -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:19:58.677 21:42:19 -- common/autotest_common.sh@867 -- # local i 00:19:58.677 21:42:19 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:19:58.677 21:42:19 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:19:58.677 21:42:19 -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:19:58.677 21:42:19 -- common/autotest_common.sh@871 -- # break 00:19:58.677 21:42:19 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:19:58.677 21:42:19 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:19:58.678 21:42:19 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:58.678 1+0 records in 00:19:58.678 1+0 records out 00:19:58.678 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000413808 s, 9.9 MB/s 00:19:58.678 21:42:19 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:58.678 21:42:19 -- common/autotest_common.sh@884 -- # size=4096 00:19:58.678 21:42:19 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:58.678 21:42:19 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:19:58.678 21:42:19 -- common/autotest_common.sh@887 -- # return 0 00:19:58.678 21:42:19 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:19:58.678 21:42:19 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:19:58.678 21:42:19 -- bdev/bdev_raid.sh@681 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:19:58.937 21:42:19 -- bdev/bdev_raid.sh@682 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd1 00:19:58.937 21:42:19 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:19:58.937 21:42:19 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:19:58.937 21:42:19 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:19:58.937 21:42:19 -- bdev/nbd_common.sh@51 -- # local i 00:19:58.937 21:42:19 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:58.937 21:42:19 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
nbd_stop_disk /dev/nbd1 00:19:59.196 21:42:19 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:19:59.196 21:42:19 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:19:59.196 21:42:19 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:19:59.196 21:42:19 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:59.196 21:42:19 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:59.196 21:42:19 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:19:59.196 21:42:19 -- bdev/nbd_common.sh@41 -- # break 00:19:59.196 21:42:19 -- bdev/nbd_common.sh@45 -- # return 0 00:19:59.196 21:42:19 -- bdev/bdev_raid.sh@684 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:19:59.197 21:42:19 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:19:59.197 21:42:19 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:19:59.197 21:42:19 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:19:59.197 21:42:19 -- bdev/nbd_common.sh@51 -- # local i 00:19:59.197 21:42:19 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:59.197 21:42:19 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:19:59.197 21:42:19 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:19:59.197 21:42:19 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:19:59.197 21:42:19 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:19:59.197 21:42:19 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:59.197 21:42:19 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:59.197 21:42:19 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:19:59.197 21:42:19 -- bdev/nbd_common.sh@41 -- # break 00:19:59.197 21:42:19 -- bdev/nbd_common.sh@45 -- # return 0 00:19:59.197 21:42:19 -- bdev/bdev_raid.sh@692 -- # '[' false = true ']' 00:19:59.197 21:42:19 -- bdev/bdev_raid.sh@709 -- # killprocess 79077 00:19:59.197 21:42:19 -- common/autotest_common.sh@936 -- # '[' -z 79077 ']' 00:19:59.197 21:42:19 -- common/autotest_common.sh@940 -- # kill -0 79077 00:19:59.197 21:42:19 -- common/autotest_common.sh@941 -- # uname 00:19:59.197 21:42:19 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:19:59.197 21:42:19 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 79077 00:19:59.455 21:42:19 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:19:59.455 21:42:19 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:19:59.455 21:42:19 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 79077' 00:19:59.455 killing process with pid 79077 00:19:59.455 21:42:19 -- common/autotest_common.sh@955 -- # kill 79077 00:19:59.455 Received shutdown signal, test time was about 11.498572 seconds 00:19:59.455 00:19:59.455 Latency(us) 00:19:59.455 [2024-12-06T21:42:19.952Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:59.455 [2024-12-06T21:42:19.952Z] =================================================================================================================== 00:19:59.455 [2024-12-06T21:42:19.952Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:59.455 [2024-12-06 21:42:19.712468] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:19:59.455 21:42:19 -- common/autotest_common.sh@960 -- # wait 79077 00:19:59.455 [2024-12-06 21:42:19.874394] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:20:00.833 21:42:20 -- bdev/bdev_raid.sh@711 -- # return 0 00:20:00.833 00:20:00.833 real 0m16.352s 00:20:00.833 user 0m23.389s 00:20:00.833 sys 
0m1.925s 00:20:00.833 21:42:20 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:20:00.833 ************************************ 00:20:00.833 END TEST raid_rebuild_test_io 00:20:00.833 ************************************ 00:20:00.833 21:42:20 -- common/autotest_common.sh@10 -- # set +x 00:20:00.833 21:42:20 -- bdev/bdev_raid.sh@738 -- # run_test raid_rebuild_test_sb_io raid_rebuild_test raid1 2 true true 00:20:00.833 21:42:20 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:20:00.833 21:42:20 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:20:00.833 21:42:20 -- common/autotest_common.sh@10 -- # set +x 00:20:00.833 ************************************ 00:20:00.833 START TEST raid_rebuild_test_sb_io 00:20:00.833 ************************************ 00:20:00.833 21:42:20 -- common/autotest_common.sh@1114 -- # raid_rebuild_test raid1 2 true true 00:20:00.833 21:42:20 -- bdev/bdev_raid.sh@517 -- # local raid_level=raid1 00:20:00.833 21:42:20 -- bdev/bdev_raid.sh@518 -- # local num_base_bdevs=2 00:20:00.833 21:42:20 -- bdev/bdev_raid.sh@519 -- # local superblock=true 00:20:00.833 21:42:20 -- bdev/bdev_raid.sh@520 -- # local background_io=true 00:20:00.833 21:42:20 -- bdev/bdev_raid.sh@521 -- # (( i = 1 )) 00:20:00.833 21:42:20 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:20:00.833 21:42:20 -- bdev/bdev_raid.sh@523 -- # echo BaseBdev1 00:20:00.833 21:42:20 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:20:00.833 21:42:20 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:20:00.833 21:42:20 -- bdev/bdev_raid.sh@523 -- # echo BaseBdev2 00:20:00.833 21:42:20 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:20:00.833 21:42:20 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:20:00.833 21:42:20 -- bdev/bdev_raid.sh@521 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:20:00.833 21:42:20 -- bdev/bdev_raid.sh@521 -- # local base_bdevs 00:20:00.833 21:42:20 -- bdev/bdev_raid.sh@522 -- # local raid_bdev_name=raid_bdev1 00:20:00.833 21:42:20 -- bdev/bdev_raid.sh@523 -- # local strip_size 00:20:00.833 21:42:20 -- bdev/bdev_raid.sh@524 -- # local create_arg 00:20:00.833 21:42:20 -- bdev/bdev_raid.sh@525 -- # local raid_bdev_size 00:20:00.833 21:42:20 -- bdev/bdev_raid.sh@526 -- # local data_offset 00:20:00.833 21:42:20 -- bdev/bdev_raid.sh@528 -- # '[' raid1 '!=' raid1 ']' 00:20:00.833 21:42:20 -- bdev/bdev_raid.sh@536 -- # strip_size=0 00:20:00.833 21:42:20 -- bdev/bdev_raid.sh@539 -- # '[' true = true ']' 00:20:00.833 21:42:20 -- bdev/bdev_raid.sh@540 -- # create_arg+=' -s' 00:20:00.833 21:42:20 -- bdev/bdev_raid.sh@544 -- # raid_pid=79516 00:20:00.833 21:42:20 -- bdev/bdev_raid.sh@545 -- # waitforlisten 79516 /var/tmp/spdk-raid.sock 00:20:00.833 21:42:20 -- bdev/bdev_raid.sh@543 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:20:00.833 21:42:20 -- common/autotest_common.sh@829 -- # '[' -z 79516 ']' 00:20:00.833 21:42:20 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:20:00.833 21:42:20 -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:00.833 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:20:00.833 21:42:20 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 
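The sb_io variant starting here follows the same script path but appends -s to create_arg, so the raid is created with an on-disk superblock; that is why the info dumps below report "superblock": true with a data_offset of 2048 and data_size of 63488, instead of the 0/65536 seen in the run above. The degrade-and-rebuild cycle that both runs trace reduces to the following sketch (assembled from the RPC calls and jq filters visible in the log; the polling loop is a simplification of the bdev_raid.sh verify helpers, not their literal source):

    rpc='/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock'
    $rpc bdev_raid_remove_base_bdev BaseBdev1        # degrade the raid1 to one base bdev
    $rpc bdev_raid_add_base_bdev raid_bdev1 spare    # attach the spare; rebuild starts
    # Poll until the rebuild process disappears from the raid bdev info.
    while :; do
        info=$($rpc bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1")')
        [[ $(jq -r '.process.type // "none"' <<<"$info") == none ]] && break
        sleep 1
    done

While the loop runs, the same JSON exposes .process.target ("spare") and .process.progress.blocks, which is where the 21, 31, 62 and 90 percent progress samples in the earlier run came from.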
00:20:00.833 21:42:20 -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:00.833 21:42:20 -- common/autotest_common.sh@10 -- # set +x 00:20:00.833 [2024-12-06 21:42:21.011709] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:20:00.833 I/O size of 3145728 is greater than zero copy threshold (65536). 00:20:00.833 Zero copy mechanism will not be used. 00:20:00.833 [2024-12-06 21:42:21.011866] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79516 ] 00:20:00.833 [2024-12-06 21:42:21.163726] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:00.833 [2024-12-06 21:42:21.321389] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:01.092 [2024-12-06 21:42:21.479726] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:01.661 21:42:21 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:01.661 21:42:21 -- common/autotest_common.sh@862 -- # return 0 00:20:01.661 21:42:21 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:20:01.661 21:42:21 -- bdev/bdev_raid.sh@549 -- # '[' true = true ']' 00:20:01.661 21:42:21 -- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:20:01.921 BaseBdev1_malloc 00:20:01.921 21:42:22 -- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:20:01.921 [2024-12-06 21:42:22.388047] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:20:01.921 [2024-12-06 21:42:22.388133] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:01.921 [2024-12-06 21:42:22.388212] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000006980 00:20:01.921 [2024-12-06 21:42:22.388230] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:01.921 [2024-12-06 21:42:22.390834] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:01.921 [2024-12-06 21:42:22.390892] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:20:01.921 BaseBdev1 00:20:01.921 21:42:22 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:20:01.921 21:42:22 -- bdev/bdev_raid.sh@549 -- # '[' true = true ']' 00:20:01.921 21:42:22 -- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:20:02.181 BaseBdev2_malloc 00:20:02.181 21:42:22 -- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:20:02.439 [2024-12-06 21:42:22.826312] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:20:02.439 [2024-12-06 21:42:22.826395] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:02.439 [2024-12-06 21:42:22.826434] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000007580 00:20:02.439 [2024-12-06 21:42:22.826498] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:02.439 [2024-12-06 21:42:22.828872] vbdev_passthru.c: 704:vbdev_passthru_register: 
*NOTICE*: pt_bdev registered 00:20:02.439 [2024-12-06 21:42:22.828928] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:20:02.439 BaseBdev2 00:20:02.439 21:42:22 -- bdev/bdev_raid.sh@558 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:20:02.697 spare_malloc 00:20:02.697 21:42:23 -- bdev/bdev_raid.sh@559 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:20:02.955 spare_delay 00:20:02.955 21:42:23 -- bdev/bdev_raid.sh@560 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:20:03.224 [2024-12-06 21:42:23.503178] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:20:03.224 [2024-12-06 21:42:23.503252] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:03.224 [2024-12-06 21:42:23.503280] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000008780 00:20:03.224 [2024-12-06 21:42:23.503296] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:03.224 [2024-12-06 21:42:23.505521] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:03.224 [2024-12-06 21:42:23.505574] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:20:03.224 spare 00:20:03.224 21:42:23 -- bdev/bdev_raid.sh@563 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 00:20:03.482 [2024-12-06 21:42:23.739330] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:03.482 [2024-12-06 21:42:23.741463] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:20:03.482 [2024-12-06 21:42:23.741705] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x516000008d80 00:20:03.482 [2024-12-06 21:42:23.741757] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:20:03.482 [2024-12-06 21:42:23.741927] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d0000056c0 00:20:03.482 [2024-12-06 21:42:23.742327] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x516000008d80 00:20:03.482 [2024-12-06 21:42:23.742354] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x516000008d80 00:20:03.482 [2024-12-06 21:42:23.742576] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:03.482 21:42:23 -- bdev/bdev_raid.sh@564 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:20:03.482 21:42:23 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:20:03.482 21:42:23 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:20:03.482 21:42:23 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:20:03.482 21:42:23 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:20:03.482 21:42:23 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:20:03.482 21:42:23 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:03.483 21:42:23 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:20:03.483 21:42:23 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:03.483 21:42:23 -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:03.483 21:42:23 -- 
bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:03.483 21:42:23 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:03.483 21:42:23 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:03.483 "name": "raid_bdev1", 00:20:03.483 "uuid": "8db51a6f-aca1-4ca5-9d6a-ac3c0e02a564", 00:20:03.483 "strip_size_kb": 0, 00:20:03.483 "state": "online", 00:20:03.483 "raid_level": "raid1", 00:20:03.483 "superblock": true, 00:20:03.483 "num_base_bdevs": 2, 00:20:03.483 "num_base_bdevs_discovered": 2, 00:20:03.483 "num_base_bdevs_operational": 2, 00:20:03.483 "base_bdevs_list": [ 00:20:03.483 { 00:20:03.483 "name": "BaseBdev1", 00:20:03.483 "uuid": "80d5a4ff-893e-5ddc-828e-429ddcb73f94", 00:20:03.483 "is_configured": true, 00:20:03.483 "data_offset": 2048, 00:20:03.483 "data_size": 63488 00:20:03.483 }, 00:20:03.483 { 00:20:03.483 "name": "BaseBdev2", 00:20:03.483 "uuid": "83be6dd9-f5cc-5c7e-a7a2-904604758159", 00:20:03.483 "is_configured": true, 00:20:03.483 "data_offset": 2048, 00:20:03.483 "data_size": 63488 00:20:03.483 } 00:20:03.483 ] 00:20:03.483 }' 00:20:03.483 21:42:23 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:03.483 21:42:23 -- common/autotest_common.sh@10 -- # set +x 00:20:04.049 21:42:24 -- bdev/bdev_raid.sh@567 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:20:04.049 21:42:24 -- bdev/bdev_raid.sh@567 -- # jq -r '.[].num_blocks' 00:20:04.049 [2024-12-06 21:42:24.475789] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:04.049 21:42:24 -- bdev/bdev_raid.sh@567 -- # raid_bdev_size=63488 00:20:04.049 21:42:24 -- bdev/bdev_raid.sh@570 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:20:04.049 21:42:24 -- bdev/bdev_raid.sh@570 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:04.307 21:42:24 -- bdev/bdev_raid.sh@570 -- # data_offset=2048 00:20:04.307 21:42:24 -- bdev/bdev_raid.sh@572 -- # '[' true = true ']' 00:20:04.307 21:42:24 -- bdev/bdev_raid.sh@574 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:20:04.307 21:42:24 -- bdev/bdev_raid.sh@591 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:20:04.565 [2024-12-06 21:42:24.829858] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000005790 00:20:04.565 I/O size of 3145728 is greater than zero copy threshold (65536). 00:20:04.565 Zero copy mechanism will not be used. 00:20:04.565 Running I/O for 60 seconds... 
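The verify_raid_bdev_state helper traced above reduces to one idiom: fetch every raid bdev over the app's RPC socket, isolate the device under test by name with jq, and compare its fields against the expected values. A minimal standalone sketch of that idiom, assuming the same socket path, rpc.py location, and bdev name recorded in this run:

  # Query the running bdevperf app over its Unix-domain RPC socket,
  # then pick out the raid bdev under test.
  sock=/var/tmp/spdk-raid.sock
  info=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$sock" bdev_raid_get_bdevs all |
      jq -r '.[] | select(.name == "raid_bdev1")')
  # The fields the harness asserts on: state, raid level, and member counts.
  [[ $(jq -r '.state' <<<"$info") == online ]]
  [[ $(jq -r '.raid_level' <<<"$info") == raid1 ]]
  [[ $(jq -r '.num_base_bdevs_discovered' <<<"$info") -eq 2 ]]

After the bdev_raid_remove_base_bdev BaseBdev1 call issued above, the same query is expected to report num_base_bdevs_discovered dropping to 1 while state stays online, since a two-member raid1 keeps serving I/O with one mirror missing — which is exactly what the next state dump shows.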
00:20:04.565 [2024-12-06 21:42:24.981291] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:20:04.565 [2024-12-06 21:42:24.994360] bdev_raid.c:1835:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x50d000005790 00:20:04.565 21:42:25 -- bdev/bdev_raid.sh@594 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:20:04.565 21:42:25 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:20:04.565 21:42:25 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:20:04.565 21:42:25 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:20:04.565 21:42:25 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:20:04.565 21:42:25 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:20:04.565 21:42:25 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:04.565 21:42:25 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:20:04.565 21:42:25 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:04.565 21:42:25 -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:04.565 21:42:25 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:04.565 21:42:25 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:04.824 21:42:25 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:04.824 "name": "raid_bdev1", 00:20:04.824 "uuid": "8db51a6f-aca1-4ca5-9d6a-ac3c0e02a564", 00:20:04.824 "strip_size_kb": 0, 00:20:04.824 "state": "online", 00:20:04.824 "raid_level": "raid1", 00:20:04.824 "superblock": true, 00:20:04.824 "num_base_bdevs": 2, 00:20:04.824 "num_base_bdevs_discovered": 1, 00:20:04.824 "num_base_bdevs_operational": 1, 00:20:04.824 "base_bdevs_list": [ 00:20:04.824 { 00:20:04.824 "name": null, 00:20:04.824 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:04.824 "is_configured": false, 00:20:04.824 "data_offset": 2048, 00:20:04.824 "data_size": 63488 00:20:04.824 }, 00:20:04.824 { 00:20:04.824 "name": "BaseBdev2", 00:20:04.824 "uuid": "83be6dd9-f5cc-5c7e-a7a2-904604758159", 00:20:04.824 "is_configured": true, 00:20:04.824 "data_offset": 2048, 00:20:04.824 "data_size": 63488 00:20:04.824 } 00:20:04.824 ] 00:20:04.824 }' 00:20:04.824 21:42:25 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:04.824 21:42:25 -- common/autotest_common.sh@10 -- # set +x 00:20:05.083 21:42:25 -- bdev/bdev_raid.sh@597 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:20:05.342 [2024-12-06 21:42:25.745242] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:20:05.342 [2024-12-06 21:42:25.745314] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:20:05.342 [2024-12-06 21:42:25.794623] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000005860 00:20:05.342 [2024-12-06 21:42:25.796788] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:20:05.342 21:42:25 -- bdev/bdev_raid.sh@598 -- # sleep 1 00:20:05.600 [2024-12-06 21:42:25.913002] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:20:05.600 [2024-12-06 21:42:25.913464] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:20:05.600 [2024-12-06 21:42:26.023905] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 
6144 00:20:05.600 [2024-12-06 21:42:26.024104] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:20:06.166 [2024-12-06 21:42:26.376944] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:20:06.166 [2024-12-06 21:42:26.377419] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:20:06.166 [2024-12-06 21:42:26.510011] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:20:06.425 21:42:26 -- bdev/bdev_raid.sh@601 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:06.425 21:42:26 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:20:06.425 21:42:26 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:20:06.425 21:42:26 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:20:06.425 21:42:26 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:20:06.425 21:42:26 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:06.425 21:42:26 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:06.425 [2024-12-06 21:42:26.840034] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:20:06.683 21:42:27 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:20:06.683 "name": "raid_bdev1", 00:20:06.683 "uuid": "8db51a6f-aca1-4ca5-9d6a-ac3c0e02a564", 00:20:06.683 "strip_size_kb": 0, 00:20:06.683 "state": "online", 00:20:06.683 "raid_level": "raid1", 00:20:06.683 "superblock": true, 00:20:06.683 "num_base_bdevs": 2, 00:20:06.683 "num_base_bdevs_discovered": 2, 00:20:06.683 "num_base_bdevs_operational": 2, 00:20:06.683 "process": { 00:20:06.683 "type": "rebuild", 00:20:06.683 "target": "spare", 00:20:06.683 "progress": { 00:20:06.683 "blocks": 14336, 00:20:06.683 "percent": 22 00:20:06.683 } 00:20:06.683 }, 00:20:06.683 "base_bdevs_list": [ 00:20:06.683 { 00:20:06.683 "name": "spare", 00:20:06.683 "uuid": "f3c12d96-8ace-5cdb-b113-c4f537ac3808", 00:20:06.683 "is_configured": true, 00:20:06.683 "data_offset": 2048, 00:20:06.683 "data_size": 63488 00:20:06.683 }, 00:20:06.683 { 00:20:06.683 "name": "BaseBdev2", 00:20:06.683 "uuid": "83be6dd9-f5cc-5c7e-a7a2-904604758159", 00:20:06.683 "is_configured": true, 00:20:06.683 "data_offset": 2048, 00:20:06.683 "data_size": 63488 00:20:06.683 } 00:20:06.683 ] 00:20:06.683 }' 00:20:06.683 21:42:27 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:20:06.683 21:42:27 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:06.683 21:42:27 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:20:06.683 [2024-12-06 21:42:27.055431] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:20:06.683 [2024-12-06 21:42:27.055749] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:20:06.683 21:42:27 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:20:06.683 21:42:27 -- bdev/bdev_raid.sh@604 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:20:06.941 [2024-12-06 21:42:27.275804] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:20:06.942 
[2024-12-06 21:42:27.331822] bdev_raid.c:2294:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:20:06.942 [2024-12-06 21:42:27.347750] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:06.942 [2024-12-06 21:42:27.384858] bdev_raid.c:1835:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x50d000005790 00:20:06.942 21:42:27 -- bdev/bdev_raid.sh@607 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:20:06.942 21:42:27 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:20:06.942 21:42:27 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:20:06.942 21:42:27 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:20:06.942 21:42:27 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:20:06.942 21:42:27 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:20:06.942 21:42:27 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:06.942 21:42:27 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:20:06.942 21:42:27 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:06.942 21:42:27 -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:06.942 21:42:27 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:06.942 21:42:27 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:07.200 21:42:27 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:07.200 "name": "raid_bdev1", 00:20:07.201 "uuid": "8db51a6f-aca1-4ca5-9d6a-ac3c0e02a564", 00:20:07.201 "strip_size_kb": 0, 00:20:07.201 "state": "online", 00:20:07.201 "raid_level": "raid1", 00:20:07.201 "superblock": true, 00:20:07.201 "num_base_bdevs": 2, 00:20:07.201 "num_base_bdevs_discovered": 1, 00:20:07.201 "num_base_bdevs_operational": 1, 00:20:07.201 "base_bdevs_list": [ 00:20:07.201 { 00:20:07.201 "name": null, 00:20:07.201 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:07.201 "is_configured": false, 00:20:07.201 "data_offset": 2048, 00:20:07.201 "data_size": 63488 00:20:07.201 }, 00:20:07.201 { 00:20:07.201 "name": "BaseBdev2", 00:20:07.201 "uuid": "83be6dd9-f5cc-5c7e-a7a2-904604758159", 00:20:07.201 "is_configured": true, 00:20:07.201 "data_offset": 2048, 00:20:07.201 "data_size": 63488 00:20:07.201 } 00:20:07.201 ] 00:20:07.201 }' 00:20:07.201 21:42:27 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:07.201 21:42:27 -- common/autotest_common.sh@10 -- # set +x 00:20:07.769 21:42:28 -- bdev/bdev_raid.sh@610 -- # verify_raid_bdev_process raid_bdev1 none none 00:20:07.769 21:42:28 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:20:07.769 21:42:28 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:20:07.769 21:42:28 -- bdev/bdev_raid.sh@185 -- # local target=none 00:20:07.769 21:42:28 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:20:07.769 21:42:28 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:07.769 21:42:28 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:07.769 21:42:28 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:20:07.769 "name": "raid_bdev1", 00:20:07.769 "uuid": "8db51a6f-aca1-4ca5-9d6a-ac3c0e02a564", 00:20:07.769 "strip_size_kb": 0, 00:20:07.769 "state": "online", 00:20:07.769 "raid_level": "raid1", 00:20:07.769 "superblock": true, 00:20:07.769 "num_base_bdevs": 2, 00:20:07.769 "num_base_bdevs_discovered": 1, 00:20:07.769 
"num_base_bdevs_operational": 1, 00:20:07.769 "base_bdevs_list": [ 00:20:07.769 { 00:20:07.769 "name": null, 00:20:07.769 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:07.769 "is_configured": false, 00:20:07.769 "data_offset": 2048, 00:20:07.769 "data_size": 63488 00:20:07.769 }, 00:20:07.769 { 00:20:07.769 "name": "BaseBdev2", 00:20:07.769 "uuid": "83be6dd9-f5cc-5c7e-a7a2-904604758159", 00:20:07.769 "is_configured": true, 00:20:07.769 "data_offset": 2048, 00:20:07.769 "data_size": 63488 00:20:07.769 } 00:20:07.769 ] 00:20:07.769 }' 00:20:07.769 21:42:28 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:20:07.769 21:42:28 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:20:07.769 21:42:28 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:20:07.769 21:42:28 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:20:07.769 21:42:28 -- bdev/bdev_raid.sh@613 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:20:08.029 [2024-12-06 21:42:28.425191] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:20:08.029 [2024-12-06 21:42:28.425258] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:20:08.029 21:42:28 -- bdev/bdev_raid.sh@614 -- # sleep 1 00:20:08.029 [2024-12-06 21:42:28.493383] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000005930 00:20:08.029 [2024-12-06 21:42:28.495544] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:20:08.288 [2024-12-06 21:42:28.604678] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:20:08.288 [2024-12-06 21:42:28.605186] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:20:08.288 [2024-12-06 21:42:28.716378] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:20:08.288 [2024-12-06 21:42:28.716655] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:20:08.548 [2024-12-06 21:42:29.043087] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:20:08.548 [2024-12-06 21:42:29.043658] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:20:08.807 [2024-12-06 21:42:29.159379] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:20:08.807 [2024-12-06 21:42:29.159601] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:20:09.066 21:42:29 -- bdev/bdev_raid.sh@615 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:09.066 21:42:29 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:20:09.066 21:42:29 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:20:09.066 21:42:29 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:20:09.066 21:42:29 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:20:09.066 21:42:29 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:09.066 21:42:29 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:09.066 
[2024-12-06 21:42:29.517305] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:20:09.066 [2024-12-06 21:42:29.517569] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:20:09.325 21:42:29 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:20:09.325 "name": "raid_bdev1", 00:20:09.325 "uuid": "8db51a6f-aca1-4ca5-9d6a-ac3c0e02a564", 00:20:09.325 "strip_size_kb": 0, 00:20:09.325 "state": "online", 00:20:09.325 "raid_level": "raid1", 00:20:09.325 "superblock": true, 00:20:09.325 "num_base_bdevs": 2, 00:20:09.325 "num_base_bdevs_discovered": 2, 00:20:09.325 "num_base_bdevs_operational": 2, 00:20:09.325 "process": { 00:20:09.325 "type": "rebuild", 00:20:09.325 "target": "spare", 00:20:09.325 "progress": { 00:20:09.325 "blocks": 18432, 00:20:09.325 "percent": 29 00:20:09.325 } 00:20:09.325 }, 00:20:09.325 "base_bdevs_list": [ 00:20:09.325 { 00:20:09.325 "name": "spare", 00:20:09.325 "uuid": "f3c12d96-8ace-5cdb-b113-c4f537ac3808", 00:20:09.325 "is_configured": true, 00:20:09.325 "data_offset": 2048, 00:20:09.325 "data_size": 63488 00:20:09.325 }, 00:20:09.325 { 00:20:09.325 "name": "BaseBdev2", 00:20:09.325 "uuid": "83be6dd9-f5cc-5c7e-a7a2-904604758159", 00:20:09.325 "is_configured": true, 00:20:09.325 "data_offset": 2048, 00:20:09.325 "data_size": 63488 00:20:09.325 } 00:20:09.325 ] 00:20:09.325 }' 00:20:09.325 21:42:29 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:20:09.325 21:42:29 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:09.325 21:42:29 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:20:09.325 21:42:29 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:20:09.326 21:42:29 -- bdev/bdev_raid.sh@617 -- # '[' true = true ']' 00:20:09.326 21:42:29 -- bdev/bdev_raid.sh@617 -- # '[' = false ']' 00:20:09.326 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 617: [: =: unary operator expected 00:20:09.326 21:42:29 -- bdev/bdev_raid.sh@642 -- # local num_base_bdevs_operational=2 00:20:09.326 21:42:29 -- bdev/bdev_raid.sh@644 -- # '[' raid1 = raid1 ']' 00:20:09.326 21:42:29 -- bdev/bdev_raid.sh@644 -- # '[' 2 -gt 2 ']' 00:20:09.326 21:42:29 -- bdev/bdev_raid.sh@657 -- # local timeout=406 00:20:09.326 21:42:29 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:20:09.326 21:42:29 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:09.326 21:42:29 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:20:09.326 21:42:29 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:20:09.326 21:42:29 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:20:09.326 21:42:29 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:20:09.326 21:42:29 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:09.326 21:42:29 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:09.326 [2024-12-06 21:42:29.763314] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:20:09.586 21:42:30 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:20:09.586 "name": "raid_bdev1", 00:20:09.586 "uuid": "8db51a6f-aca1-4ca5-9d6a-ac3c0e02a564", 00:20:09.586 "strip_size_kb": 0, 00:20:09.586 "state": "online", 00:20:09.586 "raid_level": "raid1", 00:20:09.586 "superblock": true, 00:20:09.586 
"num_base_bdevs": 2, 00:20:09.586 "num_base_bdevs_discovered": 2, 00:20:09.586 "num_base_bdevs_operational": 2, 00:20:09.586 "process": { 00:20:09.586 "type": "rebuild", 00:20:09.586 "target": "spare", 00:20:09.586 "progress": { 00:20:09.586 "blocks": 24576, 00:20:09.586 "percent": 38 00:20:09.586 } 00:20:09.586 }, 00:20:09.586 "base_bdevs_list": [ 00:20:09.586 { 00:20:09.586 "name": "spare", 00:20:09.586 "uuid": "f3c12d96-8ace-5cdb-b113-c4f537ac3808", 00:20:09.586 "is_configured": true, 00:20:09.586 "data_offset": 2048, 00:20:09.586 "data_size": 63488 00:20:09.586 }, 00:20:09.586 { 00:20:09.586 "name": "BaseBdev2", 00:20:09.586 "uuid": "83be6dd9-f5cc-5c7e-a7a2-904604758159", 00:20:09.586 "is_configured": true, 00:20:09.586 "data_offset": 2048, 00:20:09.586 "data_size": 63488 00:20:09.586 } 00:20:09.586 ] 00:20:09.586 }' 00:20:09.586 21:42:30 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:20:09.586 21:42:30 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:09.586 21:42:30 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:20:09.586 21:42:30 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:20:09.586 21:42:30 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:20:09.845 [2024-12-06 21:42:30.107135] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720 00:20:10.413 [2024-12-06 21:42:30.818735] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 38912 offset_begin: 36864 offset_end: 43008 00:20:10.672 21:42:31 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:20:10.672 21:42:31 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:10.672 21:42:31 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:20:10.672 21:42:31 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:20:10.672 21:42:31 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:20:10.672 21:42:31 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:20:10.672 21:42:31 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:10.672 [2024-12-06 21:42:31.048588] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 40960 offset_begin: 36864 offset_end: 43008 00:20:10.672 21:42:31 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:10.931 21:42:31 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:20:10.931 "name": "raid_bdev1", 00:20:10.931 "uuid": "8db51a6f-aca1-4ca5-9d6a-ac3c0e02a564", 00:20:10.931 "strip_size_kb": 0, 00:20:10.931 "state": "online", 00:20:10.931 "raid_level": "raid1", 00:20:10.931 "superblock": true, 00:20:10.931 "num_base_bdevs": 2, 00:20:10.931 "num_base_bdevs_discovered": 2, 00:20:10.931 "num_base_bdevs_operational": 2, 00:20:10.931 "process": { 00:20:10.931 "type": "rebuild", 00:20:10.931 "target": "spare", 00:20:10.931 "progress": { 00:20:10.931 "blocks": 45056, 00:20:10.931 "percent": 70 00:20:10.931 } 00:20:10.931 }, 00:20:10.931 "base_bdevs_list": [ 00:20:10.931 { 00:20:10.931 "name": "spare", 00:20:10.931 "uuid": "f3c12d96-8ace-5cdb-b113-c4f537ac3808", 00:20:10.931 "is_configured": true, 00:20:10.931 "data_offset": 2048, 00:20:10.931 "data_size": 63488 00:20:10.931 }, 00:20:10.931 { 00:20:10.931 "name": "BaseBdev2", 00:20:10.931 "uuid": "83be6dd9-f5cc-5c7e-a7a2-904604758159", 00:20:10.931 "is_configured": true, 00:20:10.931 "data_offset": 2048, 00:20:10.931 "data_size": 63488 
00:20:10.931 } 00:20:10.931 ] 00:20:10.931 }' 00:20:10.931 21:42:31 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:20:10.931 21:42:31 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:10.931 21:42:31 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:20:10.931 21:42:31 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:20:10.931 21:42:31 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:20:11.190 [2024-12-06 21:42:31.585751] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 51200 offset_begin: 49152 offset_end: 55296 00:20:12.128 21:42:32 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:20:12.128 21:42:32 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:12.128 21:42:32 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:20:12.128 21:42:32 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:20:12.128 21:42:32 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:20:12.128 21:42:32 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:20:12.128 21:42:32 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:12.128 21:42:32 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:12.128 [2024-12-06 21:42:32.380574] bdev_raid.c:2568:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:20:12.128 [2024-12-06 21:42:32.480598] bdev_raid.c:2285:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:20:12.128 [2024-12-06 21:42:32.482236] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:12.128 21:42:32 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:20:12.128 "name": "raid_bdev1", 00:20:12.128 "uuid": "8db51a6f-aca1-4ca5-9d6a-ac3c0e02a564", 00:20:12.128 "strip_size_kb": 0, 00:20:12.128 "state": "online", 00:20:12.128 "raid_level": "raid1", 00:20:12.128 "superblock": true, 00:20:12.128 "num_base_bdevs": 2, 00:20:12.128 "num_base_bdevs_discovered": 2, 00:20:12.128 "num_base_bdevs_operational": 2, 00:20:12.128 "base_bdevs_list": [ 00:20:12.128 { 00:20:12.128 "name": "spare", 00:20:12.128 "uuid": "f3c12d96-8ace-5cdb-b113-c4f537ac3808", 00:20:12.128 "is_configured": true, 00:20:12.128 "data_offset": 2048, 00:20:12.128 "data_size": 63488 00:20:12.128 }, 00:20:12.128 { 00:20:12.128 "name": "BaseBdev2", 00:20:12.128 "uuid": "83be6dd9-f5cc-5c7e-a7a2-904604758159", 00:20:12.128 "is_configured": true, 00:20:12.128 "data_offset": 2048, 00:20:12.128 "data_size": 63488 00:20:12.128 } 00:20:12.128 ] 00:20:12.128 }' 00:20:12.128 21:42:32 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:20:12.128 21:42:32 -- bdev/bdev_raid.sh@190 -- # [[ none == \r\e\b\u\i\l\d ]] 00:20:12.128 21:42:32 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:20:12.128 21:42:32 -- bdev/bdev_raid.sh@191 -- # [[ none == \s\p\a\r\e ]] 00:20:12.128 21:42:32 -- bdev/bdev_raid.sh@660 -- # break 00:20:12.128 21:42:32 -- bdev/bdev_raid.sh@666 -- # verify_raid_bdev_process raid_bdev1 none none 00:20:12.128 21:42:32 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:20:12.128 21:42:32 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:20:12.128 21:42:32 -- bdev/bdev_raid.sh@185 -- # local target=none 00:20:12.128 21:42:32 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:20:12.128 21:42:32 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:12.128 
21:42:32 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:12.388 21:42:32 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:20:12.388 "name": "raid_bdev1", 00:20:12.388 "uuid": "8db51a6f-aca1-4ca5-9d6a-ac3c0e02a564", 00:20:12.388 "strip_size_kb": 0, 00:20:12.388 "state": "online", 00:20:12.388 "raid_level": "raid1", 00:20:12.388 "superblock": true, 00:20:12.388 "num_base_bdevs": 2, 00:20:12.388 "num_base_bdevs_discovered": 2, 00:20:12.388 "num_base_bdevs_operational": 2, 00:20:12.388 "base_bdevs_list": [ 00:20:12.388 { 00:20:12.388 "name": "spare", 00:20:12.388 "uuid": "f3c12d96-8ace-5cdb-b113-c4f537ac3808", 00:20:12.388 "is_configured": true, 00:20:12.388 "data_offset": 2048, 00:20:12.388 "data_size": 63488 00:20:12.388 }, 00:20:12.388 { 00:20:12.388 "name": "BaseBdev2", 00:20:12.388 "uuid": "83be6dd9-f5cc-5c7e-a7a2-904604758159", 00:20:12.388 "is_configured": true, 00:20:12.388 "data_offset": 2048, 00:20:12.388 "data_size": 63488 00:20:12.388 } 00:20:12.388 ] 00:20:12.388 }' 00:20:12.388 21:42:32 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:20:12.388 21:42:32 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:20:12.388 21:42:32 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:20:12.388 21:42:32 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:20:12.388 21:42:32 -- bdev/bdev_raid.sh@667 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:20:12.388 21:42:32 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:20:12.388 21:42:32 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:20:12.388 21:42:32 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:20:12.388 21:42:32 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:20:12.388 21:42:32 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:20:12.388 21:42:32 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:12.388 21:42:32 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:20:12.388 21:42:32 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:12.388 21:42:32 -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:12.388 21:42:32 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:12.388 21:42:32 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:12.648 21:42:33 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:12.648 "name": "raid_bdev1", 00:20:12.648 "uuid": "8db51a6f-aca1-4ca5-9d6a-ac3c0e02a564", 00:20:12.648 "strip_size_kb": 0, 00:20:12.648 "state": "online", 00:20:12.648 "raid_level": "raid1", 00:20:12.648 "superblock": true, 00:20:12.648 "num_base_bdevs": 2, 00:20:12.648 "num_base_bdevs_discovered": 2, 00:20:12.648 "num_base_bdevs_operational": 2, 00:20:12.648 "base_bdevs_list": [ 00:20:12.648 { 00:20:12.648 "name": "spare", 00:20:12.648 "uuid": "f3c12d96-8ace-5cdb-b113-c4f537ac3808", 00:20:12.648 "is_configured": true, 00:20:12.648 "data_offset": 2048, 00:20:12.648 "data_size": 63488 00:20:12.648 }, 00:20:12.648 { 00:20:12.648 "name": "BaseBdev2", 00:20:12.648 "uuid": "83be6dd9-f5cc-5c7e-a7a2-904604758159", 00:20:12.648 "is_configured": true, 00:20:12.648 "data_offset": 2048, 00:20:12.648 "data_size": 63488 00:20:12.648 } 00:20:12.648 ] 00:20:12.648 }' 00:20:12.648 21:42:33 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:12.648 21:42:33 -- common/autotest_common.sh@10 -- # set +x 00:20:12.908 21:42:33 -- bdev/bdev_raid.sh@670 -- 
# /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:20:13.167 [2024-12-06 21:42:33.644754] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:13.167 [2024-12-06 21:42:33.644956] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:13.429 00:20:13.429 Latency(us) 00:20:13.429 [2024-12-06T21:42:33.926Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:13.429 [2024-12-06T21:42:33.926Z] Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:20:13.429 raid_bdev1 : 8.87 108.42 325.27 0.00 0.00 11599.47 262.52 115819.99 00:20:13.429 [2024-12-06T21:42:33.926Z] =================================================================================================================== 00:20:13.429 [2024-12-06T21:42:33.926Z] Total : 108.42 325.27 0.00 0.00 11599.47 262.52 115819.99 00:20:13.429 [2024-12-06 21:42:33.720995] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:13.429 [2024-12-06 21:42:33.721211] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:13.429 0 00:20:13.429 [2024-12-06 21:42:33.721351] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:13.429 [2024-12-06 21:42:33.721370] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000008d80 name raid_bdev1, state offline 00:20:13.429 21:42:33 -- bdev/bdev_raid.sh@671 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:13.429 21:42:33 -- bdev/bdev_raid.sh@671 -- # jq length 00:20:13.702 21:42:34 -- bdev/bdev_raid.sh@671 -- # [[ 0 == 0 ]] 00:20:13.702 21:42:34 -- bdev/bdev_raid.sh@673 -- # '[' true = true ']' 00:20:13.702 21:42:34 -- bdev/bdev_raid.sh@675 -- # nbd_start_disks /var/tmp/spdk-raid.sock spare /dev/nbd0 00:20:13.702 21:42:34 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:20:13.702 21:42:34 -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:20:13.702 21:42:34 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:20:13.702 21:42:34 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:20:13.702 21:42:34 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:20:13.702 21:42:34 -- bdev/nbd_common.sh@12 -- # local i 00:20:13.702 21:42:34 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:20:13.702 21:42:34 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:20:13.702 21:42:34 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd0 00:20:13.975 /dev/nbd0 00:20:13.975 21:42:34 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:20:13.975 21:42:34 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:20:13.975 21:42:34 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:20:13.975 21:42:34 -- common/autotest_common.sh@867 -- # local i 00:20:13.975 21:42:34 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:20:13.975 21:42:34 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:20:13.975 21:42:34 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:20:13.975 21:42:34 -- common/autotest_common.sh@871 -- # break 00:20:13.975 21:42:34 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:20:13.975 21:42:34 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:20:13.975 21:42:34 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 
of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:20:13.975 1+0 records in 00:20:13.975 1+0 records out 00:20:13.975 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000242424 s, 16.9 MB/s 00:20:13.975 21:42:34 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:13.975 21:42:34 -- common/autotest_common.sh@884 -- # size=4096 00:20:13.975 21:42:34 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:13.975 21:42:34 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:20:13.975 21:42:34 -- common/autotest_common.sh@887 -- # return 0 00:20:13.975 21:42:34 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:20:13.975 21:42:34 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:20:13.975 21:42:34 -- bdev/bdev_raid.sh@676 -- # for bdev in "${base_bdevs[@]:1}" 00:20:13.975 21:42:34 -- bdev/bdev_raid.sh@677 -- # '[' -z BaseBdev2 ']' 00:20:13.975 21:42:34 -- bdev/bdev_raid.sh@680 -- # nbd_start_disks /var/tmp/spdk-raid.sock BaseBdev2 /dev/nbd1 00:20:13.975 21:42:34 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:20:13.975 21:42:34 -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev2') 00:20:13.975 21:42:34 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:20:13.975 21:42:34 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:20:13.975 21:42:34 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:20:13.975 21:42:34 -- bdev/nbd_common.sh@12 -- # local i 00:20:13.975 21:42:34 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:20:13.975 21:42:34 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:20:13.975 21:42:34 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev2 /dev/nbd1 00:20:14.234 /dev/nbd1 00:20:14.234 21:42:34 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:20:14.234 21:42:34 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:20:14.234 21:42:34 -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:20:14.234 21:42:34 -- common/autotest_common.sh@867 -- # local i 00:20:14.234 21:42:34 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:20:14.234 21:42:34 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:20:14.234 21:42:34 -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:20:14.234 21:42:34 -- common/autotest_common.sh@871 -- # break 00:20:14.234 21:42:34 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:20:14.234 21:42:34 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:20:14.234 21:42:34 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:20:14.234 1+0 records in 00:20:14.234 1+0 records out 00:20:14.234 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000232393 s, 17.6 MB/s 00:20:14.234 21:42:34 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:14.234 21:42:34 -- common/autotest_common.sh@884 -- # size=4096 00:20:14.234 21:42:34 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:14.234 21:42:34 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:20:14.234 21:42:34 -- common/autotest_common.sh@887 -- # return 0 00:20:14.234 21:42:34 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:20:14.234 21:42:34 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:20:14.234 21:42:34 -- bdev/bdev_raid.sh@681 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:20:14.234 21:42:34 -- bdev/bdev_raid.sh@682 -- # 
nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd1 00:20:14.234 21:42:34 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:20:14.234 21:42:34 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:20:14.234 21:42:34 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:20:14.234 21:42:34 -- bdev/nbd_common.sh@51 -- # local i 00:20:14.234 21:42:34 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:20:14.234 21:42:34 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:20:14.493 21:42:34 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:20:14.493 21:42:34 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:20:14.493 21:42:34 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:20:14.493 21:42:34 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:20:14.493 21:42:34 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:14.493 21:42:34 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:20:14.493 21:42:34 -- bdev/nbd_common.sh@41 -- # break 00:20:14.493 21:42:34 -- bdev/nbd_common.sh@45 -- # return 0 00:20:14.493 21:42:34 -- bdev/bdev_raid.sh@684 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:20:14.493 21:42:34 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:20:14.493 21:42:34 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:20:14.493 21:42:34 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:20:14.493 21:42:34 -- bdev/nbd_common.sh@51 -- # local i 00:20:14.493 21:42:34 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:20:14.493 21:42:34 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:20:14.754 21:42:35 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:20:14.754 21:42:35 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:20:14.754 21:42:35 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:20:14.754 21:42:35 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:20:14.754 21:42:35 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:14.754 21:42:35 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:20:14.754 21:42:35 -- bdev/nbd_common.sh@41 -- # break 00:20:14.754 21:42:35 -- bdev/nbd_common.sh@45 -- # return 0 00:20:14.754 21:42:35 -- bdev/bdev_raid.sh@692 -- # '[' true = true ']' 00:20:14.754 21:42:35 -- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}" 00:20:14.754 21:42:35 -- bdev/bdev_raid.sh@695 -- # '[' -z BaseBdev1 ']' 00:20:14.754 21:42:35 -- bdev/bdev_raid.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev1 00:20:15.017 21:42:35 -- bdev/bdev_raid.sh@699 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:20:15.017 [2024-12-06 21:42:35.512326] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:20:15.017 [2024-12-06 21:42:35.512434] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:15.017 [2024-12-06 21:42:35.512525] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000009f80 00:20:15.017 [2024-12-06 21:42:35.512558] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:15.274 [2024-12-06 21:42:35.515262] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:15.274 [2024-12-06 21:42:35.515304] vbdev_passthru.c: 705:vbdev_passthru_register: 
*NOTICE*: created pt_bdev for: BaseBdev1 00:20:15.274 [2024-12-06 21:42:35.515424] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev BaseBdev1 00:20:15.274 [2024-12-06 21:42:35.515532] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:15.274 BaseBdev1 00:20:15.274 21:42:35 -- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}" 00:20:15.274 21:42:35 -- bdev/bdev_raid.sh@695 -- # '[' -z BaseBdev2 ']' 00:20:15.274 21:42:35 -- bdev/bdev_raid.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev2 00:20:15.531 21:42:35 -- bdev/bdev_raid.sh@699 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:20:15.531 [2024-12-06 21:42:36.016603] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:20:15.531 [2024-12-06 21:42:36.016707] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:15.531 [2024-12-06 21:42:36.016744] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x51600000a880 00:20:15.531 [2024-12-06 21:42:36.016758] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:15.531 [2024-12-06 21:42:36.017239] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:15.531 [2024-12-06 21:42:36.017264] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:20:15.531 [2024-12-06 21:42:36.017379] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev BaseBdev2 00:20:15.531 [2024-12-06 21:42:36.017397] bdev_raid.c:3237:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev2 (3) greater than existing raid bdev raid_bdev1 (1) 00:20:15.531 [2024-12-06 21:42:36.017410] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:15.531 [2024-12-06 21:42:36.017433] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x51600000a580 name raid_bdev1, state configuring 00:20:15.531 [2024-12-06 21:42:36.017581] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:20:15.531 BaseBdev2 00:20:15.788 21:42:36 -- bdev/bdev_raid.sh@701 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:20:15.788 21:42:36 -- bdev/bdev_raid.sh@702 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:20:16.046 [2024-12-06 21:42:36.424764] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:20:16.046 [2024-12-06 21:42:36.424871] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:16.046 [2024-12-06 21:42:36.424902] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x51600000ae80 00:20:16.046 [2024-12-06 21:42:36.424918] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:16.046 [2024-12-06 21:42:36.425364] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:16.046 [2024-12-06 21:42:36.425390] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:20:16.046 [2024-12-06 21:42:36.425734] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev spare 00:20:16.046 [2024-12-06 21:42:36.425826] 
bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:20:16.046 spare 00:20:16.046 21:42:36 -- bdev/bdev_raid.sh@704 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:20:16.046 21:42:36 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:20:16.046 21:42:36 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:20:16.046 21:42:36 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:20:16.046 21:42:36 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:20:16.046 21:42:36 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:20:16.046 21:42:36 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:16.046 21:42:36 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:20:16.046 21:42:36 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:16.046 21:42:36 -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:16.046 21:42:36 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:16.046 21:42:36 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:16.046 [2024-12-06 21:42:36.526069] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x51600000ab80 00:20:16.046 [2024-12-06 21:42:36.526109] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:20:16.046 [2024-12-06 21:42:36.526285] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d00002a7e0 00:20:16.046 [2024-12-06 21:42:36.526788] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x51600000ab80 00:20:16.046 [2024-12-06 21:42:36.526806] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x51600000ab80 00:20:16.046 [2024-12-06 21:42:36.527010] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:16.304 21:42:36 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:16.304 "name": "raid_bdev1", 00:20:16.304 "uuid": "8db51a6f-aca1-4ca5-9d6a-ac3c0e02a564", 00:20:16.304 "strip_size_kb": 0, 00:20:16.304 "state": "online", 00:20:16.304 "raid_level": "raid1", 00:20:16.304 "superblock": true, 00:20:16.304 "num_base_bdevs": 2, 00:20:16.304 "num_base_bdevs_discovered": 2, 00:20:16.304 "num_base_bdevs_operational": 2, 00:20:16.304 "base_bdevs_list": [ 00:20:16.304 { 00:20:16.304 "name": "spare", 00:20:16.304 "uuid": "f3c12d96-8ace-5cdb-b113-c4f537ac3808", 00:20:16.304 "is_configured": true, 00:20:16.304 "data_offset": 2048, 00:20:16.304 "data_size": 63488 00:20:16.304 }, 00:20:16.304 { 00:20:16.304 "name": "BaseBdev2", 00:20:16.304 "uuid": "83be6dd9-f5cc-5c7e-a7a2-904604758159", 00:20:16.304 "is_configured": true, 00:20:16.304 "data_offset": 2048, 00:20:16.304 "data_size": 63488 00:20:16.304 } 00:20:16.304 ] 00:20:16.304 }' 00:20:16.304 21:42:36 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:16.304 21:42:36 -- common/autotest_common.sh@10 -- # set +x 00:20:16.562 21:42:36 -- bdev/bdev_raid.sh@705 -- # verify_raid_bdev_process raid_bdev1 none none 00:20:16.562 21:42:36 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:20:16.562 21:42:36 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:20:16.562 21:42:36 -- bdev/bdev_raid.sh@185 -- # local target=none 00:20:16.562 21:42:36 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:20:16.562 21:42:36 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:16.562 21:42:36 
-- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:16.820 21:42:37 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:20:16.820 "name": "raid_bdev1", 00:20:16.820 "uuid": "8db51a6f-aca1-4ca5-9d6a-ac3c0e02a564", 00:20:16.820 "strip_size_kb": 0, 00:20:16.820 "state": "online", 00:20:16.820 "raid_level": "raid1", 00:20:16.820 "superblock": true, 00:20:16.820 "num_base_bdevs": 2, 00:20:16.820 "num_base_bdevs_discovered": 2, 00:20:16.820 "num_base_bdevs_operational": 2, 00:20:16.820 "base_bdevs_list": [ 00:20:16.820 { 00:20:16.820 "name": "spare", 00:20:16.820 "uuid": "f3c12d96-8ace-5cdb-b113-c4f537ac3808", 00:20:16.820 "is_configured": true, 00:20:16.820 "data_offset": 2048, 00:20:16.820 "data_size": 63488 00:20:16.820 }, 00:20:16.820 { 00:20:16.820 "name": "BaseBdev2", 00:20:16.820 "uuid": "83be6dd9-f5cc-5c7e-a7a2-904604758159", 00:20:16.820 "is_configured": true, 00:20:16.820 "data_offset": 2048, 00:20:16.820 "data_size": 63488 00:20:16.820 } 00:20:16.820 ] 00:20:16.820 }' 00:20:16.820 21:42:37 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:20:16.820 21:42:37 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:20:16.820 21:42:37 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:20:16.820 21:42:37 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:20:16.820 21:42:37 -- bdev/bdev_raid.sh@706 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:16.820 21:42:37 -- bdev/bdev_raid.sh@706 -- # jq -r '.[].base_bdevs_list[0].name' 00:20:17.078 21:42:37 -- bdev/bdev_raid.sh@706 -- # [[ spare == \s\p\a\r\e ]] 00:20:17.078 21:42:37 -- bdev/bdev_raid.sh@709 -- # killprocess 79516 00:20:17.078 21:42:37 -- common/autotest_common.sh@936 -- # '[' -z 79516 ']' 00:20:17.078 21:42:37 -- common/autotest_common.sh@940 -- # kill -0 79516 00:20:17.078 21:42:37 -- common/autotest_common.sh@941 -- # uname 00:20:17.079 21:42:37 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:20:17.079 21:42:37 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 79516 00:20:17.079 killing process with pid 79516 00:20:17.079 Received shutdown signal, test time was about 12.634175 seconds 00:20:17.079 00:20:17.079 Latency(us) 00:20:17.079 [2024-12-06T21:42:37.576Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:17.079 [2024-12-06T21:42:37.576Z] =================================================================================================================== 00:20:17.079 [2024-12-06T21:42:37.576Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:17.079 21:42:37 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:20:17.079 21:42:37 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:20:17.079 21:42:37 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 79516' 00:20:17.079 21:42:37 -- common/autotest_common.sh@955 -- # kill 79516 00:20:17.079 [2024-12-06 21:42:37.466226] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:20:17.079 21:42:37 -- common/autotest_common.sh@960 -- # wait 79516 00:20:17.079 [2024-12-06 21:42:37.466310] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:17.079 [2024-12-06 21:42:37.466387] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:17.079 [2024-12-06 21:42:37.466405] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x51600000ab80 name raid_bdev1, state 
offline 00:20:17.337 [2024-12-06 21:42:37.616231] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:20:18.716 21:42:38 -- bdev/bdev_raid.sh@711 -- # return 0 00:20:18.716 00:20:18.716 real 0m17.827s 00:20:18.716 user 0m26.959s 00:20:18.716 sys 0m2.242s 00:20:18.716 ************************************ 00:20:18.716 END TEST raid_rebuild_test_sb_io 00:20:18.716 ************************************ 00:20:18.716 21:42:38 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:20:18.716 21:42:38 -- common/autotest_common.sh@10 -- # set +x 00:20:18.716 21:42:38 -- bdev/bdev_raid.sh@734 -- # for n in 2 4 00:20:18.716 21:42:38 -- bdev/bdev_raid.sh@735 -- # run_test raid_rebuild_test raid_rebuild_test raid1 4 false false 00:20:18.716 21:42:38 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:20:18.716 21:42:38 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:20:18.716 21:42:38 -- common/autotest_common.sh@10 -- # set +x 00:20:18.716 ************************************ 00:20:18.716 START TEST raid_rebuild_test 00:20:18.716 ************************************ 00:20:18.716 21:42:38 -- common/autotest_common.sh@1114 -- # raid_rebuild_test raid1 4 false false 00:20:18.716 21:42:38 -- bdev/bdev_raid.sh@517 -- # local raid_level=raid1 00:20:18.716 21:42:38 -- bdev/bdev_raid.sh@518 -- # local num_base_bdevs=4 00:20:18.716 21:42:38 -- bdev/bdev_raid.sh@519 -- # local superblock=false 00:20:18.716 21:42:38 -- bdev/bdev_raid.sh@520 -- # local background_io=false 00:20:18.716 21:42:38 -- bdev/bdev_raid.sh@521 -- # (( i = 1 )) 00:20:18.716 21:42:38 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:20:18.716 21:42:38 -- bdev/bdev_raid.sh@523 -- # echo BaseBdev1 00:20:18.716 21:42:38 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:20:18.716 21:42:38 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:20:18.716 21:42:38 -- bdev/bdev_raid.sh@523 -- # echo BaseBdev2 00:20:18.716 21:42:38 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:20:18.716 21:42:38 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:20:18.716 21:42:38 -- bdev/bdev_raid.sh@523 -- # echo BaseBdev3 00:20:18.716 21:42:38 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:20:18.716 21:42:38 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:20:18.716 21:42:38 -- bdev/bdev_raid.sh@523 -- # echo BaseBdev4 00:20:18.716 21:42:38 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:20:18.716 21:42:38 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:20:18.716 21:42:38 -- bdev/bdev_raid.sh@521 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:20:18.716 21:42:38 -- bdev/bdev_raid.sh@521 -- # local base_bdevs 00:20:18.716 21:42:38 -- bdev/bdev_raid.sh@522 -- # local raid_bdev_name=raid_bdev1 00:20:18.716 21:42:38 -- bdev/bdev_raid.sh@523 -- # local strip_size 00:20:18.716 21:42:38 -- bdev/bdev_raid.sh@524 -- # local create_arg 00:20:18.716 21:42:38 -- bdev/bdev_raid.sh@525 -- # local raid_bdev_size 00:20:18.716 21:42:38 -- bdev/bdev_raid.sh@526 -- # local data_offset 00:20:18.717 21:42:38 -- bdev/bdev_raid.sh@528 -- # '[' raid1 '!=' raid1 ']' 00:20:18.717 21:42:38 -- bdev/bdev_raid.sh@536 -- # strip_size=0 00:20:18.717 21:42:38 -- bdev/bdev_raid.sh@539 -- # '[' false = true ']' 00:20:18.717 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 
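Here the harness launches bdevperf idle (-z) so the raid bdev can be assembled over RPC before any traffic runs; the "Waiting for process to start up..." message comes from the waitforlisten helper polling the socket. A condensed sketch of that startup handshake, using the flags and paths logged for this run (waitforlisten is the common/autotest_common.sh helper):

  # Start bdevperf idle (-z): it opens the RPC socket and waits for
  # perform_tests instead of running I/O immediately.
  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
      -r /var/tmp/spdk-raid.sock -T raid_bdev1 \
      -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid &
  raid_pid=$!
  # Block until the app answers on the socket, then build base bdevs over RPC.
  waitforlisten "$raid_pid" /var/tmp/spdk-raid.sock
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock \
      bdev_malloc_create 32 512 -b BaseBdev1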
00:20:18.717 21:42:38 -- bdev/bdev_raid.sh@544 -- # raid_pid=80005 00:20:18.717 21:42:38 -- bdev/bdev_raid.sh@545 -- # waitforlisten 80005 /var/tmp/spdk-raid.sock 00:20:18.717 21:42:38 -- bdev/bdev_raid.sh@543 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:20:18.717 21:42:38 -- common/autotest_common.sh@829 -- # '[' -z 80005 ']' 00:20:18.717 21:42:38 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:20:18.717 21:42:38 -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:18.717 21:42:38 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:20:18.717 21:42:38 -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:18.717 21:42:38 -- common/autotest_common.sh@10 -- # set +x 00:20:18.717 I/O size of 3145728 is greater than zero copy threshold (65536). 00:20:18.717 Zero copy mechanism will not be used. 00:20:18.717 [2024-12-06 21:42:38.909726] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:20:18.717 [2024-12-06 21:42:38.909901] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80005 ] 00:20:18.717 [2024-12-06 21:42:39.081089] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:18.976 [2024-12-06 21:42:39.244414] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:18.976 [2024-12-06 21:42:39.399640] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:19.546 21:42:39 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:19.546 21:42:39 -- common/autotest_common.sh@862 -- # return 0 00:20:19.546 21:42:39 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:20:19.546 21:42:39 -- bdev/bdev_raid.sh@549 -- # '[' false = true ']' 00:20:19.546 21:42:39 -- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:20:19.806 BaseBdev1 00:20:19.806 21:42:40 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:20:19.806 21:42:40 -- bdev/bdev_raid.sh@549 -- # '[' false = true ']' 00:20:19.806 21:42:40 -- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:20:19.806 BaseBdev2 00:20:19.806 21:42:40 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:20:19.806 21:42:40 -- bdev/bdev_raid.sh@549 -- # '[' false = true ']' 00:20:19.806 21:42:40 -- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:20:20.066 BaseBdev3 00:20:20.066 21:42:40 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:20:20.066 21:42:40 -- bdev/bdev_raid.sh@549 -- # '[' false = true ']' 00:20:20.066 21:42:40 -- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:20:20.326 BaseBdev4 00:20:20.326 21:42:40 -- bdev/bdev_raid.sh@558 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:20:20.585 spare_malloc 00:20:20.585 21:42:41 -- bdev/bdev_raid.sh@559 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:20:20.846 spare_delay 00:20:20.846 21:42:41 -- bdev/bdev_raid.sh@560 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:20:21.105 [2024-12-06 21:42:41.434070] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:20:21.105 [2024-12-06 21:42:41.434156] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:21.105 [2024-12-06 21:42:41.434186] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000008780 00:20:21.105 [2024-12-06 21:42:41.434200] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:21.105 [2024-12-06 21:42:41.437025] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:21.105 [2024-12-06 21:42:41.437087] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:20:21.105 spare 00:20:21.105 21:42:41 -- bdev/bdev_raid.sh@563 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 00:20:21.365 [2024-12-06 21:42:41.626173] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:21.365 [2024-12-06 21:42:41.627973] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:20:21.365 [2024-12-06 21:42:41.628226] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:20:21.365 [2024-12-06 21:42:41.628300] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:20:21.365 [2024-12-06 21:42:41.628396] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x516000008d80 00:20:21.365 [2024-12-06 21:42:41.628416] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:20:21.365 [2024-12-06 21:42:41.628674] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000005860 00:20:21.365 [2024-12-06 21:42:41.629062] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x516000008d80 00:20:21.365 [2024-12-06 21:42:41.629079] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x516000008d80 00:20:21.365 [2024-12-06 21:42:41.629250] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:21.365 21:42:41 -- bdev/bdev_raid.sh@564 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:20:21.365 21:42:41 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:20:21.365 21:42:41 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:20:21.365 21:42:41 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:20:21.365 21:42:41 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:20:21.365 21:42:41 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:20:21.365 21:42:41 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:21.365 21:42:41 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:20:21.365 21:42:41 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:21.365 21:42:41 -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:21.365 21:42:41 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:21.365 21:42:41 -- 
bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:21.625 21:42:41 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:21.625 "name": "raid_bdev1", 00:20:21.625 "uuid": "5690fd55-d748-4031-8128-3acd5ab18639", 00:20:21.625 "strip_size_kb": 0, 00:20:21.625 "state": "online", 00:20:21.625 "raid_level": "raid1", 00:20:21.625 "superblock": false, 00:20:21.625 "num_base_bdevs": 4, 00:20:21.625 "num_base_bdevs_discovered": 4, 00:20:21.625 "num_base_bdevs_operational": 4, 00:20:21.625 "base_bdevs_list": [ 00:20:21.625 { 00:20:21.625 "name": "BaseBdev1", 00:20:21.625 "uuid": "4ade92de-b4e5-4a10-b9ed-966881002218", 00:20:21.625 "is_configured": true, 00:20:21.625 "data_offset": 0, 00:20:21.625 "data_size": 65536 00:20:21.625 }, 00:20:21.625 { 00:20:21.625 "name": "BaseBdev2", 00:20:21.625 "uuid": "3dc14c37-8987-4f05-8380-0ba094e53b53", 00:20:21.625 "is_configured": true, 00:20:21.625 "data_offset": 0, 00:20:21.625 "data_size": 65536 00:20:21.625 }, 00:20:21.625 { 00:20:21.625 "name": "BaseBdev3", 00:20:21.625 "uuid": "518605c5-8f8d-4395-806b-54c957bf955a", 00:20:21.625 "is_configured": true, 00:20:21.625 "data_offset": 0, 00:20:21.625 "data_size": 65536 00:20:21.625 }, 00:20:21.625 { 00:20:21.625 "name": "BaseBdev4", 00:20:21.625 "uuid": "50147f2f-209b-4c70-8c08-0153a8fad57c", 00:20:21.625 "is_configured": true, 00:20:21.625 "data_offset": 0, 00:20:21.625 "data_size": 65536 00:20:21.625 } 00:20:21.625 ] 00:20:21.625 }' 00:20:21.625 21:42:41 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:21.625 21:42:41 -- common/autotest_common.sh@10 -- # set +x 00:20:21.884 21:42:42 -- bdev/bdev_raid.sh@567 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:20:21.884 21:42:42 -- bdev/bdev_raid.sh@567 -- # jq -r '.[].num_blocks' 00:20:21.884 [2024-12-06 21:42:42.366594] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:22.143 21:42:42 -- bdev/bdev_raid.sh@567 -- # raid_bdev_size=65536 00:20:22.143 21:42:42 -- bdev/bdev_raid.sh@570 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:22.143 21:42:42 -- bdev/bdev_raid.sh@570 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:20:22.143 21:42:42 -- bdev/bdev_raid.sh@570 -- # data_offset=0 00:20:22.143 21:42:42 -- bdev/bdev_raid.sh@572 -- # '[' false = true ']' 00:20:22.143 21:42:42 -- bdev/bdev_raid.sh@576 -- # local write_unit_size 00:20:22.143 21:42:42 -- bdev/bdev_raid.sh@579 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid_bdev1 /dev/nbd0 00:20:22.143 21:42:42 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:20:22.143 21:42:42 -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:20:22.143 21:42:42 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:20:22.143 21:42:42 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:20:22.143 21:42:42 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:20:22.143 21:42:42 -- bdev/nbd_common.sh@12 -- # local i 00:20:22.143 21:42:42 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:20:22.143 21:42:42 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:20:22.143 21:42:42 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:20:22.402 [2024-12-06 21:42:42.850449] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000005a00 00:20:22.402 /dev/nbd0 00:20:22.402 21:42:42 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:20:22.402 21:42:42 -- 
bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:20:22.402 21:42:42 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:20:22.402 21:42:42 -- common/autotest_common.sh@867 -- # local i 00:20:22.402 21:42:42 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:20:22.402 21:42:42 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:20:22.402 21:42:42 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:20:22.402 21:42:42 -- common/autotest_common.sh@871 -- # break 00:20:22.402 21:42:42 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:20:22.403 21:42:42 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:20:22.403 21:42:42 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:20:22.403 1+0 records in 00:20:22.403 1+0 records out 00:20:22.403 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000252378 s, 16.2 MB/s 00:20:22.403 21:42:42 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:22.403 21:42:42 -- common/autotest_common.sh@884 -- # size=4096 00:20:22.403 21:42:42 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:22.403 21:42:42 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:20:22.403 21:42:42 -- common/autotest_common.sh@887 -- # return 0 00:20:22.403 21:42:42 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:20:22.403 21:42:42 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:20:22.403 21:42:42 -- bdev/bdev_raid.sh@580 -- # '[' raid1 = raid5f ']' 00:20:22.403 21:42:42 -- bdev/bdev_raid.sh@584 -- # write_unit_size=1 00:20:22.403 21:42:42 -- bdev/bdev_raid.sh@586 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=65536 oflag=direct 00:20:28.968 65536+0 records in 00:20:28.968 65536+0 records out 00:20:28.968 33554432 bytes (34 MB, 32 MiB) copied, 6.34509 s, 5.3 MB/s 00:20:28.968 21:42:49 -- bdev/bdev_raid.sh@587 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:20:28.968 21:42:49 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:20:28.968 21:42:49 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:20:28.968 21:42:49 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:20:28.968 21:42:49 -- bdev/nbd_common.sh@51 -- # local i 00:20:28.968 21:42:49 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:20:28.968 21:42:49 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:20:29.227 [2024-12-06 21:42:49.485650] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:29.227 21:42:49 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:20:29.227 21:42:49 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:20:29.227 21:42:49 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:20:29.227 21:42:49 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:20:29.227 21:42:49 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:29.227 21:42:49 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:20:29.227 21:42:49 -- bdev/nbd_common.sh@41 -- # break 00:20:29.227 21:42:49 -- bdev/nbd_common.sh@45 -- # return 0 00:20:29.227 21:42:49 -- bdev/bdev_raid.sh@591 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:20:29.486 [2024-12-06 21:42:49.742403] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:20:29.486 21:42:49 -- bdev/bdev_raid.sh@594 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 
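The spare attached below is deliberately a three-layer stack built earlier in this trace: a malloc bdev wrapped in a delay bdev wrapped in a passthru bdev. The delay layer's write latencies (-w/-n, in microseconds) throttle rebuild writes so the test can observe the in-flight process JSON. A condensed sketch of that construction and of the rebuild trigger, using only RPCs that appear verbatim in this trace:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  sock=/var/tmp/spdk-raid.sock

  # spare_malloc -> spare_delay -> spare; 100000 us average/p99 write latency.
  "$rpc" -s "$sock" bdev_malloc_create 32 512 -b spare_malloc
  "$rpc" -s "$sock" bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000
  "$rpc" -s "$sock" bdev_passthru_create -b spare_delay -p spare

  # Degrade the array, then attach the spare; SPDK starts the rebuild itself
  # ("Started rebuild on raid bdev raid_bdev1").
  "$rpc" -s "$sock" bdev_raid_remove_base_bdev BaseBdev1
  "$rpc" -s "$sock" bdev_raid_add_base_bdev raid_bdev1 spare

  # Poll progress the same way verify_raid_bdev_process does below.
  info=$("$rpc" -s "$sock" bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1")')
  jq -r '.process.type // "none"' <<<"$info"    # expect "rebuild" while running
  jq -r '.process.target // "none"' <<<"$info"  # expect "spare"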
00:20:29.486 21:42:49 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:20:29.486 21:42:49 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:20:29.486 21:42:49 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:20:29.486 21:42:49 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:20:29.486 21:42:49 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:20:29.486 21:42:49 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:29.486 21:42:49 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:20:29.486 21:42:49 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:29.486 21:42:49 -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:29.486 21:42:49 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:29.486 21:42:49 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:29.486 21:42:49 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:29.486 "name": "raid_bdev1", 00:20:29.486 "uuid": "5690fd55-d748-4031-8128-3acd5ab18639", 00:20:29.486 "strip_size_kb": 0, 00:20:29.486 "state": "online", 00:20:29.486 "raid_level": "raid1", 00:20:29.486 "superblock": false, 00:20:29.486 "num_base_bdevs": 4, 00:20:29.486 "num_base_bdevs_discovered": 3, 00:20:29.486 "num_base_bdevs_operational": 3, 00:20:29.486 "base_bdevs_list": [ 00:20:29.486 { 00:20:29.486 "name": null, 00:20:29.486 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:29.486 "is_configured": false, 00:20:29.486 "data_offset": 0, 00:20:29.486 "data_size": 65536 00:20:29.486 }, 00:20:29.486 { 00:20:29.486 "name": "BaseBdev2", 00:20:29.486 "uuid": "3dc14c37-8987-4f05-8380-0ba094e53b53", 00:20:29.486 "is_configured": true, 00:20:29.486 "data_offset": 0, 00:20:29.486 "data_size": 65536 00:20:29.486 }, 00:20:29.486 { 00:20:29.486 "name": "BaseBdev3", 00:20:29.486 "uuid": "518605c5-8f8d-4395-806b-54c957bf955a", 00:20:29.486 "is_configured": true, 00:20:29.486 "data_offset": 0, 00:20:29.486 "data_size": 65536 00:20:29.486 }, 00:20:29.486 { 00:20:29.486 "name": "BaseBdev4", 00:20:29.486 "uuid": "50147f2f-209b-4c70-8c08-0153a8fad57c", 00:20:29.486 "is_configured": true, 00:20:29.486 "data_offset": 0, 00:20:29.486 "data_size": 65536 00:20:29.486 } 00:20:29.486 ] 00:20:29.486 }' 00:20:29.486 21:42:49 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:29.486 21:42:49 -- common/autotest_common.sh@10 -- # set +x 00:20:30.083 21:42:50 -- bdev/bdev_raid.sh@597 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:20:30.083 [2024-12-06 21:42:50.518761] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:20:30.083 [2024-12-06 21:42:50.518852] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:20:30.083 [2024-12-06 21:42:50.530704] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000d09620 00:20:30.083 [2024-12-06 21:42:50.532767] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:20:30.083 21:42:50 -- bdev/bdev_raid.sh@598 -- # sleep 1 00:20:31.461 21:42:51 -- bdev/bdev_raid.sh@601 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:31.461 21:42:51 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:20:31.461 21:42:51 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:20:31.461 21:42:51 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:20:31.461 21:42:51 -- 
bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:20:31.461 21:42:51 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:31.461 21:42:51 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:31.461 21:42:51 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:20:31.461 "name": "raid_bdev1", 00:20:31.461 "uuid": "5690fd55-d748-4031-8128-3acd5ab18639", 00:20:31.461 "strip_size_kb": 0, 00:20:31.461 "state": "online", 00:20:31.461 "raid_level": "raid1", 00:20:31.461 "superblock": false, 00:20:31.461 "num_base_bdevs": 4, 00:20:31.461 "num_base_bdevs_discovered": 4, 00:20:31.461 "num_base_bdevs_operational": 4, 00:20:31.461 "process": { 00:20:31.461 "type": "rebuild", 00:20:31.461 "target": "spare", 00:20:31.461 "progress": { 00:20:31.461 "blocks": 24576, 00:20:31.461 "percent": 37 00:20:31.461 } 00:20:31.461 }, 00:20:31.461 "base_bdevs_list": [ 00:20:31.461 { 00:20:31.461 "name": "spare", 00:20:31.461 "uuid": "284bf40d-066b-575a-a02d-4c71d3328f3a", 00:20:31.461 "is_configured": true, 00:20:31.461 "data_offset": 0, 00:20:31.461 "data_size": 65536 00:20:31.461 }, 00:20:31.461 { 00:20:31.461 "name": "BaseBdev2", 00:20:31.461 "uuid": "3dc14c37-8987-4f05-8380-0ba094e53b53", 00:20:31.461 "is_configured": true, 00:20:31.461 "data_offset": 0, 00:20:31.461 "data_size": 65536 00:20:31.461 }, 00:20:31.461 { 00:20:31.461 "name": "BaseBdev3", 00:20:31.461 "uuid": "518605c5-8f8d-4395-806b-54c957bf955a", 00:20:31.461 "is_configured": true, 00:20:31.461 "data_offset": 0, 00:20:31.461 "data_size": 65536 00:20:31.461 }, 00:20:31.461 { 00:20:31.461 "name": "BaseBdev4", 00:20:31.461 "uuid": "50147f2f-209b-4c70-8c08-0153a8fad57c", 00:20:31.461 "is_configured": true, 00:20:31.461 "data_offset": 0, 00:20:31.461 "data_size": 65536 00:20:31.461 } 00:20:31.461 ] 00:20:31.461 }' 00:20:31.461 21:42:51 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:20:31.461 21:42:51 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:31.461 21:42:51 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:20:31.461 21:42:51 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:20:31.461 21:42:51 -- bdev/bdev_raid.sh@604 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:20:31.719 [2024-12-06 21:42:52.015325] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:20:31.719 [2024-12-06 21:42:52.040288] bdev_raid.c:2294:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:20:31.719 [2024-12-06 21:42:52.040633] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:31.719 21:42:52 -- bdev/bdev_raid.sh@607 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:20:31.719 21:42:52 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:20:31.719 21:42:52 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:20:31.719 21:42:52 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:20:31.719 21:42:52 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:20:31.719 21:42:52 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:20:31.719 21:42:52 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:31.719 21:42:52 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:20:31.719 21:42:52 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:31.719 21:42:52 -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:31.719 
21:42:52 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:31.719 21:42:52 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:31.978 21:42:52 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:31.978 "name": "raid_bdev1", 00:20:31.978 "uuid": "5690fd55-d748-4031-8128-3acd5ab18639", 00:20:31.978 "strip_size_kb": 0, 00:20:31.978 "state": "online", 00:20:31.978 "raid_level": "raid1", 00:20:31.978 "superblock": false, 00:20:31.978 "num_base_bdevs": 4, 00:20:31.978 "num_base_bdevs_discovered": 3, 00:20:31.978 "num_base_bdevs_operational": 3, 00:20:31.978 "base_bdevs_list": [ 00:20:31.978 { 00:20:31.978 "name": null, 00:20:31.978 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:31.978 "is_configured": false, 00:20:31.978 "data_offset": 0, 00:20:31.978 "data_size": 65536 00:20:31.978 }, 00:20:31.978 { 00:20:31.978 "name": "BaseBdev2", 00:20:31.978 "uuid": "3dc14c37-8987-4f05-8380-0ba094e53b53", 00:20:31.978 "is_configured": true, 00:20:31.978 "data_offset": 0, 00:20:31.978 "data_size": 65536 00:20:31.978 }, 00:20:31.978 { 00:20:31.978 "name": "BaseBdev3", 00:20:31.978 "uuid": "518605c5-8f8d-4395-806b-54c957bf955a", 00:20:31.978 "is_configured": true, 00:20:31.978 "data_offset": 0, 00:20:31.978 "data_size": 65536 00:20:31.978 }, 00:20:31.978 { 00:20:31.978 "name": "BaseBdev4", 00:20:31.978 "uuid": "50147f2f-209b-4c70-8c08-0153a8fad57c", 00:20:31.978 "is_configured": true, 00:20:31.978 "data_offset": 0, 00:20:31.978 "data_size": 65536 00:20:31.978 } 00:20:31.978 ] 00:20:31.978 }' 00:20:31.978 21:42:52 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:31.978 21:42:52 -- common/autotest_common.sh@10 -- # set +x 00:20:32.236 21:42:52 -- bdev/bdev_raid.sh@610 -- # verify_raid_bdev_process raid_bdev1 none none 00:20:32.236 21:42:52 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:20:32.236 21:42:52 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:20:32.236 21:42:52 -- bdev/bdev_raid.sh@185 -- # local target=none 00:20:32.236 21:42:52 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:20:32.236 21:42:52 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:32.236 21:42:52 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:32.495 21:42:52 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:20:32.495 "name": "raid_bdev1", 00:20:32.495 "uuid": "5690fd55-d748-4031-8128-3acd5ab18639", 00:20:32.495 "strip_size_kb": 0, 00:20:32.495 "state": "online", 00:20:32.495 "raid_level": "raid1", 00:20:32.495 "superblock": false, 00:20:32.495 "num_base_bdevs": 4, 00:20:32.495 "num_base_bdevs_discovered": 3, 00:20:32.495 "num_base_bdevs_operational": 3, 00:20:32.495 "base_bdevs_list": [ 00:20:32.495 { 00:20:32.495 "name": null, 00:20:32.495 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:32.495 "is_configured": false, 00:20:32.495 "data_offset": 0, 00:20:32.495 "data_size": 65536 00:20:32.495 }, 00:20:32.495 { 00:20:32.495 "name": "BaseBdev2", 00:20:32.495 "uuid": "3dc14c37-8987-4f05-8380-0ba094e53b53", 00:20:32.495 "is_configured": true, 00:20:32.495 "data_offset": 0, 00:20:32.495 "data_size": 65536 00:20:32.495 }, 00:20:32.495 { 00:20:32.495 "name": "BaseBdev3", 00:20:32.495 "uuid": "518605c5-8f8d-4395-806b-54c957bf955a", 00:20:32.495 "is_configured": true, 00:20:32.495 "data_offset": 0, 00:20:32.495 "data_size": 65536 00:20:32.495 }, 00:20:32.495 { 00:20:32.495 
"name": "BaseBdev4", 00:20:32.495 "uuid": "50147f2f-209b-4c70-8c08-0153a8fad57c", 00:20:32.495 "is_configured": true, 00:20:32.495 "data_offset": 0, 00:20:32.495 "data_size": 65536 00:20:32.495 } 00:20:32.495 ] 00:20:32.495 }' 00:20:32.495 21:42:52 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:20:32.495 21:42:52 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:20:32.495 21:42:52 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:20:32.495 21:42:52 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:20:32.495 21:42:52 -- bdev/bdev_raid.sh@613 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:20:32.752 [2024-12-06 21:42:53.200217] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:20:32.753 [2024-12-06 21:42:53.200269] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:20:32.753 [2024-12-06 21:42:53.212239] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000d096f0 00:20:32.753 [2024-12-06 21:42:53.214526] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:20:32.753 21:42:53 -- bdev/bdev_raid.sh@614 -- # sleep 1 00:20:34.128 21:42:54 -- bdev/bdev_raid.sh@615 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:34.128 21:42:54 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:20:34.128 21:42:54 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:20:34.128 21:42:54 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:20:34.128 21:42:54 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:20:34.128 21:42:54 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:34.128 21:42:54 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:34.128 21:42:54 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:20:34.128 "name": "raid_bdev1", 00:20:34.128 "uuid": "5690fd55-d748-4031-8128-3acd5ab18639", 00:20:34.128 "strip_size_kb": 0, 00:20:34.128 "state": "online", 00:20:34.128 "raid_level": "raid1", 00:20:34.128 "superblock": false, 00:20:34.128 "num_base_bdevs": 4, 00:20:34.128 "num_base_bdevs_discovered": 4, 00:20:34.128 "num_base_bdevs_operational": 4, 00:20:34.128 "process": { 00:20:34.128 "type": "rebuild", 00:20:34.128 "target": "spare", 00:20:34.128 "progress": { 00:20:34.128 "blocks": 24576, 00:20:34.128 "percent": 37 00:20:34.128 } 00:20:34.128 }, 00:20:34.128 "base_bdevs_list": [ 00:20:34.128 { 00:20:34.128 "name": "spare", 00:20:34.128 "uuid": "284bf40d-066b-575a-a02d-4c71d3328f3a", 00:20:34.128 "is_configured": true, 00:20:34.128 "data_offset": 0, 00:20:34.128 "data_size": 65536 00:20:34.128 }, 00:20:34.128 { 00:20:34.128 "name": "BaseBdev2", 00:20:34.128 "uuid": "3dc14c37-8987-4f05-8380-0ba094e53b53", 00:20:34.128 "is_configured": true, 00:20:34.128 "data_offset": 0, 00:20:34.128 "data_size": 65536 00:20:34.128 }, 00:20:34.128 { 00:20:34.128 "name": "BaseBdev3", 00:20:34.128 "uuid": "518605c5-8f8d-4395-806b-54c957bf955a", 00:20:34.128 "is_configured": true, 00:20:34.128 "data_offset": 0, 00:20:34.128 "data_size": 65536 00:20:34.128 }, 00:20:34.128 { 00:20:34.128 "name": "BaseBdev4", 00:20:34.128 "uuid": "50147f2f-209b-4c70-8c08-0153a8fad57c", 00:20:34.128 "is_configured": true, 00:20:34.128 "data_offset": 0, 00:20:34.128 "data_size": 65536 00:20:34.128 } 00:20:34.128 ] 00:20:34.128 }' 00:20:34.128 21:42:54 -- 
bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:20:34.128 21:42:54 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:34.128 21:42:54 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:20:34.128 21:42:54 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:20:34.128 21:42:54 -- bdev/bdev_raid.sh@617 -- # '[' false = true ']' 00:20:34.128 21:42:54 -- bdev/bdev_raid.sh@642 -- # local num_base_bdevs_operational=4 00:20:34.128 21:42:54 -- bdev/bdev_raid.sh@644 -- # '[' raid1 = raid1 ']' 00:20:34.128 21:42:54 -- bdev/bdev_raid.sh@644 -- # '[' 4 -gt 2 ']' 00:20:34.128 21:42:54 -- bdev/bdev_raid.sh@646 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:20:34.387 [2024-12-06 21:42:54.752730] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:20:34.387 [2024-12-06 21:42:54.822800] bdev_raid.c:1835:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x50d000d096f0 00:20:34.387 21:42:54 -- bdev/bdev_raid.sh@649 -- # base_bdevs[1]= 00:20:34.387 21:42:54 -- bdev/bdev_raid.sh@650 -- # (( num_base_bdevs_operational-- )) 00:20:34.387 21:42:54 -- bdev/bdev_raid.sh@653 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:34.387 21:42:54 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:20:34.387 21:42:54 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:20:34.387 21:42:54 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:20:34.387 21:42:54 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:20:34.387 21:42:54 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:34.387 21:42:54 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:34.647 21:42:55 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:20:34.647 "name": "raid_bdev1", 00:20:34.647 "uuid": "5690fd55-d748-4031-8128-3acd5ab18639", 00:20:34.647 "strip_size_kb": 0, 00:20:34.647 "state": "online", 00:20:34.647 "raid_level": "raid1", 00:20:34.647 "superblock": false, 00:20:34.647 "num_base_bdevs": 4, 00:20:34.647 "num_base_bdevs_discovered": 3, 00:20:34.647 "num_base_bdevs_operational": 3, 00:20:34.647 "process": { 00:20:34.647 "type": "rebuild", 00:20:34.647 "target": "spare", 00:20:34.647 "progress": { 00:20:34.647 "blocks": 36864, 00:20:34.647 "percent": 56 00:20:34.647 } 00:20:34.647 }, 00:20:34.647 "base_bdevs_list": [ 00:20:34.647 { 00:20:34.647 "name": "spare", 00:20:34.647 "uuid": "284bf40d-066b-575a-a02d-4c71d3328f3a", 00:20:34.647 "is_configured": true, 00:20:34.647 "data_offset": 0, 00:20:34.647 "data_size": 65536 00:20:34.647 }, 00:20:34.647 { 00:20:34.647 "name": null, 00:20:34.647 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:34.647 "is_configured": false, 00:20:34.647 "data_offset": 0, 00:20:34.647 "data_size": 65536 00:20:34.647 }, 00:20:34.647 { 00:20:34.647 "name": "BaseBdev3", 00:20:34.647 "uuid": "518605c5-8f8d-4395-806b-54c957bf955a", 00:20:34.647 "is_configured": true, 00:20:34.647 "data_offset": 0, 00:20:34.647 "data_size": 65536 00:20:34.647 }, 00:20:34.647 { 00:20:34.647 "name": "BaseBdev4", 00:20:34.647 "uuid": "50147f2f-209b-4c70-8c08-0153a8fad57c", 00:20:34.647 "is_configured": true, 00:20:34.647 "data_offset": 0, 00:20:34.647 "data_size": 65536 00:20:34.647 } 00:20:34.647 ] 00:20:34.647 }' 00:20:34.647 21:42:55 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:20:34.647 21:42:55 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == 
\r\e\b\u\i\l\d ]] 00:20:34.647 21:42:55 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:20:34.647 21:42:55 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:20:34.647 21:42:55 -- bdev/bdev_raid.sh@657 -- # local timeout=432 00:20:34.647 21:42:55 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:20:34.647 21:42:55 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:34.647 21:42:55 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:20:34.647 21:42:55 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:20:34.647 21:42:55 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:20:34.647 21:42:55 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:20:34.647 21:42:55 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:34.647 21:42:55 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:34.904 21:42:55 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:20:34.904 "name": "raid_bdev1", 00:20:34.904 "uuid": "5690fd55-d748-4031-8128-3acd5ab18639", 00:20:34.904 "strip_size_kb": 0, 00:20:34.904 "state": "online", 00:20:34.904 "raid_level": "raid1", 00:20:34.904 "superblock": false, 00:20:34.904 "num_base_bdevs": 4, 00:20:34.904 "num_base_bdevs_discovered": 3, 00:20:34.904 "num_base_bdevs_operational": 3, 00:20:34.904 "process": { 00:20:34.904 "type": "rebuild", 00:20:34.904 "target": "spare", 00:20:34.904 "progress": { 00:20:34.904 "blocks": 43008, 00:20:34.904 "percent": 65 00:20:34.904 } 00:20:34.904 }, 00:20:34.904 "base_bdevs_list": [ 00:20:34.904 { 00:20:34.904 "name": "spare", 00:20:34.904 "uuid": "284bf40d-066b-575a-a02d-4c71d3328f3a", 00:20:34.904 "is_configured": true, 00:20:34.904 "data_offset": 0, 00:20:34.904 "data_size": 65536 00:20:34.904 }, 00:20:34.904 { 00:20:34.904 "name": null, 00:20:34.904 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:34.904 "is_configured": false, 00:20:34.904 "data_offset": 0, 00:20:34.904 "data_size": 65536 00:20:34.904 }, 00:20:34.904 { 00:20:34.904 "name": "BaseBdev3", 00:20:34.904 "uuid": "518605c5-8f8d-4395-806b-54c957bf955a", 00:20:34.904 "is_configured": true, 00:20:34.904 "data_offset": 0, 00:20:34.904 "data_size": 65536 00:20:34.904 }, 00:20:34.904 { 00:20:34.904 "name": "BaseBdev4", 00:20:34.904 "uuid": "50147f2f-209b-4c70-8c08-0153a8fad57c", 00:20:34.904 "is_configured": true, 00:20:34.904 "data_offset": 0, 00:20:34.904 "data_size": 65536 00:20:34.904 } 00:20:34.904 ] 00:20:34.904 }' 00:20:34.904 21:42:55 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:20:34.904 21:42:55 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:34.904 21:42:55 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:20:34.904 21:42:55 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:20:34.904 21:42:55 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:20:36.276 21:42:56 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:20:36.276 21:42:56 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:36.276 21:42:56 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:20:36.276 21:42:56 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:20:36.276 21:42:56 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:20:36.276 21:42:56 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:20:36.276 21:42:56 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:36.276 21:42:56 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:36.276 [2024-12-06 21:42:56.429786] bdev_raid.c:2568:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:20:36.276 [2024-12-06 21:42:56.429865] bdev_raid.c:2285:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:20:36.276 [2024-12-06 21:42:56.429936] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:36.276 21:42:56 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:20:36.276 "name": "raid_bdev1", 00:20:36.276 "uuid": "5690fd55-d748-4031-8128-3acd5ab18639", 00:20:36.276 "strip_size_kb": 0, 00:20:36.276 "state": "online", 00:20:36.276 "raid_level": "raid1", 00:20:36.276 "superblock": false, 00:20:36.276 "num_base_bdevs": 4, 00:20:36.276 "num_base_bdevs_discovered": 3, 00:20:36.276 "num_base_bdevs_operational": 3, 00:20:36.276 "base_bdevs_list": [ 00:20:36.276 { 00:20:36.276 "name": "spare", 00:20:36.276 "uuid": "284bf40d-066b-575a-a02d-4c71d3328f3a", 00:20:36.276 "is_configured": true, 00:20:36.276 "data_offset": 0, 00:20:36.276 "data_size": 65536 00:20:36.276 }, 00:20:36.276 { 00:20:36.276 "name": null, 00:20:36.276 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:36.276 "is_configured": false, 00:20:36.276 "data_offset": 0, 00:20:36.276 "data_size": 65536 00:20:36.276 }, 00:20:36.276 { 00:20:36.276 "name": "BaseBdev3", 00:20:36.276 "uuid": "518605c5-8f8d-4395-806b-54c957bf955a", 00:20:36.276 "is_configured": true, 00:20:36.276 "data_offset": 0, 00:20:36.276 "data_size": 65536 00:20:36.276 }, 00:20:36.276 { 00:20:36.276 "name": "BaseBdev4", 00:20:36.276 "uuid": "50147f2f-209b-4c70-8c08-0153a8fad57c", 00:20:36.276 "is_configured": true, 00:20:36.276 "data_offset": 0, 00:20:36.276 "data_size": 65536 00:20:36.276 } 00:20:36.276 ] 00:20:36.276 }' 00:20:36.276 21:42:56 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:20:36.276 21:42:56 -- bdev/bdev_raid.sh@190 -- # [[ none == \r\e\b\u\i\l\d ]] 00:20:36.276 21:42:56 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:20:36.276 21:42:56 -- bdev/bdev_raid.sh@191 -- # [[ none == \s\p\a\r\e ]] 00:20:36.276 21:42:56 -- bdev/bdev_raid.sh@660 -- # break 00:20:36.276 21:42:56 -- bdev/bdev_raid.sh@666 -- # verify_raid_bdev_process raid_bdev1 none none 00:20:36.276 21:42:56 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:20:36.276 21:42:56 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:20:36.276 21:42:56 -- bdev/bdev_raid.sh@185 -- # local target=none 00:20:36.276 21:42:56 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:20:36.276 21:42:56 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:36.276 21:42:56 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:36.534 21:42:56 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:20:36.534 "name": "raid_bdev1", 00:20:36.534 "uuid": "5690fd55-d748-4031-8128-3acd5ab18639", 00:20:36.534 "strip_size_kb": 0, 00:20:36.534 "state": "online", 00:20:36.534 "raid_level": "raid1", 00:20:36.534 "superblock": false, 00:20:36.534 "num_base_bdevs": 4, 00:20:36.534 "num_base_bdevs_discovered": 3, 00:20:36.534 "num_base_bdevs_operational": 3, 00:20:36.534 "base_bdevs_list": [ 00:20:36.534 { 00:20:36.534 "name": "spare", 00:20:36.534 "uuid": "284bf40d-066b-575a-a02d-4c71d3328f3a", 00:20:36.534 "is_configured": true, 00:20:36.534 
"data_offset": 0, 00:20:36.534 "data_size": 65536 00:20:36.534 }, 00:20:36.534 { 00:20:36.534 "name": null, 00:20:36.534 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:36.534 "is_configured": false, 00:20:36.534 "data_offset": 0, 00:20:36.534 "data_size": 65536 00:20:36.534 }, 00:20:36.534 { 00:20:36.534 "name": "BaseBdev3", 00:20:36.534 "uuid": "518605c5-8f8d-4395-806b-54c957bf955a", 00:20:36.534 "is_configured": true, 00:20:36.534 "data_offset": 0, 00:20:36.534 "data_size": 65536 00:20:36.534 }, 00:20:36.534 { 00:20:36.534 "name": "BaseBdev4", 00:20:36.534 "uuid": "50147f2f-209b-4c70-8c08-0153a8fad57c", 00:20:36.534 "is_configured": true, 00:20:36.534 "data_offset": 0, 00:20:36.534 "data_size": 65536 00:20:36.534 } 00:20:36.534 ] 00:20:36.534 }' 00:20:36.534 21:42:56 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:20:36.534 21:42:56 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:20:36.534 21:42:56 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:20:36.534 21:42:56 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:20:36.534 21:42:56 -- bdev/bdev_raid.sh@667 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:20:36.534 21:42:56 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:20:36.534 21:42:56 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:20:36.534 21:42:56 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:20:36.534 21:42:56 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:20:36.534 21:42:56 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:20:36.534 21:42:56 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:36.534 21:42:56 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:20:36.534 21:42:56 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:36.534 21:42:56 -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:36.534 21:42:56 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:36.534 21:42:56 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:36.792 21:42:57 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:36.792 "name": "raid_bdev1", 00:20:36.792 "uuid": "5690fd55-d748-4031-8128-3acd5ab18639", 00:20:36.792 "strip_size_kb": 0, 00:20:36.792 "state": "online", 00:20:36.792 "raid_level": "raid1", 00:20:36.792 "superblock": false, 00:20:36.792 "num_base_bdevs": 4, 00:20:36.792 "num_base_bdevs_discovered": 3, 00:20:36.792 "num_base_bdevs_operational": 3, 00:20:36.792 "base_bdevs_list": [ 00:20:36.792 { 00:20:36.792 "name": "spare", 00:20:36.792 "uuid": "284bf40d-066b-575a-a02d-4c71d3328f3a", 00:20:36.792 "is_configured": true, 00:20:36.792 "data_offset": 0, 00:20:36.792 "data_size": 65536 00:20:36.792 }, 00:20:36.792 { 00:20:36.792 "name": null, 00:20:36.792 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:36.792 "is_configured": false, 00:20:36.792 "data_offset": 0, 00:20:36.792 "data_size": 65536 00:20:36.792 }, 00:20:36.792 { 00:20:36.792 "name": "BaseBdev3", 00:20:36.792 "uuid": "518605c5-8f8d-4395-806b-54c957bf955a", 00:20:36.792 "is_configured": true, 00:20:36.792 "data_offset": 0, 00:20:36.792 "data_size": 65536 00:20:36.792 }, 00:20:36.792 { 00:20:36.792 "name": "BaseBdev4", 00:20:36.792 "uuid": "50147f2f-209b-4c70-8c08-0153a8fad57c", 00:20:36.792 "is_configured": true, 00:20:36.792 "data_offset": 0, 00:20:36.792 "data_size": 65536 00:20:36.792 } 00:20:36.792 ] 00:20:36.792 }' 00:20:36.792 21:42:57 -- bdev/bdev_raid.sh@129 -- # 
xtrace_disable 00:20:36.792 21:42:57 -- common/autotest_common.sh@10 -- # set +x 00:20:37.050 21:42:57 -- bdev/bdev_raid.sh@670 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:20:37.309 [2024-12-06 21:42:57.617189] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:37.309 [2024-12-06 21:42:57.617225] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:37.309 [2024-12-06 21:42:57.617310] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:37.309 [2024-12-06 21:42:57.617380] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:37.309 [2024-12-06 21:42:57.617395] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000008d80 name raid_bdev1, state offline 00:20:37.309 21:42:57 -- bdev/bdev_raid.sh@671 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:37.309 21:42:57 -- bdev/bdev_raid.sh@671 -- # jq length 00:20:37.567 21:42:57 -- bdev/bdev_raid.sh@671 -- # [[ 0 == 0 ]] 00:20:37.567 21:42:57 -- bdev/bdev_raid.sh@673 -- # '[' false = true ']' 00:20:37.567 21:42:57 -- bdev/bdev_raid.sh@687 -- # nbd_start_disks /var/tmp/spdk-raid.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:20:37.567 21:42:57 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:20:37.567 21:42:57 -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:20:37.567 21:42:57 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:20:37.567 21:42:57 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:20:37.567 21:42:57 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:20:37.567 21:42:57 -- bdev/nbd_common.sh@12 -- # local i 00:20:37.567 21:42:57 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:20:37.567 21:42:57 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:20:37.567 21:42:57 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:20:37.826 /dev/nbd0 00:20:37.826 21:42:58 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:20:37.826 21:42:58 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:20:37.826 21:42:58 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:20:37.826 21:42:58 -- common/autotest_common.sh@867 -- # local i 00:20:37.826 21:42:58 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:20:37.826 21:42:58 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:20:37.826 21:42:58 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:20:37.826 21:42:58 -- common/autotest_common.sh@871 -- # break 00:20:37.826 21:42:58 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:20:37.826 21:42:58 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:20:37.826 21:42:58 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:20:37.826 1+0 records in 00:20:37.826 1+0 records out 00:20:37.826 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000300927 s, 13.6 MB/s 00:20:37.826 21:42:58 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:37.826 21:42:58 -- common/autotest_common.sh@884 -- # size=4096 00:20:37.826 21:42:58 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:37.826 21:42:58 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 
']' 00:20:37.826 21:42:58 -- common/autotest_common.sh@887 -- # return 0 00:20:37.826 21:42:58 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:20:37.826 21:42:58 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:20:37.826 21:42:58 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd1 00:20:38.085 /dev/nbd1 00:20:38.085 21:42:58 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:20:38.085 21:42:58 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:20:38.085 21:42:58 -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:20:38.085 21:42:58 -- common/autotest_common.sh@867 -- # local i 00:20:38.085 21:42:58 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:20:38.085 21:42:58 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:20:38.085 21:42:58 -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:20:38.085 21:42:58 -- common/autotest_common.sh@871 -- # break 00:20:38.085 21:42:58 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:20:38.085 21:42:58 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:20:38.085 21:42:58 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:20:38.085 1+0 records in 00:20:38.085 1+0 records out 00:20:38.085 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000459566 s, 8.9 MB/s 00:20:38.085 21:42:58 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:38.085 21:42:58 -- common/autotest_common.sh@884 -- # size=4096 00:20:38.085 21:42:58 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:38.085 21:42:58 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:20:38.085 21:42:58 -- common/autotest_common.sh@887 -- # return 0 00:20:38.085 21:42:58 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:20:38.085 21:42:58 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:20:38.085 21:42:58 -- bdev/bdev_raid.sh@688 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:20:38.344 21:42:58 -- bdev/bdev_raid.sh@689 -- # nbd_stop_disks /var/tmp/spdk-raid.sock '/dev/nbd0 /dev/nbd1' 00:20:38.344 21:42:58 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:20:38.344 21:42:58 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:20:38.344 21:42:58 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:20:38.344 21:42:58 -- bdev/nbd_common.sh@51 -- # local i 00:20:38.344 21:42:58 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:20:38.344 21:42:58 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:20:38.344 21:42:58 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:20:38.344 21:42:58 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:20:38.344 21:42:58 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:20:38.344 21:42:58 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:20:38.344 21:42:58 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:38.344 21:42:58 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:20:38.344 21:42:58 -- bdev/nbd_common.sh@41 -- # break 00:20:38.344 21:42:58 -- bdev/nbd_common.sh@45 -- # return 0 00:20:38.344 21:42:58 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:20:38.344 21:42:58 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:20:38.604 21:42:59 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 
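The cmp a few records above is the actual data-integrity check of this test: once the rebuild finished, raid_bdev1 was deleted and both BaseBdev1 (the member removed before the rebuild) and spare (the device rebuilt in its place) were exported over NBD and compared byte for byte. With raid1 mirroring, the rebuilt spare must be identical to what the removed member held. A condensed sketch of that verification, reusing the socket and RPCs shown above:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  sock=/var/tmp/spdk-raid.sock

  # Tear the array down so the members can be read directly.
  "$rpc" -s "$sock" bdev_raid_delete raid_bdev1

  # Export both members over NBD and compare the full 32 MiB devices.
  "$rpc" -s "$sock" nbd_start_disk BaseBdev1 /dev/nbd0
  "$rpc" -s "$sock" nbd_start_disk spare /dev/nbd1
  cmp -i 0 /dev/nbd0 /dev/nbd1 && echo "spare matches the removed member"

  "$rpc" -s "$sock" nbd_stop_disk /dev/nbd0
  "$rpc" -s "$sock" nbd_stop_disk /dev/nbd1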
00:20:38.604 21:42:59 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:20:38.604 21:42:59 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:20:38.604 21:42:59 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:20:38.604 21:42:59 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:38.604 21:42:59 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:20:38.864 21:42:59 -- bdev/nbd_common.sh@41 -- # break 00:20:38.864 21:42:59 -- bdev/nbd_common.sh@45 -- # return 0 00:20:38.864 21:42:59 -- bdev/bdev_raid.sh@692 -- # '[' false = true ']' 00:20:38.864 21:42:59 -- bdev/bdev_raid.sh@709 -- # killprocess 80005 00:20:38.864 21:42:59 -- common/autotest_common.sh@936 -- # '[' -z 80005 ']' 00:20:38.864 21:42:59 -- common/autotest_common.sh@940 -- # kill -0 80005 00:20:38.864 21:42:59 -- common/autotest_common.sh@941 -- # uname 00:20:38.864 21:42:59 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:20:38.864 21:42:59 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 80005 00:20:38.864 killing process with pid 80005 00:20:38.864 Received shutdown signal, test time was about 60.000000 seconds 00:20:38.864 00:20:38.864 Latency(us) 00:20:38.864 [2024-12-06T21:42:59.361Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:38.864 [2024-12-06T21:42:59.361Z] =================================================================================================================== 00:20:38.864 [2024-12-06T21:42:59.361Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:38.864 21:42:59 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:20:38.864 21:42:59 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:20:38.864 21:42:59 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 80005' 00:20:38.864 21:42:59 -- common/autotest_common.sh@955 -- # kill 80005 00:20:38.864 21:42:59 -- common/autotest_common.sh@960 -- # wait 80005 00:20:38.864 [2024-12-06 21:42:59.132888] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:20:39.124 [2024-12-06 21:42:59.455583] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:20:40.063 ************************************ 00:20:40.063 END TEST raid_rebuild_test 00:20:40.063 ************************************ 00:20:40.063 21:43:00 -- bdev/bdev_raid.sh@711 -- # return 0 00:20:40.063 00:20:40.063 real 0m21.615s 00:20:40.063 user 0m27.736s 00:20:40.063 sys 0m4.269s 00:20:40.063 21:43:00 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:20:40.063 21:43:00 -- common/autotest_common.sh@10 -- # set +x 00:20:40.063 21:43:00 -- bdev/bdev_raid.sh@736 -- # run_test raid_rebuild_test_sb raid_rebuild_test raid1 4 true false 00:20:40.063 21:43:00 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:20:40.063 21:43:00 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:20:40.063 21:43:00 -- common/autotest_common.sh@10 -- # set +x 00:20:40.063 ************************************ 00:20:40.063 START TEST raid_rebuild_test_sb 00:20:40.063 ************************************ 00:20:40.063 21:43:00 -- common/autotest_common.sh@1114 -- # raid_rebuild_test raid1 4 true false 00:20:40.063 21:43:00 -- bdev/bdev_raid.sh@517 -- # local raid_level=raid1 00:20:40.063 21:43:00 -- bdev/bdev_raid.sh@518 -- # local num_base_bdevs=4 00:20:40.063 21:43:00 -- bdev/bdev_raid.sh@519 -- # local superblock=true 00:20:40.063 21:43:00 -- bdev/bdev_raid.sh@520 -- # local background_io=false 00:20:40.063 21:43:00 -- bdev/bdev_raid.sh@521 -- # (( i = 1 )) 
00:20:40.063 21:43:00 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:20:40.063 21:43:00 -- bdev/bdev_raid.sh@523 -- # echo BaseBdev1 00:20:40.063 21:43:00 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:20:40.063 21:43:00 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:20:40.063 21:43:00 -- bdev/bdev_raid.sh@523 -- # echo BaseBdev2 00:20:40.063 21:43:00 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:20:40.063 21:43:00 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:20:40.063 21:43:00 -- bdev/bdev_raid.sh@523 -- # echo BaseBdev3 00:20:40.063 21:43:00 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:20:40.063 21:43:00 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:20:40.063 21:43:00 -- bdev/bdev_raid.sh@523 -- # echo BaseBdev4 00:20:40.063 21:43:00 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:20:40.063 21:43:00 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:20:40.063 21:43:00 -- bdev/bdev_raid.sh@521 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:20:40.063 21:43:00 -- bdev/bdev_raid.sh@521 -- # local base_bdevs 00:20:40.063 21:43:00 -- bdev/bdev_raid.sh@522 -- # local raid_bdev_name=raid_bdev1 00:20:40.063 21:43:00 -- bdev/bdev_raid.sh@523 -- # local strip_size 00:20:40.063 21:43:00 -- bdev/bdev_raid.sh@524 -- # local create_arg 00:20:40.063 21:43:00 -- bdev/bdev_raid.sh@525 -- # local raid_bdev_size 00:20:40.063 21:43:00 -- bdev/bdev_raid.sh@526 -- # local data_offset 00:20:40.063 21:43:00 -- bdev/bdev_raid.sh@528 -- # '[' raid1 '!=' raid1 ']' 00:20:40.063 21:43:00 -- bdev/bdev_raid.sh@536 -- # strip_size=0 00:20:40.063 21:43:00 -- bdev/bdev_raid.sh@539 -- # '[' true = true ']' 00:20:40.063 21:43:00 -- bdev/bdev_raid.sh@540 -- # create_arg+=' -s' 00:20:40.063 21:43:00 -- bdev/bdev_raid.sh@544 -- # raid_pid=80513 00:20:40.063 21:43:00 -- bdev/bdev_raid.sh@545 -- # waitforlisten 80513 /var/tmp/spdk-raid.sock 00:20:40.063 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:20:40.063 21:43:00 -- common/autotest_common.sh@829 -- # '[' -z 80513 ']' 00:20:40.063 21:43:00 -- bdev/bdev_raid.sh@543 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:20:40.063 21:43:00 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:20:40.063 21:43:00 -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:40.063 21:43:00 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:20:40.063 21:43:00 -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:40.063 21:43:00 -- common/autotest_common.sh@10 -- # set +x 00:20:40.323 I/O size of 3145728 is greater than zero copy threshold (65536). 00:20:40.323 Zero copy mechanism will not be used. 00:20:40.323 [2024-12-06 21:43:00.585819] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
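raid_rebuild_test_sb repeats the same scenario with superblock=true, which changes two things visible in this trace: create_arg picks up ' -s', and each base bdev is now a passthru bdev over its own malloc bdev (BaseBdevN_malloc -> BaseBdevN) instead of a bare malloc bdev. A sketch of that setup, using only RPCs that appear verbatim below:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  sock=/var/tmp/spdk-raid.sock

  # Superblock run: wrap every member in a passthru over its malloc backing.
  for bdev in BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4; do
      "$rpc" -s "$sock" bdev_malloc_create 32 512 -b "${bdev}_malloc"
      "$rpc" -s "$sock" bdev_passthru_create -b "${bdev}_malloc" -p "$bdev"
  done

  # create_arg carries ' -s', so the array is created with an on-disk superblock.
  "$rpc" -s "$sock" bdev_raid_create -s -r raid1 \
      -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1

The cost of the superblock shows up in the JSON that follows: each member reports data_offset 2048 and data_size 63488 (65536 minus the 2048 reserved blocks), and the raid bdev's num_blocks drops to 63488, where the non-superblock run above reported 0 and 65536.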
00:20:40.323 [2024-12-06 21:43:00.585997] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80513 ] 00:20:40.323 [2024-12-06 21:43:00.750832] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:40.582 [2024-12-06 21:43:00.911045] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:40.582 [2024-12-06 21:43:01.064770] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:41.150 21:43:01 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:41.150 21:43:01 -- common/autotest_common.sh@862 -- # return 0 00:20:41.150 21:43:01 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:20:41.150 21:43:01 -- bdev/bdev_raid.sh@549 -- # '[' true = true ']' 00:20:41.150 21:43:01 -- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:20:41.409 BaseBdev1_malloc 00:20:41.409 21:43:01 -- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:20:41.409 [2024-12-06 21:43:01.865964] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:20:41.409 [2024-12-06 21:43:01.866054] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:41.409 [2024-12-06 21:43:01.866091] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000006980 00:20:41.409 [2024-12-06 21:43:01.866107] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:41.409 [2024-12-06 21:43:01.868595] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:41.409 [2024-12-06 21:43:01.868764] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:20:41.409 BaseBdev1 00:20:41.409 21:43:01 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:20:41.409 21:43:01 -- bdev/bdev_raid.sh@549 -- # '[' true = true ']' 00:20:41.409 21:43:01 -- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:20:41.669 BaseBdev2_malloc 00:20:41.669 21:43:02 -- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:20:41.928 [2024-12-06 21:43:02.382471] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:20:41.928 [2024-12-06 21:43:02.382556] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:41.928 [2024-12-06 21:43:02.382595] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000007580 00:20:41.928 [2024-12-06 21:43:02.382613] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:41.928 [2024-12-06 21:43:02.384802] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:41.928 [2024-12-06 21:43:02.384860] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:20:41.928 BaseBdev2 00:20:41.928 21:43:02 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:20:41.928 21:43:02 -- bdev/bdev_raid.sh@549 -- # '[' true = true ']' 00:20:41.928 21:43:02 -- bdev/bdev_raid.sh@550 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:20:42.188 BaseBdev3_malloc 00:20:42.188 21:43:02 -- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:20:42.447 [2024-12-06 21:43:02.848005] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:20:42.447 [2024-12-06 21:43:02.848309] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:42.447 [2024-12-06 21:43:02.848350] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000008180 00:20:42.447 [2024-12-06 21:43:02.848367] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:42.447 [2024-12-06 21:43:02.850710] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:42.447 [2024-12-06 21:43:02.850755] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:20:42.447 BaseBdev3 00:20:42.447 21:43:02 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:20:42.447 21:43:02 -- bdev/bdev_raid.sh@549 -- # '[' true = true ']' 00:20:42.447 21:43:02 -- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:20:42.706 BaseBdev4_malloc 00:20:42.706 21:43:03 -- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:20:42.966 [2024-12-06 21:43:03.244562] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:20:42.966 [2024-12-06 21:43:03.244852] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:42.966 [2024-12-06 21:43:03.245022] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000008d80 00:20:42.966 [2024-12-06 21:43:03.245140] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:42.966 [2024-12-06 21:43:03.247907] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:42.966 [2024-12-06 21:43:03.248079] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:20:42.966 BaseBdev4 00:20:42.966 21:43:03 -- bdev/bdev_raid.sh@558 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:20:43.225 spare_malloc 00:20:43.225 21:43:03 -- bdev/bdev_raid.sh@559 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:20:43.225 spare_delay 00:20:43.225 21:43:03 -- bdev/bdev_raid.sh@560 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:20:43.484 [2024-12-06 21:43:03.890515] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:20:43.484 [2024-12-06 21:43:03.890793] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:43.484 [2024-12-06 21:43:03.890846] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000009f80 00:20:43.484 [2024-12-06 21:43:03.890863] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:43.484 [2024-12-06 21:43:03.893340] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: 
pt_bdev registered 00:20:43.484 [2024-12-06 21:43:03.893401] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:20:43.484 spare 00:20:43.484 21:43:03 -- bdev/bdev_raid.sh@563 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 00:20:43.744 [2024-12-06 21:43:04.074588] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:43.744 [2024-12-06 21:43:04.076548] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:20:43.744 [2024-12-06 21:43:04.076615] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:20:43.744 [2024-12-06 21:43:04.076679] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:20:43.744 [2024-12-06 21:43:04.076876] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x51600000a580 00:20:43.744 [2024-12-06 21:43:04.076895] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:20:43.744 [2024-12-06 21:43:04.077006] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000005860 00:20:43.744 [2024-12-06 21:43:04.077330] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x51600000a580 00:20:43.744 [2024-12-06 21:43:04.077345] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x51600000a580 00:20:43.744 [2024-12-06 21:43:04.077540] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:43.744 21:43:04 -- bdev/bdev_raid.sh@564 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:20:43.744 21:43:04 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:20:43.744 21:43:04 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:20:43.744 21:43:04 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:20:43.744 21:43:04 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:20:43.744 21:43:04 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:20:43.744 21:43:04 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:43.744 21:43:04 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:20:43.744 21:43:04 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:43.744 21:43:04 -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:43.744 21:43:04 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:43.744 21:43:04 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:44.004 21:43:04 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:44.004 "name": "raid_bdev1", 00:20:44.004 "uuid": "b9ad6df2-3cac-455c-827b-a1047937ab76", 00:20:44.005 "strip_size_kb": 0, 00:20:44.005 "state": "online", 00:20:44.005 "raid_level": "raid1", 00:20:44.005 "superblock": true, 00:20:44.005 "num_base_bdevs": 4, 00:20:44.005 "num_base_bdevs_discovered": 4, 00:20:44.005 "num_base_bdevs_operational": 4, 00:20:44.005 "base_bdevs_list": [ 00:20:44.005 { 00:20:44.005 "name": "BaseBdev1", 00:20:44.005 "uuid": "7830b277-70be-5190-847f-236e2e13dd97", 00:20:44.005 "is_configured": true, 00:20:44.005 "data_offset": 2048, 00:20:44.005 "data_size": 63488 00:20:44.005 }, 00:20:44.005 { 00:20:44.005 "name": "BaseBdev2", 00:20:44.005 "uuid": "4633971a-734f-596f-9d73-bebf03b71ee4", 00:20:44.005 "is_configured": true, 00:20:44.005 "data_offset": 2048, 
00:20:44.005 "data_size": 63488 00:20:44.005 }, 00:20:44.005 { 00:20:44.005 "name": "BaseBdev3", 00:20:44.005 "uuid": "d28ff591-01e8-5c15-ad63-04438d9b978a", 00:20:44.005 "is_configured": true, 00:20:44.005 "data_offset": 2048, 00:20:44.005 "data_size": 63488 00:20:44.005 }, 00:20:44.005 { 00:20:44.005 "name": "BaseBdev4", 00:20:44.005 "uuid": "7879739a-b212-58ef-ab29-922af2286afe", 00:20:44.005 "is_configured": true, 00:20:44.005 "data_offset": 2048, 00:20:44.005 "data_size": 63488 00:20:44.005 } 00:20:44.005 ] 00:20:44.005 }' 00:20:44.005 21:43:04 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:44.005 21:43:04 -- common/autotest_common.sh@10 -- # set +x 00:20:44.264 21:43:04 -- bdev/bdev_raid.sh@567 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:20:44.264 21:43:04 -- bdev/bdev_raid.sh@567 -- # jq -r '.[].num_blocks' 00:20:44.523 [2024-12-06 21:43:04.779364] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:44.523 21:43:04 -- bdev/bdev_raid.sh@567 -- # raid_bdev_size=63488 00:20:44.523 21:43:04 -- bdev/bdev_raid.sh@570 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:44.523 21:43:04 -- bdev/bdev_raid.sh@570 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:20:44.782 21:43:05 -- bdev/bdev_raid.sh@570 -- # data_offset=2048 00:20:44.782 21:43:05 -- bdev/bdev_raid.sh@572 -- # '[' false = true ']' 00:20:44.782 21:43:05 -- bdev/bdev_raid.sh@576 -- # local write_unit_size 00:20:44.782 21:43:05 -- bdev/bdev_raid.sh@579 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid_bdev1 /dev/nbd0 00:20:44.782 21:43:05 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:20:44.782 21:43:05 -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:20:44.782 21:43:05 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:20:44.782 21:43:05 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:20:44.782 21:43:05 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:20:44.782 21:43:05 -- bdev/nbd_common.sh@12 -- # local i 00:20:44.782 21:43:05 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:20:44.782 21:43:05 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:20:44.782 21:43:05 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:20:45.040 [2024-12-06 21:43:05.315318] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000005a00 00:20:45.040 /dev/nbd0 00:20:45.040 21:43:05 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:20:45.040 21:43:05 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:20:45.040 21:43:05 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:20:45.040 21:43:05 -- common/autotest_common.sh@867 -- # local i 00:20:45.040 21:43:05 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:20:45.040 21:43:05 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:20:45.040 21:43:05 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:20:45.040 21:43:05 -- common/autotest_common.sh@871 -- # break 00:20:45.040 21:43:05 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:20:45.040 21:43:05 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:20:45.040 21:43:05 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:20:45.040 1+0 records in 00:20:45.040 1+0 records out 00:20:45.040 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000292124 s, 14.0 
MB/s 00:20:45.040 21:43:05 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:45.040 21:43:05 -- common/autotest_common.sh@884 -- # size=4096 00:20:45.041 21:43:05 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:45.041 21:43:05 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:20:45.041 21:43:05 -- common/autotest_common.sh@887 -- # return 0 00:20:45.041 21:43:05 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:20:45.041 21:43:05 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:20:45.041 21:43:05 -- bdev/bdev_raid.sh@580 -- # '[' raid1 = raid5f ']' 00:20:45.041 21:43:05 -- bdev/bdev_raid.sh@584 -- # write_unit_size=1 00:20:45.041 21:43:05 -- bdev/bdev_raid.sh@586 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=63488 oflag=direct 00:20:51.645 63488+0 records in 00:20:51.645 63488+0 records out 00:20:51.645 32505856 bytes (33 MB, 31 MiB) copied, 6.7604 s, 4.8 MB/s 00:20:51.645 21:43:12 -- bdev/bdev_raid.sh@587 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:20:51.645 21:43:12 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:20:51.645 21:43:12 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:20:51.645 21:43:12 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:20:51.645 21:43:12 -- bdev/nbd_common.sh@51 -- # local i 00:20:51.645 21:43:12 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:20:51.645 21:43:12 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:20:51.904 21:43:12 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:20:51.904 21:43:12 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:20:51.904 21:43:12 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:20:51.904 21:43:12 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:20:51.904 [2024-12-06 21:43:12.368406] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:51.904 21:43:12 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:51.904 21:43:12 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:20:51.904 21:43:12 -- bdev/nbd_common.sh@41 -- # break 00:20:51.904 21:43:12 -- bdev/nbd_common.sh@45 -- # return 0 00:20:51.904 21:43:12 -- bdev/bdev_raid.sh@591 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:20:52.163 [2024-12-06 21:43:12.541892] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:20:52.163 21:43:12 -- bdev/bdev_raid.sh@594 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:20:52.163 21:43:12 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:20:52.163 21:43:12 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:20:52.163 21:43:12 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:20:52.163 21:43:12 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:20:52.163 21:43:12 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:20:52.163 21:43:12 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:52.163 21:43:12 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:20:52.163 21:43:12 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:52.163 21:43:12 -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:52.163 21:43:12 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:52.163 21:43:12 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == 
"raid_bdev1")' 00:20:52.421 21:43:12 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:52.422 "name": "raid_bdev1", 00:20:52.422 "uuid": "b9ad6df2-3cac-455c-827b-a1047937ab76", 00:20:52.422 "strip_size_kb": 0, 00:20:52.422 "state": "online", 00:20:52.422 "raid_level": "raid1", 00:20:52.422 "superblock": true, 00:20:52.422 "num_base_bdevs": 4, 00:20:52.422 "num_base_bdevs_discovered": 3, 00:20:52.422 "num_base_bdevs_operational": 3, 00:20:52.422 "base_bdevs_list": [ 00:20:52.422 { 00:20:52.422 "name": null, 00:20:52.422 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:52.422 "is_configured": false, 00:20:52.422 "data_offset": 2048, 00:20:52.422 "data_size": 63488 00:20:52.422 }, 00:20:52.422 { 00:20:52.422 "name": "BaseBdev2", 00:20:52.422 "uuid": "4633971a-734f-596f-9d73-bebf03b71ee4", 00:20:52.422 "is_configured": true, 00:20:52.422 "data_offset": 2048, 00:20:52.422 "data_size": 63488 00:20:52.422 }, 00:20:52.422 { 00:20:52.422 "name": "BaseBdev3", 00:20:52.422 "uuid": "d28ff591-01e8-5c15-ad63-04438d9b978a", 00:20:52.422 "is_configured": true, 00:20:52.422 "data_offset": 2048, 00:20:52.422 "data_size": 63488 00:20:52.422 }, 00:20:52.422 { 00:20:52.422 "name": "BaseBdev4", 00:20:52.422 "uuid": "7879739a-b212-58ef-ab29-922af2286afe", 00:20:52.422 "is_configured": true, 00:20:52.422 "data_offset": 2048, 00:20:52.422 "data_size": 63488 00:20:52.422 } 00:20:52.422 ] 00:20:52.422 }' 00:20:52.422 21:43:12 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:52.422 21:43:12 -- common/autotest_common.sh@10 -- # set +x 00:20:52.680 21:43:13 -- bdev/bdev_raid.sh@597 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:20:52.939 [2024-12-06 21:43:13.306180] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:20:52.939 [2024-12-06 21:43:13.306235] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:20:52.939 [2024-12-06 21:43:13.316731] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000ca2db0 00:20:52.939 [2024-12-06 21:43:13.318593] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:20:52.939 21:43:13 -- bdev/bdev_raid.sh@598 -- # sleep 1 00:20:53.876 21:43:14 -- bdev/bdev_raid.sh@601 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:53.876 21:43:14 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:20:53.876 21:43:14 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:20:53.876 21:43:14 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:20:53.876 21:43:14 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:20:53.876 21:43:14 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:53.876 21:43:14 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:54.135 21:43:14 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:20:54.135 "name": "raid_bdev1", 00:20:54.135 "uuid": "b9ad6df2-3cac-455c-827b-a1047937ab76", 00:20:54.135 "strip_size_kb": 0, 00:20:54.135 "state": "online", 00:20:54.135 "raid_level": "raid1", 00:20:54.135 "superblock": true, 00:20:54.135 "num_base_bdevs": 4, 00:20:54.135 "num_base_bdevs_discovered": 4, 00:20:54.135 "num_base_bdevs_operational": 4, 00:20:54.135 "process": { 00:20:54.135 "type": "rebuild", 00:20:54.135 "target": "spare", 00:20:54.135 "progress": { 00:20:54.135 "blocks": 24576, 00:20:54.135 "percent": 38 00:20:54.135 } 00:20:54.135 
}, 00:20:54.135 "base_bdevs_list": [ 00:20:54.135 { 00:20:54.135 "name": "spare", 00:20:54.135 "uuid": "944e0007-0d8c-5257-a4c0-9d7af668120a", 00:20:54.135 "is_configured": true, 00:20:54.135 "data_offset": 2048, 00:20:54.135 "data_size": 63488 00:20:54.135 }, 00:20:54.135 { 00:20:54.135 "name": "BaseBdev2", 00:20:54.135 "uuid": "4633971a-734f-596f-9d73-bebf03b71ee4", 00:20:54.135 "is_configured": true, 00:20:54.135 "data_offset": 2048, 00:20:54.135 "data_size": 63488 00:20:54.135 }, 00:20:54.135 { 00:20:54.135 "name": "BaseBdev3", 00:20:54.135 "uuid": "d28ff591-01e8-5c15-ad63-04438d9b978a", 00:20:54.135 "is_configured": true, 00:20:54.135 "data_offset": 2048, 00:20:54.135 "data_size": 63488 00:20:54.135 }, 00:20:54.135 { 00:20:54.135 "name": "BaseBdev4", 00:20:54.135 "uuid": "7879739a-b212-58ef-ab29-922af2286afe", 00:20:54.135 "is_configured": true, 00:20:54.135 "data_offset": 2048, 00:20:54.135 "data_size": 63488 00:20:54.135 } 00:20:54.135 ] 00:20:54.135 }' 00:20:54.135 21:43:14 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:20:54.135 21:43:14 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:54.135 21:43:14 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:20:54.135 21:43:14 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:20:54.135 21:43:14 -- bdev/bdev_raid.sh@604 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:20:54.393 [2024-12-06 21:43:14.756760] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:20:54.393 [2024-12-06 21:43:14.825118] bdev_raid.c:2294:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:20:54.393 [2024-12-06 21:43:14.825193] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:54.393 21:43:14 -- bdev/bdev_raid.sh@607 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:20:54.393 21:43:14 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:20:54.393 21:43:14 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:20:54.393 21:43:14 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:20:54.393 21:43:14 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:20:54.393 21:43:14 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:20:54.393 21:43:14 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:54.393 21:43:14 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:20:54.393 21:43:14 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:54.393 21:43:14 -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:54.393 21:43:14 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:54.393 21:43:14 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:54.652 21:43:15 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:54.652 "name": "raid_bdev1", 00:20:54.652 "uuid": "b9ad6df2-3cac-455c-827b-a1047937ab76", 00:20:54.652 "strip_size_kb": 0, 00:20:54.652 "state": "online", 00:20:54.652 "raid_level": "raid1", 00:20:54.652 "superblock": true, 00:20:54.652 "num_base_bdevs": 4, 00:20:54.652 "num_base_bdevs_discovered": 3, 00:20:54.652 "num_base_bdevs_operational": 3, 00:20:54.652 "base_bdevs_list": [ 00:20:54.652 { 00:20:54.652 "name": null, 00:20:54.652 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:54.652 "is_configured": false, 00:20:54.652 "data_offset": 2048, 00:20:54.652 "data_size": 63488 00:20:54.652 }, 
00:20:54.652 { 00:20:54.652 "name": "BaseBdev2", 00:20:54.652 "uuid": "4633971a-734f-596f-9d73-bebf03b71ee4", 00:20:54.652 "is_configured": true, 00:20:54.652 "data_offset": 2048, 00:20:54.652 "data_size": 63488 00:20:54.652 }, 00:20:54.652 { 00:20:54.652 "name": "BaseBdev3", 00:20:54.652 "uuid": "d28ff591-01e8-5c15-ad63-04438d9b978a", 00:20:54.652 "is_configured": true, 00:20:54.652 "data_offset": 2048, 00:20:54.652 "data_size": 63488 00:20:54.652 }, 00:20:54.652 { 00:20:54.652 "name": "BaseBdev4", 00:20:54.652 "uuid": "7879739a-b212-58ef-ab29-922af2286afe", 00:20:54.652 "is_configured": true, 00:20:54.652 "data_offset": 2048, 00:20:54.652 "data_size": 63488 00:20:54.652 } 00:20:54.652 ] 00:20:54.652 }' 00:20:54.652 21:43:15 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:54.652 21:43:15 -- common/autotest_common.sh@10 -- # set +x 00:20:55.217 21:43:15 -- bdev/bdev_raid.sh@610 -- # verify_raid_bdev_process raid_bdev1 none none 00:20:55.217 21:43:15 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:20:55.217 21:43:15 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:20:55.217 21:43:15 -- bdev/bdev_raid.sh@185 -- # local target=none 00:20:55.217 21:43:15 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:20:55.217 21:43:15 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:55.217 21:43:15 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:55.217 21:43:15 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:20:55.217 "name": "raid_bdev1", 00:20:55.217 "uuid": "b9ad6df2-3cac-455c-827b-a1047937ab76", 00:20:55.217 "strip_size_kb": 0, 00:20:55.217 "state": "online", 00:20:55.217 "raid_level": "raid1", 00:20:55.217 "superblock": true, 00:20:55.217 "num_base_bdevs": 4, 00:20:55.217 "num_base_bdevs_discovered": 3, 00:20:55.217 "num_base_bdevs_operational": 3, 00:20:55.217 "base_bdevs_list": [ 00:20:55.217 { 00:20:55.217 "name": null, 00:20:55.217 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:55.217 "is_configured": false, 00:20:55.217 "data_offset": 2048, 00:20:55.217 "data_size": 63488 00:20:55.217 }, 00:20:55.217 { 00:20:55.217 "name": "BaseBdev2", 00:20:55.217 "uuid": "4633971a-734f-596f-9d73-bebf03b71ee4", 00:20:55.217 "is_configured": true, 00:20:55.217 "data_offset": 2048, 00:20:55.217 "data_size": 63488 00:20:55.217 }, 00:20:55.217 { 00:20:55.217 "name": "BaseBdev3", 00:20:55.217 "uuid": "d28ff591-01e8-5c15-ad63-04438d9b978a", 00:20:55.217 "is_configured": true, 00:20:55.217 "data_offset": 2048, 00:20:55.217 "data_size": 63488 00:20:55.217 }, 00:20:55.217 { 00:20:55.217 "name": "BaseBdev4", 00:20:55.217 "uuid": "7879739a-b212-58ef-ab29-922af2286afe", 00:20:55.217 "is_configured": true, 00:20:55.217 "data_offset": 2048, 00:20:55.217 "data_size": 63488 00:20:55.217 } 00:20:55.217 ] 00:20:55.217 }' 00:20:55.217 21:43:15 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:20:55.217 21:43:15 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:20:55.217 21:43:15 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:20:55.217 21:43:15 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:20:55.217 21:43:15 -- bdev/bdev_raid.sh@613 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:20:55.475 [2024-12-06 21:43:15.800077] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:20:55.475 [2024-12-06 21:43:15.800128] 
bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:20:55.475 [2024-12-06 21:43:15.809740] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000ca2e80 00:20:55.475 [2024-12-06 21:43:15.811474] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:20:55.475 21:43:15 -- bdev/bdev_raid.sh@614 -- # sleep 1 00:20:56.406 21:43:16 -- bdev/bdev_raid.sh@615 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:56.406 21:43:16 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:20:56.406 21:43:16 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:20:56.406 21:43:16 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:20:56.406 21:43:16 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:20:56.406 21:43:16 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:56.406 21:43:16 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:56.664 21:43:17 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:20:56.664 "name": "raid_bdev1", 00:20:56.664 "uuid": "b9ad6df2-3cac-455c-827b-a1047937ab76", 00:20:56.664 "strip_size_kb": 0, 00:20:56.664 "state": "online", 00:20:56.664 "raid_level": "raid1", 00:20:56.664 "superblock": true, 00:20:56.664 "num_base_bdevs": 4, 00:20:56.664 "num_base_bdevs_discovered": 4, 00:20:56.664 "num_base_bdevs_operational": 4, 00:20:56.664 "process": { 00:20:56.664 "type": "rebuild", 00:20:56.664 "target": "spare", 00:20:56.664 "progress": { 00:20:56.664 "blocks": 22528, 00:20:56.664 "percent": 35 00:20:56.664 } 00:20:56.664 }, 00:20:56.664 "base_bdevs_list": [ 00:20:56.664 { 00:20:56.664 "name": "spare", 00:20:56.664 "uuid": "944e0007-0d8c-5257-a4c0-9d7af668120a", 00:20:56.664 "is_configured": true, 00:20:56.664 "data_offset": 2048, 00:20:56.664 "data_size": 63488 00:20:56.664 }, 00:20:56.664 { 00:20:56.664 "name": "BaseBdev2", 00:20:56.664 "uuid": "4633971a-734f-596f-9d73-bebf03b71ee4", 00:20:56.664 "is_configured": true, 00:20:56.664 "data_offset": 2048, 00:20:56.664 "data_size": 63488 00:20:56.664 }, 00:20:56.664 { 00:20:56.664 "name": "BaseBdev3", 00:20:56.664 "uuid": "d28ff591-01e8-5c15-ad63-04438d9b978a", 00:20:56.664 "is_configured": true, 00:20:56.664 "data_offset": 2048, 00:20:56.664 "data_size": 63488 00:20:56.664 }, 00:20:56.664 { 00:20:56.664 "name": "BaseBdev4", 00:20:56.664 "uuid": "7879739a-b212-58ef-ab29-922af2286afe", 00:20:56.664 "is_configured": true, 00:20:56.664 "data_offset": 2048, 00:20:56.664 "data_size": 63488 00:20:56.664 } 00:20:56.664 ] 00:20:56.664 }' 00:20:56.664 21:43:17 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:20:56.664 21:43:17 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:56.664 21:43:17 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:20:56.664 21:43:17 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:20:56.664 21:43:17 -- bdev/bdev_raid.sh@617 -- # '[' true = true ']' 00:20:56.664 21:43:17 -- bdev/bdev_raid.sh@617 -- # '[' = false ']' 00:20:56.664 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 617: [: =: unary operator expected 00:20:56.664 21:43:17 -- bdev/bdev_raid.sh@642 -- # local num_base_bdevs_operational=4 00:20:56.664 21:43:17 -- bdev/bdev_raid.sh@644 -- # '[' raid1 = raid1 ']' 00:20:56.664 21:43:17 -- bdev/bdev_raid.sh@644 -- # '[' 4 -gt 2 ']' 00:20:56.664 21:43:17 -- bdev/bdev_raid.sh@646 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:20:56.922 [2024-12-06 21:43:17.281995] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:20:56.922 [2024-12-06 21:43:17.317931] bdev_raid.c:1835:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x50d000ca2e80 00:20:57.180 21:43:17 -- bdev/bdev_raid.sh@649 -- # base_bdevs[1]= 00:20:57.180 21:43:17 -- bdev/bdev_raid.sh@650 -- # (( num_base_bdevs_operational-- )) 00:20:57.180 21:43:17 -- bdev/bdev_raid.sh@653 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:57.180 21:43:17 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:20:57.180 21:43:17 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:20:57.180 21:43:17 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:20:57.180 21:43:17 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:20:57.180 21:43:17 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:57.180 21:43:17 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:57.180 21:43:17 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:20:57.180 "name": "raid_bdev1", 00:20:57.180 "uuid": "b9ad6df2-3cac-455c-827b-a1047937ab76", 00:20:57.180 "strip_size_kb": 0, 00:20:57.180 "state": "online", 00:20:57.180 "raid_level": "raid1", 00:20:57.180 "superblock": true, 00:20:57.180 "num_base_bdevs": 4, 00:20:57.180 "num_base_bdevs_discovered": 3, 00:20:57.180 "num_base_bdevs_operational": 3, 00:20:57.180 "process": { 00:20:57.180 "type": "rebuild", 00:20:57.180 "target": "spare", 00:20:57.180 "progress": { 00:20:57.180 "blocks": 34816, 00:20:57.180 "percent": 54 00:20:57.180 } 00:20:57.180 }, 00:20:57.180 "base_bdevs_list": [ 00:20:57.180 { 00:20:57.180 "name": "spare", 00:20:57.180 "uuid": "944e0007-0d8c-5257-a4c0-9d7af668120a", 00:20:57.180 "is_configured": true, 00:20:57.180 "data_offset": 2048, 00:20:57.180 "data_size": 63488 00:20:57.180 }, 00:20:57.180 { 00:20:57.181 "name": null, 00:20:57.181 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:57.181 "is_configured": false, 00:20:57.181 "data_offset": 2048, 00:20:57.181 "data_size": 63488 00:20:57.181 }, 00:20:57.181 { 00:20:57.181 "name": "BaseBdev3", 00:20:57.181 "uuid": "d28ff591-01e8-5c15-ad63-04438d9b978a", 00:20:57.181 "is_configured": true, 00:20:57.181 "data_offset": 2048, 00:20:57.181 "data_size": 63488 00:20:57.181 }, 00:20:57.181 { 00:20:57.181 "name": "BaseBdev4", 00:20:57.181 "uuid": "7879739a-b212-58ef-ab29-922af2286afe", 00:20:57.181 "is_configured": true, 00:20:57.181 "data_offset": 2048, 00:20:57.181 "data_size": 63488 00:20:57.181 } 00:20:57.181 ] 00:20:57.181 }' 00:20:57.181 21:43:17 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:20:57.181 21:43:17 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:57.181 21:43:17 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:20:57.181 21:43:17 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:20:57.181 21:43:17 -- bdev/bdev_raid.sh@657 -- # local timeout=454 00:20:57.181 21:43:17 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:20:57.181 21:43:17 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:57.181 21:43:17 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:20:57.181 21:43:17 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:20:57.181 21:43:17 -- bdev/bdev_raid.sh@185 -- # local 
target=spare 00:20:57.181 21:43:17 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:20:57.181 21:43:17 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:57.181 21:43:17 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:57.440 21:43:17 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:20:57.440 "name": "raid_bdev1", 00:20:57.440 "uuid": "b9ad6df2-3cac-455c-827b-a1047937ab76", 00:20:57.440 "strip_size_kb": 0, 00:20:57.440 "state": "online", 00:20:57.440 "raid_level": "raid1", 00:20:57.440 "superblock": true, 00:20:57.440 "num_base_bdevs": 4, 00:20:57.440 "num_base_bdevs_discovered": 3, 00:20:57.440 "num_base_bdevs_operational": 3, 00:20:57.440 "process": { 00:20:57.440 "type": "rebuild", 00:20:57.440 "target": "spare", 00:20:57.440 "progress": { 00:20:57.440 "blocks": 40960, 00:20:57.440 "percent": 64 00:20:57.440 } 00:20:57.440 }, 00:20:57.440 "base_bdevs_list": [ 00:20:57.440 { 00:20:57.440 "name": "spare", 00:20:57.440 "uuid": "944e0007-0d8c-5257-a4c0-9d7af668120a", 00:20:57.440 "is_configured": true, 00:20:57.440 "data_offset": 2048, 00:20:57.440 "data_size": 63488 00:20:57.440 }, 00:20:57.440 { 00:20:57.440 "name": null, 00:20:57.440 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:57.440 "is_configured": false, 00:20:57.440 "data_offset": 2048, 00:20:57.440 "data_size": 63488 00:20:57.440 }, 00:20:57.440 { 00:20:57.440 "name": "BaseBdev3", 00:20:57.440 "uuid": "d28ff591-01e8-5c15-ad63-04438d9b978a", 00:20:57.440 "is_configured": true, 00:20:57.440 "data_offset": 2048, 00:20:57.440 "data_size": 63488 00:20:57.440 }, 00:20:57.440 { 00:20:57.440 "name": "BaseBdev4", 00:20:57.440 "uuid": "7879739a-b212-58ef-ab29-922af2286afe", 00:20:57.440 "is_configured": true, 00:20:57.440 "data_offset": 2048, 00:20:57.440 "data_size": 63488 00:20:57.440 } 00:20:57.440 ] 00:20:57.440 }' 00:20:57.440 21:43:17 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:20:57.440 21:43:17 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:57.440 21:43:17 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:20:57.440 21:43:17 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:20:57.440 21:43:17 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:20:58.814 21:43:18 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:20:58.814 21:43:18 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:58.814 21:43:18 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:20:58.814 21:43:18 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:20:58.814 21:43:18 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:20:58.814 21:43:18 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:20:58.814 21:43:18 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:58.814 21:43:18 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:58.814 [2024-12-06 21:43:18.925020] bdev_raid.c:2568:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:20:58.814 [2024-12-06 21:43:18.925108] bdev_raid.c:2285:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:20:58.814 [2024-12-06 21:43:18.925291] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:58.814 21:43:19 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:20:58.814 "name": "raid_bdev1", 00:20:58.814 "uuid": 
"b9ad6df2-3cac-455c-827b-a1047937ab76", 00:20:58.814 "strip_size_kb": 0, 00:20:58.814 "state": "online", 00:20:58.814 "raid_level": "raid1", 00:20:58.814 "superblock": true, 00:20:58.814 "num_base_bdevs": 4, 00:20:58.814 "num_base_bdevs_discovered": 3, 00:20:58.814 "num_base_bdevs_operational": 3, 00:20:58.814 "base_bdevs_list": [ 00:20:58.814 { 00:20:58.814 "name": "spare", 00:20:58.814 "uuid": "944e0007-0d8c-5257-a4c0-9d7af668120a", 00:20:58.814 "is_configured": true, 00:20:58.814 "data_offset": 2048, 00:20:58.814 "data_size": 63488 00:20:58.814 }, 00:20:58.814 { 00:20:58.815 "name": null, 00:20:58.815 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:58.815 "is_configured": false, 00:20:58.815 "data_offset": 2048, 00:20:58.815 "data_size": 63488 00:20:58.815 }, 00:20:58.815 { 00:20:58.815 "name": "BaseBdev3", 00:20:58.815 "uuid": "d28ff591-01e8-5c15-ad63-04438d9b978a", 00:20:58.815 "is_configured": true, 00:20:58.815 "data_offset": 2048, 00:20:58.815 "data_size": 63488 00:20:58.815 }, 00:20:58.815 { 00:20:58.815 "name": "BaseBdev4", 00:20:58.815 "uuid": "7879739a-b212-58ef-ab29-922af2286afe", 00:20:58.815 "is_configured": true, 00:20:58.815 "data_offset": 2048, 00:20:58.815 "data_size": 63488 00:20:58.815 } 00:20:58.815 ] 00:20:58.815 }' 00:20:58.815 21:43:19 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:20:58.815 21:43:19 -- bdev/bdev_raid.sh@190 -- # [[ none == \r\e\b\u\i\l\d ]] 00:20:58.815 21:43:19 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:20:58.815 21:43:19 -- bdev/bdev_raid.sh@191 -- # [[ none == \s\p\a\r\e ]] 00:20:58.815 21:43:19 -- bdev/bdev_raid.sh@660 -- # break 00:20:58.815 21:43:19 -- bdev/bdev_raid.sh@666 -- # verify_raid_bdev_process raid_bdev1 none none 00:20:58.815 21:43:19 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:20:58.815 21:43:19 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:20:58.815 21:43:19 -- bdev/bdev_raid.sh@185 -- # local target=none 00:20:58.815 21:43:19 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:20:58.815 21:43:19 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:58.815 21:43:19 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:59.073 21:43:19 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:20:59.073 "name": "raid_bdev1", 00:20:59.073 "uuid": "b9ad6df2-3cac-455c-827b-a1047937ab76", 00:20:59.073 "strip_size_kb": 0, 00:20:59.073 "state": "online", 00:20:59.073 "raid_level": "raid1", 00:20:59.073 "superblock": true, 00:20:59.073 "num_base_bdevs": 4, 00:20:59.073 "num_base_bdevs_discovered": 3, 00:20:59.073 "num_base_bdevs_operational": 3, 00:20:59.073 "base_bdevs_list": [ 00:20:59.073 { 00:20:59.073 "name": "spare", 00:20:59.073 "uuid": "944e0007-0d8c-5257-a4c0-9d7af668120a", 00:20:59.073 "is_configured": true, 00:20:59.074 "data_offset": 2048, 00:20:59.074 "data_size": 63488 00:20:59.074 }, 00:20:59.074 { 00:20:59.074 "name": null, 00:20:59.074 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:59.074 "is_configured": false, 00:20:59.074 "data_offset": 2048, 00:20:59.074 "data_size": 63488 00:20:59.074 }, 00:20:59.074 { 00:20:59.074 "name": "BaseBdev3", 00:20:59.074 "uuid": "d28ff591-01e8-5c15-ad63-04438d9b978a", 00:20:59.074 "is_configured": true, 00:20:59.074 "data_offset": 2048, 00:20:59.074 "data_size": 63488 00:20:59.074 }, 00:20:59.074 { 00:20:59.074 "name": "BaseBdev4", 00:20:59.074 "uuid": "7879739a-b212-58ef-ab29-922af2286afe", 00:20:59.074 
"is_configured": true, 00:20:59.074 "data_offset": 2048, 00:20:59.074 "data_size": 63488 00:20:59.074 } 00:20:59.074 ] 00:20:59.074 }' 00:20:59.074 21:43:19 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:20:59.074 21:43:19 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:20:59.074 21:43:19 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:20:59.074 21:43:19 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:20:59.074 21:43:19 -- bdev/bdev_raid.sh@667 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:20:59.074 21:43:19 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:20:59.074 21:43:19 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:20:59.074 21:43:19 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:20:59.074 21:43:19 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:20:59.074 21:43:19 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:20:59.074 21:43:19 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:59.074 21:43:19 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:20:59.074 21:43:19 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:59.074 21:43:19 -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:59.074 21:43:19 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:59.074 21:43:19 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:59.333 21:43:19 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:59.333 "name": "raid_bdev1", 00:20:59.333 "uuid": "b9ad6df2-3cac-455c-827b-a1047937ab76", 00:20:59.333 "strip_size_kb": 0, 00:20:59.333 "state": "online", 00:20:59.333 "raid_level": "raid1", 00:20:59.333 "superblock": true, 00:20:59.333 "num_base_bdevs": 4, 00:20:59.333 "num_base_bdevs_discovered": 3, 00:20:59.333 "num_base_bdevs_operational": 3, 00:20:59.333 "base_bdevs_list": [ 00:20:59.333 { 00:20:59.333 "name": "spare", 00:20:59.333 "uuid": "944e0007-0d8c-5257-a4c0-9d7af668120a", 00:20:59.333 "is_configured": true, 00:20:59.333 "data_offset": 2048, 00:20:59.333 "data_size": 63488 00:20:59.333 }, 00:20:59.333 { 00:20:59.333 "name": null, 00:20:59.333 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:59.333 "is_configured": false, 00:20:59.333 "data_offset": 2048, 00:20:59.333 "data_size": 63488 00:20:59.333 }, 00:20:59.333 { 00:20:59.333 "name": "BaseBdev3", 00:20:59.333 "uuid": "d28ff591-01e8-5c15-ad63-04438d9b978a", 00:20:59.333 "is_configured": true, 00:20:59.333 "data_offset": 2048, 00:20:59.333 "data_size": 63488 00:20:59.333 }, 00:20:59.333 { 00:20:59.333 "name": "BaseBdev4", 00:20:59.333 "uuid": "7879739a-b212-58ef-ab29-922af2286afe", 00:20:59.333 "is_configured": true, 00:20:59.333 "data_offset": 2048, 00:20:59.333 "data_size": 63488 00:20:59.333 } 00:20:59.333 ] 00:20:59.333 }' 00:20:59.333 21:43:19 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:59.333 21:43:19 -- common/autotest_common.sh@10 -- # set +x 00:20:59.592 21:43:20 -- bdev/bdev_raid.sh@670 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:20:59.851 [2024-12-06 21:43:20.239900] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:59.851 [2024-12-06 21:43:20.239938] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:59.851 [2024-12-06 21:43:20.240016] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:59.851 [2024-12-06 
21:43:20.240101] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:59.851 [2024-12-06 21:43:20.240141] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x51600000a580 name raid_bdev1, state offline 00:20:59.851 21:43:20 -- bdev/bdev_raid.sh@671 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:59.851 21:43:20 -- bdev/bdev_raid.sh@671 -- # jq length 00:21:00.109 21:43:20 -- bdev/bdev_raid.sh@671 -- # [[ 0 == 0 ]] 00:21:00.109 21:43:20 -- bdev/bdev_raid.sh@673 -- # '[' false = true ']' 00:21:00.109 21:43:20 -- bdev/bdev_raid.sh@687 -- # nbd_start_disks /var/tmp/spdk-raid.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:21:00.109 21:43:20 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:21:00.109 21:43:20 -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:21:00.109 21:43:20 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:21:00.109 21:43:20 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:21:00.109 21:43:20 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:21:00.109 21:43:20 -- bdev/nbd_common.sh@12 -- # local i 00:21:00.109 21:43:20 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:21:00.109 21:43:20 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:21:00.109 21:43:20 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:21:00.368 /dev/nbd0 00:21:00.368 21:43:20 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:21:00.368 21:43:20 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:21:00.368 21:43:20 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:21:00.368 21:43:20 -- common/autotest_common.sh@867 -- # local i 00:21:00.368 21:43:20 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:21:00.368 21:43:20 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:21:00.368 21:43:20 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:21:00.368 21:43:20 -- common/autotest_common.sh@871 -- # break 00:21:00.368 21:43:20 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:21:00.368 21:43:20 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:21:00.368 21:43:20 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:21:00.368 1+0 records in 00:21:00.368 1+0 records out 00:21:00.368 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000508092 s, 8.1 MB/s 00:21:00.368 21:43:20 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:00.368 21:43:20 -- common/autotest_common.sh@884 -- # size=4096 00:21:00.368 21:43:20 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:00.368 21:43:20 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:21:00.368 21:43:20 -- common/autotest_common.sh@887 -- # return 0 00:21:00.368 21:43:20 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:21:00.368 21:43:20 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:21:00.369 21:43:20 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd1 00:21:00.627 /dev/nbd1 00:21:00.628 21:43:20 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:21:00.628 21:43:20 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:21:00.628 21:43:20 -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:21:00.628 21:43:20 -- 
common/autotest_common.sh@867 -- # local i 00:21:00.628 21:43:20 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:21:00.628 21:43:20 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:21:00.628 21:43:20 -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:21:00.628 21:43:20 -- common/autotest_common.sh@871 -- # break 00:21:00.628 21:43:20 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:21:00.628 21:43:20 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:21:00.628 21:43:20 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:21:00.628 1+0 records in 00:21:00.628 1+0 records out 00:21:00.628 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000355949 s, 11.5 MB/s 00:21:00.628 21:43:20 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:00.628 21:43:20 -- common/autotest_common.sh@884 -- # size=4096 00:21:00.628 21:43:20 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:00.628 21:43:20 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:21:00.628 21:43:20 -- common/autotest_common.sh@887 -- # return 0 00:21:00.628 21:43:20 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:21:00.628 21:43:20 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:21:00.628 21:43:20 -- bdev/bdev_raid.sh@688 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:21:00.885 21:43:21 -- bdev/bdev_raid.sh@689 -- # nbd_stop_disks /var/tmp/spdk-raid.sock '/dev/nbd0 /dev/nbd1' 00:21:00.885 21:43:21 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:21:00.885 21:43:21 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:21:00.885 21:43:21 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:21:00.885 21:43:21 -- bdev/nbd_common.sh@51 -- # local i 00:21:00.885 21:43:21 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:21:00.885 21:43:21 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:21:00.885 21:43:21 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:21:00.885 21:43:21 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:21:00.885 21:43:21 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:21:00.885 21:43:21 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:21:00.885 21:43:21 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:21:00.885 21:43:21 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:21:00.885 21:43:21 -- bdev/nbd_common.sh@41 -- # break 00:21:00.885 21:43:21 -- bdev/nbd_common.sh@45 -- # return 0 00:21:00.885 21:43:21 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:21:00.885 21:43:21 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:21:01.143 21:43:21 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:21:01.143 21:43:21 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:21:01.143 21:43:21 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:21:01.143 21:43:21 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:21:01.143 21:43:21 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:21:01.143 21:43:21 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:21:01.143 21:43:21 -- bdev/nbd_common.sh@41 -- # break 00:21:01.143 21:43:21 -- bdev/nbd_common.sh@45 -- # return 0 00:21:01.143 21:43:21 -- bdev/bdev_raid.sh@692 -- # '[' true = true ']' 00:21:01.143 21:43:21 -- bdev/bdev_raid.sh@694 -- # for bdev in 
"${base_bdevs[@]}" 00:21:01.143 21:43:21 -- bdev/bdev_raid.sh@695 -- # '[' -z BaseBdev1 ']' 00:21:01.143 21:43:21 -- bdev/bdev_raid.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev1 00:21:01.401 21:43:21 -- bdev/bdev_raid.sh@699 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:21:01.659 [2024-12-06 21:43:21.960851] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:21:01.659 [2024-12-06 21:43:21.960927] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:01.659 [2024-12-06 21:43:21.960961] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x51600000b480 00:21:01.659 [2024-12-06 21:43:21.960975] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:01.659 [2024-12-06 21:43:21.963349] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:01.659 [2024-12-06 21:43:21.963387] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:21:01.659 [2024-12-06 21:43:21.963546] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev BaseBdev1 00:21:01.659 [2024-12-06 21:43:21.963601] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:21:01.659 BaseBdev1 00:21:01.659 21:43:21 -- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}" 00:21:01.659 21:43:21 -- bdev/bdev_raid.sh@695 -- # '[' -z '' ']' 00:21:01.659 21:43:21 -- bdev/bdev_raid.sh@696 -- # continue 00:21:01.659 21:43:21 -- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}" 00:21:01.659 21:43:21 -- bdev/bdev_raid.sh@695 -- # '[' -z BaseBdev3 ']' 00:21:01.659 21:43:21 -- bdev/bdev_raid.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev3 00:21:01.917 21:43:22 -- bdev/bdev_raid.sh@699 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:21:01.917 [2024-12-06 21:43:22.388914] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:21:01.917 [2024-12-06 21:43:22.389006] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:01.917 [2024-12-06 21:43:22.389039] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x51600000bd80 00:21:01.917 [2024-12-06 21:43:22.389053] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:01.917 [2024-12-06 21:43:22.389538] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:01.917 [2024-12-06 21:43:22.389571] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:21:01.917 [2024-12-06 21:43:22.389669] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev BaseBdev3 00:21:01.917 [2024-12-06 21:43:22.389685] bdev_raid.c:3237:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev3 (4) greater than existing raid bdev raid_bdev1 (1) 00:21:01.917 [2024-12-06 21:43:22.389698] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:21:01.917 [2024-12-06 21:43:22.389722] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x51600000ba80 name raid_bdev1, state configuring 00:21:01.917 [2024-12-06 21:43:22.389801] 
bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:21:01.917 BaseBdev3 00:21:01.917 21:43:22 -- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}" 00:21:01.917 21:43:22 -- bdev/bdev_raid.sh@695 -- # '[' -z BaseBdev4 ']' 00:21:01.917 21:43:22 -- bdev/bdev_raid.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev4 00:21:02.174 21:43:22 -- bdev/bdev_raid.sh@699 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:21:02.432 [2024-12-06 21:43:22.745007] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:21:02.432 [2024-12-06 21:43:22.745082] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:02.432 [2024-12-06 21:43:22.745112] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x51600000c380 00:21:02.432 [2024-12-06 21:43:22.745126] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:02.432 [2024-12-06 21:43:22.745652] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:02.432 [2024-12-06 21:43:22.745721] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:21:02.432 [2024-12-06 21:43:22.745824] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev BaseBdev4 00:21:02.432 [2024-12-06 21:43:22.745856] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:21:02.432 BaseBdev4 00:21:02.432 21:43:22 -- bdev/bdev_raid.sh@701 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:21:02.690 21:43:22 -- bdev/bdev_raid.sh@702 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:21:02.690 [2024-12-06 21:43:23.105059] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:21:02.690 [2024-12-06 21:43:23.105141] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:02.690 [2024-12-06 21:43:23.105171] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x51600000c680 00:21:02.690 [2024-12-06 21:43:23.105186] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:02.690 [2024-12-06 21:43:23.105701] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:02.690 [2024-12-06 21:43:23.105767] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:21:02.690 [2024-12-06 21:43:23.105877] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev spare 00:21:02.690 [2024-12-06 21:43:23.105910] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:21:02.690 spare 00:21:02.690 21:43:23 -- bdev/bdev_raid.sh@704 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:21:02.690 21:43:23 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:21:02.690 21:43:23 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:21:02.690 21:43:23 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:21:02.690 21:43:23 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:21:02.690 21:43:23 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:21:02.690 21:43:23 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:21:02.690 21:43:23 -- 
bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:21:02.690 21:43:23 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:21:02.690 21:43:23 -- bdev/bdev_raid.sh@125 -- # local tmp 00:21:02.690 21:43:23 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:02.690 21:43:23 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:02.948 [2024-12-06 21:43:23.206098] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x51600000c080 00:21:02.948 [2024-12-06 21:43:23.206145] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:21:02.948 [2024-12-06 21:43:23.206257] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000cc1530 00:21:02.948 [2024-12-06 21:43:23.206702] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x51600000c080 00:21:02.948 [2024-12-06 21:43:23.206728] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x51600000c080 00:21:02.948 [2024-12-06 21:43:23.206874] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:02.949 21:43:23 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:21:02.949 "name": "raid_bdev1", 00:21:02.949 "uuid": "b9ad6df2-3cac-455c-827b-a1047937ab76", 00:21:02.949 "strip_size_kb": 0, 00:21:02.949 "state": "online", 00:21:02.949 "raid_level": "raid1", 00:21:02.949 "superblock": true, 00:21:02.949 "num_base_bdevs": 4, 00:21:02.949 "num_base_bdevs_discovered": 3, 00:21:02.949 "num_base_bdevs_operational": 3, 00:21:02.949 "base_bdevs_list": [ 00:21:02.949 { 00:21:02.949 "name": "spare", 00:21:02.949 "uuid": "944e0007-0d8c-5257-a4c0-9d7af668120a", 00:21:02.949 "is_configured": true, 00:21:02.949 "data_offset": 2048, 00:21:02.949 "data_size": 63488 00:21:02.949 }, 00:21:02.949 { 00:21:02.949 "name": null, 00:21:02.949 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:02.949 "is_configured": false, 00:21:02.949 "data_offset": 2048, 00:21:02.949 "data_size": 63488 00:21:02.949 }, 00:21:02.949 { 00:21:02.949 "name": "BaseBdev3", 00:21:02.949 "uuid": "d28ff591-01e8-5c15-ad63-04438d9b978a", 00:21:02.949 "is_configured": true, 00:21:02.949 "data_offset": 2048, 00:21:02.949 "data_size": 63488 00:21:02.949 }, 00:21:02.949 { 00:21:02.949 "name": "BaseBdev4", 00:21:02.949 "uuid": "7879739a-b212-58ef-ab29-922af2286afe", 00:21:02.949 "is_configured": true, 00:21:02.949 "data_offset": 2048, 00:21:02.949 "data_size": 63488 00:21:02.949 } 00:21:02.949 ] 00:21:02.949 }' 00:21:02.949 21:43:23 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:21:02.949 21:43:23 -- common/autotest_common.sh@10 -- # set +x 00:21:03.209 21:43:23 -- bdev/bdev_raid.sh@705 -- # verify_raid_bdev_process raid_bdev1 none none 00:21:03.209 21:43:23 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:21:03.209 21:43:23 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:21:03.209 21:43:23 -- bdev/bdev_raid.sh@185 -- # local target=none 00:21:03.209 21:43:23 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:21:03.209 21:43:23 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:03.209 21:43:23 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:03.467 21:43:23 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:21:03.467 "name": "raid_bdev1", 00:21:03.467 "uuid": "b9ad6df2-3cac-455c-827b-a1047937ab76", 
00:21:03.467 "strip_size_kb": 0, 00:21:03.467 "state": "online", 00:21:03.467 "raid_level": "raid1", 00:21:03.467 "superblock": true, 00:21:03.467 "num_base_bdevs": 4, 00:21:03.467 "num_base_bdevs_discovered": 3, 00:21:03.467 "num_base_bdevs_operational": 3, 00:21:03.467 "base_bdevs_list": [ 00:21:03.467 { 00:21:03.467 "name": "spare", 00:21:03.467 "uuid": "944e0007-0d8c-5257-a4c0-9d7af668120a", 00:21:03.467 "is_configured": true, 00:21:03.467 "data_offset": 2048, 00:21:03.467 "data_size": 63488 00:21:03.467 }, 00:21:03.467 { 00:21:03.467 "name": null, 00:21:03.467 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:03.467 "is_configured": false, 00:21:03.467 "data_offset": 2048, 00:21:03.467 "data_size": 63488 00:21:03.467 }, 00:21:03.467 { 00:21:03.467 "name": "BaseBdev3", 00:21:03.467 "uuid": "d28ff591-01e8-5c15-ad63-04438d9b978a", 00:21:03.467 "is_configured": true, 00:21:03.467 "data_offset": 2048, 00:21:03.467 "data_size": 63488 00:21:03.467 }, 00:21:03.467 { 00:21:03.467 "name": "BaseBdev4", 00:21:03.467 "uuid": "7879739a-b212-58ef-ab29-922af2286afe", 00:21:03.467 "is_configured": true, 00:21:03.467 "data_offset": 2048, 00:21:03.467 "data_size": 63488 00:21:03.467 } 00:21:03.467 ] 00:21:03.467 }' 00:21:03.467 21:43:23 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:21:03.467 21:43:23 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:21:03.467 21:43:23 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:21:03.467 21:43:23 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:21:03.467 21:43:23 -- bdev/bdev_raid.sh@706 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:03.467 21:43:23 -- bdev/bdev_raid.sh@706 -- # jq -r '.[].base_bdevs_list[0].name' 00:21:03.726 21:43:23 -- bdev/bdev_raid.sh@706 -- # [[ spare == \s\p\a\r\e ]] 00:21:03.727 21:43:23 -- bdev/bdev_raid.sh@709 -- # killprocess 80513 00:21:03.727 21:43:23 -- common/autotest_common.sh@936 -- # '[' -z 80513 ']' 00:21:03.727 21:43:23 -- common/autotest_common.sh@940 -- # kill -0 80513 00:21:03.727 21:43:23 -- common/autotest_common.sh@941 -- # uname 00:21:03.727 21:43:23 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:21:03.727 21:43:23 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 80513 00:21:03.727 killing process with pid 80513 00:21:03.727 Received shutdown signal, test time was about 60.000000 seconds 00:21:03.727 00:21:03.727 Latency(us) 00:21:03.727 [2024-12-06T21:43:24.224Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:03.727 [2024-12-06T21:43:24.224Z] =================================================================================================================== 00:21:03.727 [2024-12-06T21:43:24.224Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:21:03.727 21:43:23 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:21:03.727 21:43:23 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:21:03.727 21:43:23 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 80513' 00:21:03.727 21:43:23 -- common/autotest_common.sh@955 -- # kill 80513 00:21:03.727 [2024-12-06 21:43:23.996396] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:21:03.727 21:43:23 -- common/autotest_common.sh@960 -- # wait 80513 00:21:03.727 [2024-12-06 21:43:23.996528] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:03.727 [2024-12-06 21:43:23.996616] bdev_raid.c: 
426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:03.727 [2024-12-06 21:43:23.996635] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x51600000c080 name raid_bdev1, state offline 00:21:03.985 [2024-12-06 21:43:24.306620] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:21:04.919 ************************************ 00:21:04.919 END TEST raid_rebuild_test_sb 00:21:04.919 ************************************ 00:21:04.919 21:43:25 -- bdev/bdev_raid.sh@711 -- # return 0 00:21:04.919 00:21:04.919 real 0m24.728s 00:21:04.919 user 0m33.357s 00:21:04.919 sys 0m4.311s 00:21:04.919 21:43:25 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:21:04.919 21:43:25 -- common/autotest_common.sh@10 -- # set +x 00:21:04.919 21:43:25 -- bdev/bdev_raid.sh@737 -- # run_test raid_rebuild_test_io raid_rebuild_test raid1 4 false true 00:21:04.919 21:43:25 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:21:04.919 21:43:25 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:21:04.919 21:43:25 -- common/autotest_common.sh@10 -- # set +x 00:21:04.919 ************************************ 00:21:04.919 START TEST raid_rebuild_test_io 00:21:04.919 ************************************ 00:21:04.919 21:43:25 -- common/autotest_common.sh@1114 -- # raid_rebuild_test raid1 4 false true 00:21:04.919 21:43:25 -- bdev/bdev_raid.sh@517 -- # local raid_level=raid1 00:21:04.919 21:43:25 -- bdev/bdev_raid.sh@518 -- # local num_base_bdevs=4 00:21:04.919 21:43:25 -- bdev/bdev_raid.sh@519 -- # local superblock=false 00:21:04.919 21:43:25 -- bdev/bdev_raid.sh@520 -- # local background_io=true 00:21:04.919 21:43:25 -- bdev/bdev_raid.sh@521 -- # (( i = 1 )) 00:21:04.919 21:43:25 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:21:04.919 21:43:25 -- bdev/bdev_raid.sh@523 -- # echo BaseBdev1 00:21:04.919 21:43:25 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:21:04.919 21:43:25 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:21:04.919 21:43:25 -- bdev/bdev_raid.sh@523 -- # echo BaseBdev2 00:21:04.919 21:43:25 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:21:04.919 21:43:25 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:21:04.919 21:43:25 -- bdev/bdev_raid.sh@523 -- # echo BaseBdev3 00:21:04.919 21:43:25 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:21:04.919 21:43:25 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:21:04.919 21:43:25 -- bdev/bdev_raid.sh@523 -- # echo BaseBdev4 00:21:04.919 21:43:25 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:21:04.919 21:43:25 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:21:04.919 21:43:25 -- bdev/bdev_raid.sh@521 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:21:04.919 21:43:25 -- bdev/bdev_raid.sh@521 -- # local base_bdevs 00:21:04.919 21:43:25 -- bdev/bdev_raid.sh@522 -- # local raid_bdev_name=raid_bdev1 00:21:04.919 21:43:25 -- bdev/bdev_raid.sh@523 -- # local strip_size 00:21:04.919 21:43:25 -- bdev/bdev_raid.sh@524 -- # local create_arg 00:21:04.919 21:43:25 -- bdev/bdev_raid.sh@525 -- # local raid_bdev_size 00:21:04.919 21:43:25 -- bdev/bdev_raid.sh@526 -- # local data_offset 00:21:04.919 21:43:25 -- bdev/bdev_raid.sh@528 -- # '[' raid1 '!=' raid1 ']' 00:21:04.919 21:43:25 -- bdev/bdev_raid.sh@536 -- # strip_size=0 00:21:04.919 21:43:25 -- bdev/bdev_raid.sh@539 -- # '[' false = true ']' 00:21:04.919 21:43:25 -- bdev/bdev_raid.sh@544 -- # raid_pid=81110 00:21:04.919 21:43:25 -- bdev/bdev_raid.sh@545 -- # waitforlisten 81110 
/var/tmp/spdk-raid.sock 00:21:04.919 21:43:25 -- common/autotest_common.sh@829 -- # '[' -z 81110 ']' 00:21:04.919 21:43:25 -- bdev/bdev_raid.sh@543 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:21:04.919 21:43:25 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:21:04.919 21:43:25 -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:04.919 21:43:25 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:21:04.919 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:21:04.919 21:43:25 -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:04.919 21:43:25 -- common/autotest_common.sh@10 -- # set +x 00:21:04.919 [2024-12-06 21:43:25.368603] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:21:04.919 [2024-12-06 21:43:25.368786] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81110 ] 00:21:04.919 I/O size of 3145728 is greater than zero copy threshold (65536). 00:21:04.919 Zero copy mechanism will not be used. 00:21:05.177 [2024-12-06 21:43:25.537463] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:05.435 [2024-12-06 21:43:25.688226] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:05.435 [2024-12-06 21:43:25.829097] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:05.998 21:43:26 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:05.998 21:43:26 -- common/autotest_common.sh@862 -- # return 0 00:21:05.998 21:43:26 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:21:05.998 21:43:26 -- bdev/bdev_raid.sh@549 -- # '[' false = true ']' 00:21:05.998 21:43:26 -- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:21:05.998 BaseBdev1 00:21:06.255 21:43:26 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:21:06.255 21:43:26 -- bdev/bdev_raid.sh@549 -- # '[' false = true ']' 00:21:06.255 21:43:26 -- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:21:06.255 BaseBdev2 00:21:06.255 21:43:26 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:21:06.255 21:43:26 -- bdev/bdev_raid.sh@549 -- # '[' false = true ']' 00:21:06.255 21:43:26 -- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:21:06.512 BaseBdev3 00:21:06.512 21:43:26 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:21:06.512 21:43:26 -- bdev/bdev_raid.sh@549 -- # '[' false = true ']' 00:21:06.512 21:43:26 -- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:21:06.770 BaseBdev4 00:21:06.770 21:43:27 -- bdev/bdev_raid.sh@558 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:21:07.028 spare_malloc 00:21:07.028 21:43:27 -- bdev/bdev_raid.sh@559 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:21:07.286 spare_delay 00:21:07.286 21:43:27 -- bdev/bdev_raid.sh@560 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:21:07.286 [2024-12-06 21:43:27.710763] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:21:07.286 [2024-12-06 21:43:27.710840] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:07.286 [2024-12-06 21:43:27.710868] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000008780 00:21:07.286 [2024-12-06 21:43:27.710884] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:07.286 [2024-12-06 21:43:27.713173] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:07.286 [2024-12-06 21:43:27.713232] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:21:07.286 spare 00:21:07.286 21:43:27 -- bdev/bdev_raid.sh@563 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 00:21:07.543 [2024-12-06 21:43:27.882854] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:21:07.543 [2024-12-06 21:43:27.884578] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:21:07.543 [2024-12-06 21:43:27.884634] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:21:07.543 [2024-12-06 21:43:27.884683] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:21:07.543 [2024-12-06 21:43:27.884749] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x516000008d80 00:21:07.543 [2024-12-06 21:43:27.884765] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:21:07.543 [2024-12-06 21:43:27.884926] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000005860 00:21:07.543 [2024-12-06 21:43:27.885264] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x516000008d80 00:21:07.543 [2024-12-06 21:43:27.885296] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x516000008d80 00:21:07.543 [2024-12-06 21:43:27.885468] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:07.543 21:43:27 -- bdev/bdev_raid.sh@564 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:21:07.543 21:43:27 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:21:07.543 21:43:27 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:21:07.543 21:43:27 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:21:07.543 21:43:27 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:21:07.543 21:43:27 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:21:07.543 21:43:27 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:21:07.543 21:43:27 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:21:07.543 21:43:27 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:21:07.543 21:43:27 -- bdev/bdev_raid.sh@125 -- # local tmp 00:21:07.543 21:43:27 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:07.543 21:43:27 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == 
"raid_bdev1")' 00:21:07.838 21:43:28 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:21:07.838 "name": "raid_bdev1", 00:21:07.838 "uuid": "b0937b44-7618-4070-a8cb-f107d4692629", 00:21:07.838 "strip_size_kb": 0, 00:21:07.838 "state": "online", 00:21:07.838 "raid_level": "raid1", 00:21:07.838 "superblock": false, 00:21:07.838 "num_base_bdevs": 4, 00:21:07.838 "num_base_bdevs_discovered": 4, 00:21:07.838 "num_base_bdevs_operational": 4, 00:21:07.838 "base_bdevs_list": [ 00:21:07.838 { 00:21:07.838 "name": "BaseBdev1", 00:21:07.838 "uuid": "6f41ae28-1072-41ff-9058-eed00147d2a4", 00:21:07.838 "is_configured": true, 00:21:07.838 "data_offset": 0, 00:21:07.838 "data_size": 65536 00:21:07.838 }, 00:21:07.838 { 00:21:07.838 "name": "BaseBdev2", 00:21:07.838 "uuid": "de9356f4-aa62-4e77-b3e3-2a0f96d3b58c", 00:21:07.838 "is_configured": true, 00:21:07.838 "data_offset": 0, 00:21:07.838 "data_size": 65536 00:21:07.838 }, 00:21:07.838 { 00:21:07.838 "name": "BaseBdev3", 00:21:07.838 "uuid": "0041093f-e9ac-4333-b489-138a2a99dc16", 00:21:07.838 "is_configured": true, 00:21:07.838 "data_offset": 0, 00:21:07.838 "data_size": 65536 00:21:07.838 }, 00:21:07.838 { 00:21:07.838 "name": "BaseBdev4", 00:21:07.838 "uuid": "efed4e7f-3aed-4fd8-98be-7ae15d396e9b", 00:21:07.838 "is_configured": true, 00:21:07.838 "data_offset": 0, 00:21:07.838 "data_size": 65536 00:21:07.838 } 00:21:07.838 ] 00:21:07.838 }' 00:21:07.838 21:43:28 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:21:07.838 21:43:28 -- common/autotest_common.sh@10 -- # set +x 00:21:08.119 21:43:28 -- bdev/bdev_raid.sh@567 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:21:08.119 21:43:28 -- bdev/bdev_raid.sh@567 -- # jq -r '.[].num_blocks' 00:21:08.377 [2024-12-06 21:43:28.659308] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:08.377 21:43:28 -- bdev/bdev_raid.sh@567 -- # raid_bdev_size=65536 00:21:08.377 21:43:28 -- bdev/bdev_raid.sh@570 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:08.377 21:43:28 -- bdev/bdev_raid.sh@570 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:21:08.377 21:43:28 -- bdev/bdev_raid.sh@570 -- # data_offset=0 00:21:08.377 21:43:28 -- bdev/bdev_raid.sh@572 -- # '[' true = true ']' 00:21:08.377 21:43:28 -- bdev/bdev_raid.sh@591 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:21:08.377 21:43:28 -- bdev/bdev_raid.sh@574 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:21:08.635 [2024-12-06 21:43:28.953115] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000005930 00:21:08.636 I/O size of 3145728 is greater than zero copy threshold (65536). 00:21:08.636 Zero copy mechanism will not be used. 00:21:08.636 Running I/O for 60 seconds... 
00:21:08.636 [2024-12-06 21:43:29.035271] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:21:08.636 [2024-12-06 21:43:29.041412] bdev_raid.c:1835:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x50d000005930 00:21:08.636 21:43:29 -- bdev/bdev_raid.sh@594 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:21:08.636 21:43:29 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:21:08.636 21:43:29 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:21:08.636 21:43:29 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:21:08.636 21:43:29 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:21:08.636 21:43:29 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:21:08.636 21:43:29 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:21:08.636 21:43:29 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:21:08.636 21:43:29 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:21:08.636 21:43:29 -- bdev/bdev_raid.sh@125 -- # local tmp 00:21:08.636 21:43:29 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:08.636 21:43:29 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:08.895 21:43:29 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:21:08.895 "name": "raid_bdev1", 00:21:08.895 "uuid": "b0937b44-7618-4070-a8cb-f107d4692629", 00:21:08.895 "strip_size_kb": 0, 00:21:08.895 "state": "online", 00:21:08.895 "raid_level": "raid1", 00:21:08.895 "superblock": false, 00:21:08.895 "num_base_bdevs": 4, 00:21:08.895 "num_base_bdevs_discovered": 3, 00:21:08.895 "num_base_bdevs_operational": 3, 00:21:08.895 "base_bdevs_list": [ 00:21:08.895 { 00:21:08.895 "name": null, 00:21:08.895 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:08.895 "is_configured": false, 00:21:08.895 "data_offset": 0, 00:21:08.895 "data_size": 65536 00:21:08.895 }, 00:21:08.895 { 00:21:08.895 "name": "BaseBdev2", 00:21:08.895 "uuid": "de9356f4-aa62-4e77-b3e3-2a0f96d3b58c", 00:21:08.895 "is_configured": true, 00:21:08.895 "data_offset": 0, 00:21:08.895 "data_size": 65536 00:21:08.895 }, 00:21:08.895 { 00:21:08.895 "name": "BaseBdev3", 00:21:08.895 "uuid": "0041093f-e9ac-4333-b489-138a2a99dc16", 00:21:08.895 "is_configured": true, 00:21:08.895 "data_offset": 0, 00:21:08.895 "data_size": 65536 00:21:08.895 }, 00:21:08.895 { 00:21:08.895 "name": "BaseBdev4", 00:21:08.895 "uuid": "efed4e7f-3aed-4fd8-98be-7ae15d396e9b", 00:21:08.895 "is_configured": true, 00:21:08.895 "data_offset": 0, 00:21:08.895 "data_size": 65536 00:21:08.895 } 00:21:08.895 ] 00:21:08.895 }' 00:21:08.895 21:43:29 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:21:08.895 21:43:29 -- common/autotest_common.sh@10 -- # set +x 00:21:09.154 21:43:29 -- bdev/bdev_raid.sh@597 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:21:09.414 [2024-12-06 21:43:29.776720] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:21:09.414 [2024-12-06 21:43:29.776775] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:21:09.414 [2024-12-06 21:43:29.817790] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000005a00 00:21:09.414 [2024-12-06 21:43:29.819618] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:21:09.414 21:43:29 -- bdev/bdev_raid.sh@598 -- # sleep 1 00:21:09.673 [2024-12-06 
21:43:29.943775] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:21:09.673 [2024-12-06 21:43:29.944236] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:21:09.673 [2024-12-06 21:43:30.072527] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:21:09.673 [2024-12-06 21:43:30.073153] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:21:09.932 [2024-12-06 21:43:30.408914] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:21:09.932 [2024-12-06 21:43:30.409326] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:21:10.191 [2024-12-06 21:43:30.540344] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:21:10.191 [2024-12-06 21:43:30.540630] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:21:10.450 21:43:30 -- bdev/bdev_raid.sh@601 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:10.450 21:43:30 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:21:10.450 21:43:30 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:21:10.450 21:43:30 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:21:10.450 21:43:30 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:21:10.450 21:43:30 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:10.450 21:43:30 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:10.450 [2024-12-06 21:43:30.869655] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:21:10.709 21:43:31 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:21:10.709 "name": "raid_bdev1", 00:21:10.709 "uuid": "b0937b44-7618-4070-a8cb-f107d4692629", 00:21:10.709 "strip_size_kb": 0, 00:21:10.709 "state": "online", 00:21:10.709 "raid_level": "raid1", 00:21:10.709 "superblock": false, 00:21:10.709 "num_base_bdevs": 4, 00:21:10.709 "num_base_bdevs_discovered": 4, 00:21:10.709 "num_base_bdevs_operational": 4, 00:21:10.709 "process": { 00:21:10.709 "type": "rebuild", 00:21:10.709 "target": "spare", 00:21:10.709 "progress": { 00:21:10.709 "blocks": 14336, 00:21:10.709 "percent": 21 00:21:10.709 } 00:21:10.709 }, 00:21:10.709 "base_bdevs_list": [ 00:21:10.709 { 00:21:10.709 "name": "spare", 00:21:10.709 "uuid": "7499ca33-031c-5b0c-a07c-703ec6e15262", 00:21:10.709 "is_configured": true, 00:21:10.709 "data_offset": 0, 00:21:10.709 "data_size": 65536 00:21:10.709 }, 00:21:10.709 { 00:21:10.709 "name": "BaseBdev2", 00:21:10.709 "uuid": "de9356f4-aa62-4e77-b3e3-2a0f96d3b58c", 00:21:10.709 "is_configured": true, 00:21:10.709 "data_offset": 0, 00:21:10.709 "data_size": 65536 00:21:10.709 }, 00:21:10.709 { 00:21:10.709 "name": "BaseBdev3", 00:21:10.709 "uuid": "0041093f-e9ac-4333-b489-138a2a99dc16", 00:21:10.709 "is_configured": true, 00:21:10.709 "data_offset": 0, 00:21:10.709 "data_size": 65536 00:21:10.709 }, 00:21:10.709 { 00:21:10.709 "name": "BaseBdev4", 00:21:10.709 "uuid": "efed4e7f-3aed-4fd8-98be-7ae15d396e9b", 00:21:10.709 "is_configured": true, 
00:21:10.709 "data_offset": 0, 00:21:10.709 "data_size": 65536 00:21:10.709 } 00:21:10.709 ] 00:21:10.709 }' 00:21:10.709 21:43:31 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:21:10.710 21:43:31 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:10.710 21:43:31 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:21:10.710 21:43:31 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:21:10.710 21:43:31 -- bdev/bdev_raid.sh@604 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:21:10.710 [2024-12-06 21:43:31.087706] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:21:10.710 [2024-12-06 21:43:31.088074] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:21:10.969 [2024-12-06 21:43:31.317333] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:21:10.969 [2024-12-06 21:43:31.330669] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:21:10.969 [2024-12-06 21:43:31.331099] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:21:10.969 [2024-12-06 21:43:31.337843] bdev_raid.c:2294:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:21:10.969 [2024-12-06 21:43:31.347309] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:10.969 [2024-12-06 21:43:31.365398] bdev_raid.c:1835:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x50d000005930 00:21:10.969 21:43:31 -- bdev/bdev_raid.sh@607 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:21:10.969 21:43:31 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:21:10.969 21:43:31 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:21:10.969 21:43:31 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:21:10.969 21:43:31 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:21:10.969 21:43:31 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:21:10.969 21:43:31 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:21:10.969 21:43:31 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:21:10.969 21:43:31 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:21:10.969 21:43:31 -- bdev/bdev_raid.sh@125 -- # local tmp 00:21:10.969 21:43:31 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:10.969 21:43:31 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:11.228 21:43:31 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:21:11.228 "name": "raid_bdev1", 00:21:11.228 "uuid": "b0937b44-7618-4070-a8cb-f107d4692629", 00:21:11.228 "strip_size_kb": 0, 00:21:11.228 "state": "online", 00:21:11.228 "raid_level": "raid1", 00:21:11.228 "superblock": false, 00:21:11.228 "num_base_bdevs": 4, 00:21:11.228 "num_base_bdevs_discovered": 3, 00:21:11.228 "num_base_bdevs_operational": 3, 00:21:11.228 "base_bdevs_list": [ 00:21:11.228 { 00:21:11.228 "name": null, 00:21:11.228 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:11.228 "is_configured": false, 00:21:11.228 "data_offset": 0, 00:21:11.228 "data_size": 65536 00:21:11.228 }, 00:21:11.228 { 00:21:11.228 "name": "BaseBdev2", 00:21:11.228 "uuid": 
"de9356f4-aa62-4e77-b3e3-2a0f96d3b58c", 00:21:11.228 "is_configured": true, 00:21:11.228 "data_offset": 0, 00:21:11.228 "data_size": 65536 00:21:11.228 }, 00:21:11.228 { 00:21:11.228 "name": "BaseBdev3", 00:21:11.228 "uuid": "0041093f-e9ac-4333-b489-138a2a99dc16", 00:21:11.228 "is_configured": true, 00:21:11.228 "data_offset": 0, 00:21:11.228 "data_size": 65536 00:21:11.228 }, 00:21:11.228 { 00:21:11.228 "name": "BaseBdev4", 00:21:11.228 "uuid": "efed4e7f-3aed-4fd8-98be-7ae15d396e9b", 00:21:11.228 "is_configured": true, 00:21:11.228 "data_offset": 0, 00:21:11.228 "data_size": 65536 00:21:11.228 } 00:21:11.228 ] 00:21:11.228 }' 00:21:11.228 21:43:31 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:21:11.228 21:43:31 -- common/autotest_common.sh@10 -- # set +x 00:21:11.488 21:43:31 -- bdev/bdev_raid.sh@610 -- # verify_raid_bdev_process raid_bdev1 none none 00:21:11.488 21:43:31 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:21:11.488 21:43:31 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:21:11.488 21:43:31 -- bdev/bdev_raid.sh@185 -- # local target=none 00:21:11.488 21:43:31 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:21:11.488 21:43:31 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:11.488 21:43:31 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:11.747 21:43:32 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:21:11.747 "name": "raid_bdev1", 00:21:11.747 "uuid": "b0937b44-7618-4070-a8cb-f107d4692629", 00:21:11.747 "strip_size_kb": 0, 00:21:11.747 "state": "online", 00:21:11.747 "raid_level": "raid1", 00:21:11.747 "superblock": false, 00:21:11.747 "num_base_bdevs": 4, 00:21:11.747 "num_base_bdevs_discovered": 3, 00:21:11.747 "num_base_bdevs_operational": 3, 00:21:11.747 "base_bdevs_list": [ 00:21:11.747 { 00:21:11.747 "name": null, 00:21:11.747 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:11.747 "is_configured": false, 00:21:11.747 "data_offset": 0, 00:21:11.747 "data_size": 65536 00:21:11.747 }, 00:21:11.747 { 00:21:11.747 "name": "BaseBdev2", 00:21:11.747 "uuid": "de9356f4-aa62-4e77-b3e3-2a0f96d3b58c", 00:21:11.747 "is_configured": true, 00:21:11.747 "data_offset": 0, 00:21:11.747 "data_size": 65536 00:21:11.747 }, 00:21:11.747 { 00:21:11.747 "name": "BaseBdev3", 00:21:11.747 "uuid": "0041093f-e9ac-4333-b489-138a2a99dc16", 00:21:11.747 "is_configured": true, 00:21:11.747 "data_offset": 0, 00:21:11.747 "data_size": 65536 00:21:11.747 }, 00:21:11.747 { 00:21:11.747 "name": "BaseBdev4", 00:21:11.747 "uuid": "efed4e7f-3aed-4fd8-98be-7ae15d396e9b", 00:21:11.747 "is_configured": true, 00:21:11.747 "data_offset": 0, 00:21:11.747 "data_size": 65536 00:21:11.747 } 00:21:11.747 ] 00:21:11.747 }' 00:21:11.747 21:43:32 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:21:11.747 21:43:32 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:21:11.748 21:43:32 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:21:11.748 21:43:32 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:21:11.748 21:43:32 -- bdev/bdev_raid.sh@613 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:21:12.007 [2024-12-06 21:43:32.420147] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:21:12.007 [2024-12-06 21:43:32.420201] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:21:12.007 21:43:32 -- 
bdev/bdev_raid.sh@614 -- # sleep 1 00:21:12.007 [2024-12-06 21:43:32.481675] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000005ad0 00:21:12.007 [2024-12-06 21:43:32.483694] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:21:12.266 [2024-12-06 21:43:32.613921] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:21:12.266 [2024-12-06 21:43:32.615099] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:21:12.525 [2024-12-06 21:43:32.824901] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:21:12.525 [2024-12-06 21:43:32.825513] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:21:12.785 [2024-12-06 21:43:33.160142] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:21:12.785 [2024-12-06 21:43:33.160585] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:21:13.044 [2024-12-06 21:43:33.383912] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:21:13.044 [2024-12-06 21:43:33.384563] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:21:13.044 21:43:33 -- bdev/bdev_raid.sh@615 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:13.044 21:43:33 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:21:13.044 21:43:33 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:21:13.044 21:43:33 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:21:13.044 21:43:33 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:21:13.044 21:43:33 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:13.044 21:43:33 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:13.303 21:43:33 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:21:13.303 "name": "raid_bdev1", 00:21:13.303 "uuid": "b0937b44-7618-4070-a8cb-f107d4692629", 00:21:13.303 "strip_size_kb": 0, 00:21:13.303 "state": "online", 00:21:13.303 "raid_level": "raid1", 00:21:13.303 "superblock": false, 00:21:13.303 "num_base_bdevs": 4, 00:21:13.303 "num_base_bdevs_discovered": 4, 00:21:13.303 "num_base_bdevs_operational": 4, 00:21:13.303 "process": { 00:21:13.303 "type": "rebuild", 00:21:13.303 "target": "spare", 00:21:13.303 "progress": { 00:21:13.303 "blocks": 12288, 00:21:13.303 "percent": 18 00:21:13.303 } 00:21:13.303 }, 00:21:13.303 "base_bdevs_list": [ 00:21:13.303 { 00:21:13.303 "name": "spare", 00:21:13.303 "uuid": "7499ca33-031c-5b0c-a07c-703ec6e15262", 00:21:13.303 "is_configured": true, 00:21:13.303 "data_offset": 0, 00:21:13.303 "data_size": 65536 00:21:13.303 }, 00:21:13.303 { 00:21:13.303 "name": "BaseBdev2", 00:21:13.303 "uuid": "de9356f4-aa62-4e77-b3e3-2a0f96d3b58c", 00:21:13.303 "is_configured": true, 00:21:13.303 "data_offset": 0, 00:21:13.303 "data_size": 65536 00:21:13.303 }, 00:21:13.303 { 00:21:13.303 "name": "BaseBdev3", 00:21:13.303 "uuid": "0041093f-e9ac-4333-b489-138a2a99dc16", 00:21:13.303 "is_configured": true, 00:21:13.303 "data_offset": 0, 00:21:13.303 "data_size": 65536 
00:21:13.303 }, 00:21:13.303 { 00:21:13.303 "name": "BaseBdev4", 00:21:13.303 "uuid": "efed4e7f-3aed-4fd8-98be-7ae15d396e9b", 00:21:13.303 "is_configured": true, 00:21:13.303 "data_offset": 0, 00:21:13.303 "data_size": 65536 00:21:13.303 } 00:21:13.303 ] 00:21:13.303 }' 00:21:13.303 21:43:33 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:21:13.303 21:43:33 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:13.303 21:43:33 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:21:13.303 21:43:33 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:21:13.303 21:43:33 -- bdev/bdev_raid.sh@617 -- # '[' false = true ']' 00:21:13.303 21:43:33 -- bdev/bdev_raid.sh@642 -- # local num_base_bdevs_operational=4 00:21:13.303 21:43:33 -- bdev/bdev_raid.sh@644 -- # '[' raid1 = raid1 ']' 00:21:13.303 21:43:33 -- bdev/bdev_raid.sh@644 -- # '[' 4 -gt 2 ']' 00:21:13.303 21:43:33 -- bdev/bdev_raid.sh@646 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:21:13.303 [2024-12-06 21:43:33.729179] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:21:13.562 [2024-12-06 21:43:33.885049] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:21:13.562 [2024-12-06 21:43:33.954057] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:21:13.562 [2024-12-06 21:43:34.006847] bdev_raid.c:1835:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x50d000005930 00:21:13.562 [2024-12-06 21:43:34.006879] bdev_raid.c:1835:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x50d000005ad0 00:21:13.562 [2024-12-06 21:43:34.015011] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:21:13.562 21:43:34 -- bdev/bdev_raid.sh@649 -- # base_bdevs[1]= 00:21:13.562 21:43:34 -- bdev/bdev_raid.sh@650 -- # (( num_base_bdevs_operational-- )) 00:21:13.562 21:43:34 -- bdev/bdev_raid.sh@653 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:13.562 21:43:34 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:21:13.562 21:43:34 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:21:13.562 21:43:34 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:21:13.562 21:43:34 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:21:13.562 21:43:34 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:13.562 21:43:34 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:13.822 [2024-12-06 21:43:34.256995] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:21:13.822 21:43:34 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:21:13.822 "name": "raid_bdev1", 00:21:13.822 "uuid": "b0937b44-7618-4070-a8cb-f107d4692629", 00:21:13.822 "strip_size_kb": 0, 00:21:13.822 "state": "online", 00:21:13.822 "raid_level": "raid1", 00:21:13.822 "superblock": false, 00:21:13.822 "num_base_bdevs": 4, 00:21:13.822 "num_base_bdevs_discovered": 3, 00:21:13.822 "num_base_bdevs_operational": 3, 00:21:13.822 "process": { 00:21:13.822 "type": "rebuild", 00:21:13.822 "target": "spare", 00:21:13.822 "progress": { 00:21:13.822 "blocks": 18432, 00:21:13.822 "percent": 28 00:21:13.822 } 00:21:13.822 }, 00:21:13.822 
"base_bdevs_list": [ 00:21:13.822 { 00:21:13.822 "name": "spare", 00:21:13.822 "uuid": "7499ca33-031c-5b0c-a07c-703ec6e15262", 00:21:13.822 "is_configured": true, 00:21:13.822 "data_offset": 0, 00:21:13.822 "data_size": 65536 00:21:13.822 }, 00:21:13.822 { 00:21:13.822 "name": null, 00:21:13.822 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:13.822 "is_configured": false, 00:21:13.822 "data_offset": 0, 00:21:13.822 "data_size": 65536 00:21:13.822 }, 00:21:13.822 { 00:21:13.822 "name": "BaseBdev3", 00:21:13.822 "uuid": "0041093f-e9ac-4333-b489-138a2a99dc16", 00:21:13.822 "is_configured": true, 00:21:13.822 "data_offset": 0, 00:21:13.822 "data_size": 65536 00:21:13.822 }, 00:21:13.822 { 00:21:13.822 "name": "BaseBdev4", 00:21:13.822 "uuid": "efed4e7f-3aed-4fd8-98be-7ae15d396e9b", 00:21:13.822 "is_configured": true, 00:21:13.822 "data_offset": 0, 00:21:13.822 "data_size": 65536 00:21:13.822 } 00:21:13.822 ] 00:21:13.822 }' 00:21:13.822 21:43:34 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:21:13.822 21:43:34 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:13.822 21:43:34 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:21:13.822 21:43:34 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:21:13.822 21:43:34 -- bdev/bdev_raid.sh@657 -- # local timeout=471 00:21:13.822 21:43:34 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:21:13.822 21:43:34 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:13.822 21:43:34 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:21:13.822 21:43:34 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:21:13.822 21:43:34 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:21:13.822 21:43:34 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:21:13.822 21:43:34 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:13.822 21:43:34 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:14.081 [2024-12-06 21:43:34.364054] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:21:14.081 21:43:34 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:21:14.081 "name": "raid_bdev1", 00:21:14.081 "uuid": "b0937b44-7618-4070-a8cb-f107d4692629", 00:21:14.081 "strip_size_kb": 0, 00:21:14.081 "state": "online", 00:21:14.081 "raid_level": "raid1", 00:21:14.081 "superblock": false, 00:21:14.081 "num_base_bdevs": 4, 00:21:14.081 "num_base_bdevs_discovered": 3, 00:21:14.081 "num_base_bdevs_operational": 3, 00:21:14.081 "process": { 00:21:14.081 "type": "rebuild", 00:21:14.081 "target": "spare", 00:21:14.081 "progress": { 00:21:14.081 "blocks": 22528, 00:21:14.081 "percent": 34 00:21:14.081 } 00:21:14.081 }, 00:21:14.081 "base_bdevs_list": [ 00:21:14.081 { 00:21:14.081 "name": "spare", 00:21:14.081 "uuid": "7499ca33-031c-5b0c-a07c-703ec6e15262", 00:21:14.081 "is_configured": true, 00:21:14.081 "data_offset": 0, 00:21:14.081 "data_size": 65536 00:21:14.081 }, 00:21:14.081 { 00:21:14.081 "name": null, 00:21:14.081 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:14.081 "is_configured": false, 00:21:14.081 "data_offset": 0, 00:21:14.081 "data_size": 65536 00:21:14.081 }, 00:21:14.081 { 00:21:14.081 "name": "BaseBdev3", 00:21:14.081 "uuid": "0041093f-e9ac-4333-b489-138a2a99dc16", 00:21:14.081 "is_configured": true, 00:21:14.081 "data_offset": 0, 00:21:14.081 "data_size": 65536 
00:21:14.081 }, 00:21:14.081 { 00:21:14.081 "name": "BaseBdev4", 00:21:14.081 "uuid": "efed4e7f-3aed-4fd8-98be-7ae15d396e9b", 00:21:14.081 "is_configured": true, 00:21:14.081 "data_offset": 0, 00:21:14.081 "data_size": 65536 00:21:14.081 } 00:21:14.081 ] 00:21:14.081 }' 00:21:14.081 21:43:34 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:21:14.081 21:43:34 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:14.081 21:43:34 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:21:14.081 21:43:34 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:21:14.081 21:43:34 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:21:14.340 [2024-12-06 21:43:34.717539] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720 00:21:14.340 [2024-12-06 21:43:34.718193] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720 00:21:14.599 [2024-12-06 21:43:34.939104] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:21:14.857 [2024-12-06 21:43:35.274884] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 32768 offset_begin: 30720 offset_end: 36864 00:21:15.116 [2024-12-06 21:43:35.497661] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864 00:21:15.116 21:43:35 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:21:15.116 21:43:35 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:15.116 21:43:35 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:21:15.116 21:43:35 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:21:15.116 21:43:35 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:21:15.116 21:43:35 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:21:15.116 21:43:35 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:15.116 21:43:35 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:15.374 21:43:35 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:21:15.374 "name": "raid_bdev1", 00:21:15.374 "uuid": "b0937b44-7618-4070-a8cb-f107d4692629", 00:21:15.374 "strip_size_kb": 0, 00:21:15.374 "state": "online", 00:21:15.374 "raid_level": "raid1", 00:21:15.374 "superblock": false, 00:21:15.374 "num_base_bdevs": 4, 00:21:15.374 "num_base_bdevs_discovered": 3, 00:21:15.374 "num_base_bdevs_operational": 3, 00:21:15.374 "process": { 00:21:15.374 "type": "rebuild", 00:21:15.374 "target": "spare", 00:21:15.374 "progress": { 00:21:15.374 "blocks": 36864, 00:21:15.374 "percent": 56 00:21:15.374 } 00:21:15.374 }, 00:21:15.374 "base_bdevs_list": [ 00:21:15.374 { 00:21:15.374 "name": "spare", 00:21:15.374 "uuid": "7499ca33-031c-5b0c-a07c-703ec6e15262", 00:21:15.374 "is_configured": true, 00:21:15.374 "data_offset": 0, 00:21:15.374 "data_size": 65536 00:21:15.374 }, 00:21:15.374 { 00:21:15.374 "name": null, 00:21:15.374 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:15.374 "is_configured": false, 00:21:15.374 "data_offset": 0, 00:21:15.374 "data_size": 65536 00:21:15.374 }, 00:21:15.374 { 00:21:15.374 "name": "BaseBdev3", 00:21:15.374 "uuid": "0041093f-e9ac-4333-b489-138a2a99dc16", 00:21:15.374 "is_configured": true, 00:21:15.374 "data_offset": 0, 00:21:15.374 "data_size": 65536 00:21:15.374 }, 00:21:15.374 
{ 00:21:15.374 "name": "BaseBdev4", 00:21:15.374 "uuid": "efed4e7f-3aed-4fd8-98be-7ae15d396e9b", 00:21:15.374 "is_configured": true, 00:21:15.374 "data_offset": 0, 00:21:15.374 "data_size": 65536 00:21:15.374 } 00:21:15.374 ] 00:21:15.374 }' 00:21:15.374 [2024-12-06 21:43:35.828544] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 38912 offset_begin: 36864 offset_end: 43008 00:21:15.374 21:43:35 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:21:15.374 21:43:35 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:15.374 21:43:35 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:21:15.374 21:43:35 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:21:15.374 21:43:35 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:21:15.632 [2024-12-06 21:43:35.944811] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 40960 offset_begin: 36864 offset_end: 43008 00:21:16.200 [2024-12-06 21:43:36.577847] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 51200 offset_begin: 49152 offset_end: 55296 00:21:16.460 [2024-12-06 21:43:36.806573] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 53248 offset_begin: 49152 offset_end: 55296 00:21:16.460 [2024-12-06 21:43:36.807131] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 53248 offset_begin: 49152 offset_end: 55296 00:21:16.460 21:43:36 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:21:16.460 21:43:36 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:16.460 21:43:36 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:21:16.460 21:43:36 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:21:16.460 21:43:36 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:21:16.460 21:43:36 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:21:16.460 21:43:36 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:16.460 21:43:36 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:16.720 21:43:37 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:21:16.720 "name": "raid_bdev1", 00:21:16.720 "uuid": "b0937b44-7618-4070-a8cb-f107d4692629", 00:21:16.720 "strip_size_kb": 0, 00:21:16.720 "state": "online", 00:21:16.720 "raid_level": "raid1", 00:21:16.720 "superblock": false, 00:21:16.720 "num_base_bdevs": 4, 00:21:16.720 "num_base_bdevs_discovered": 3, 00:21:16.720 "num_base_bdevs_operational": 3, 00:21:16.720 "process": { 00:21:16.720 "type": "rebuild", 00:21:16.720 "target": "spare", 00:21:16.720 "progress": { 00:21:16.720 "blocks": 55296, 00:21:16.720 "percent": 84 00:21:16.720 } 00:21:16.720 }, 00:21:16.720 "base_bdevs_list": [ 00:21:16.720 { 00:21:16.720 "name": "spare", 00:21:16.720 "uuid": "7499ca33-031c-5b0c-a07c-703ec6e15262", 00:21:16.720 "is_configured": true, 00:21:16.720 "data_offset": 0, 00:21:16.720 "data_size": 65536 00:21:16.720 }, 00:21:16.720 { 00:21:16.720 "name": null, 00:21:16.720 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:16.720 "is_configured": false, 00:21:16.720 "data_offset": 0, 00:21:16.720 "data_size": 65536 00:21:16.720 }, 00:21:16.720 { 00:21:16.720 "name": "BaseBdev3", 00:21:16.720 "uuid": "0041093f-e9ac-4333-b489-138a2a99dc16", 00:21:16.720 "is_configured": true, 00:21:16.720 "data_offset": 0, 00:21:16.720 "data_size": 65536 00:21:16.720 }, 00:21:16.720 { 00:21:16.720 "name": 
"BaseBdev4", 00:21:16.720 "uuid": "efed4e7f-3aed-4fd8-98be-7ae15d396e9b", 00:21:16.720 "is_configured": true, 00:21:16.720 "data_offset": 0, 00:21:16.720 "data_size": 65536 00:21:16.720 } 00:21:16.720 ] 00:21:16.720 }' 00:21:16.720 21:43:37 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:21:16.720 21:43:37 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:16.720 21:43:37 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:21:16.720 21:43:37 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:21:16.720 21:43:37 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:21:16.720 [2024-12-06 21:43:37.139634] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 57344 offset_begin: 55296 offset_end: 61440 00:21:17.289 [2024-12-06 21:43:37.680475] bdev_raid.c:2568:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:21:17.289 [2024-12-06 21:43:37.780526] bdev_raid.c:2285:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:21:17.548 [2024-12-06 21:43:37.790452] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:17.807 21:43:38 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:21:17.807 21:43:38 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:17.807 21:43:38 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:21:17.807 21:43:38 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:21:17.807 21:43:38 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:21:17.807 21:43:38 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:21:17.807 21:43:38 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:17.807 21:43:38 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:18.067 21:43:38 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:21:18.067 "name": "raid_bdev1", 00:21:18.067 "uuid": "b0937b44-7618-4070-a8cb-f107d4692629", 00:21:18.067 "strip_size_kb": 0, 00:21:18.067 "state": "online", 00:21:18.067 "raid_level": "raid1", 00:21:18.067 "superblock": false, 00:21:18.067 "num_base_bdevs": 4, 00:21:18.067 "num_base_bdevs_discovered": 3, 00:21:18.067 "num_base_bdevs_operational": 3, 00:21:18.067 "base_bdevs_list": [ 00:21:18.067 { 00:21:18.067 "name": "spare", 00:21:18.067 "uuid": "7499ca33-031c-5b0c-a07c-703ec6e15262", 00:21:18.067 "is_configured": true, 00:21:18.067 "data_offset": 0, 00:21:18.067 "data_size": 65536 00:21:18.067 }, 00:21:18.067 { 00:21:18.067 "name": null, 00:21:18.067 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:18.067 "is_configured": false, 00:21:18.067 "data_offset": 0, 00:21:18.067 "data_size": 65536 00:21:18.067 }, 00:21:18.067 { 00:21:18.067 "name": "BaseBdev3", 00:21:18.067 "uuid": "0041093f-e9ac-4333-b489-138a2a99dc16", 00:21:18.067 "is_configured": true, 00:21:18.067 "data_offset": 0, 00:21:18.067 "data_size": 65536 00:21:18.067 }, 00:21:18.067 { 00:21:18.067 "name": "BaseBdev4", 00:21:18.067 "uuid": "efed4e7f-3aed-4fd8-98be-7ae15d396e9b", 00:21:18.067 "is_configured": true, 00:21:18.067 "data_offset": 0, 00:21:18.067 "data_size": 65536 00:21:18.067 } 00:21:18.067 ] 00:21:18.067 }' 00:21:18.067 21:43:38 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:21:18.067 21:43:38 -- bdev/bdev_raid.sh@190 -- # [[ none == \r\e\b\u\i\l\d ]] 00:21:18.067 21:43:38 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:21:18.067 21:43:38 -- 
bdev/bdev_raid.sh@191 -- # [[ none == \s\p\a\r\e ]] 00:21:18.067 21:43:38 -- bdev/bdev_raid.sh@660 -- # break 00:21:18.067 21:43:38 -- bdev/bdev_raid.sh@666 -- # verify_raid_bdev_process raid_bdev1 none none 00:21:18.067 21:43:38 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:21:18.067 21:43:38 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:21:18.067 21:43:38 -- bdev/bdev_raid.sh@185 -- # local target=none 00:21:18.067 21:43:38 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:21:18.067 21:43:38 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:18.067 21:43:38 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:18.327 21:43:38 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:21:18.327 "name": "raid_bdev1", 00:21:18.327 "uuid": "b0937b44-7618-4070-a8cb-f107d4692629", 00:21:18.327 "strip_size_kb": 0, 00:21:18.327 "state": "online", 00:21:18.327 "raid_level": "raid1", 00:21:18.327 "superblock": false, 00:21:18.327 "num_base_bdevs": 4, 00:21:18.327 "num_base_bdevs_discovered": 3, 00:21:18.327 "num_base_bdevs_operational": 3, 00:21:18.327 "base_bdevs_list": [ 00:21:18.327 { 00:21:18.327 "name": "spare", 00:21:18.327 "uuid": "7499ca33-031c-5b0c-a07c-703ec6e15262", 00:21:18.327 "is_configured": true, 00:21:18.327 "data_offset": 0, 00:21:18.327 "data_size": 65536 00:21:18.327 }, 00:21:18.327 { 00:21:18.327 "name": null, 00:21:18.327 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:18.327 "is_configured": false, 00:21:18.327 "data_offset": 0, 00:21:18.327 "data_size": 65536 00:21:18.327 }, 00:21:18.327 { 00:21:18.327 "name": "BaseBdev3", 00:21:18.327 "uuid": "0041093f-e9ac-4333-b489-138a2a99dc16", 00:21:18.327 "is_configured": true, 00:21:18.327 "data_offset": 0, 00:21:18.327 "data_size": 65536 00:21:18.327 }, 00:21:18.327 { 00:21:18.327 "name": "BaseBdev4", 00:21:18.327 "uuid": "efed4e7f-3aed-4fd8-98be-7ae15d396e9b", 00:21:18.327 "is_configured": true, 00:21:18.327 "data_offset": 0, 00:21:18.327 "data_size": 65536 00:21:18.327 } 00:21:18.327 ] 00:21:18.327 }' 00:21:18.327 21:43:38 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:21:18.327 21:43:38 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:21:18.327 21:43:38 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:21:18.327 21:43:38 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:21:18.327 21:43:38 -- bdev/bdev_raid.sh@667 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:21:18.327 21:43:38 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:21:18.327 21:43:38 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:21:18.327 21:43:38 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:21:18.327 21:43:38 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:21:18.327 21:43:38 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:21:18.327 21:43:38 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:21:18.327 21:43:38 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:21:18.327 21:43:38 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:21:18.327 21:43:38 -- bdev/bdev_raid.sh@125 -- # local tmp 00:21:18.327 21:43:38 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:18.327 21:43:38 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:18.586 21:43:38 -- bdev/bdev_raid.sh@127 -- # 
raid_bdev_info='{ 00:21:18.586 "name": "raid_bdev1", 00:21:18.586 "uuid": "b0937b44-7618-4070-a8cb-f107d4692629", 00:21:18.586 "strip_size_kb": 0, 00:21:18.586 "state": "online", 00:21:18.586 "raid_level": "raid1", 00:21:18.586 "superblock": false, 00:21:18.586 "num_base_bdevs": 4, 00:21:18.586 "num_base_bdevs_discovered": 3, 00:21:18.586 "num_base_bdevs_operational": 3, 00:21:18.586 "base_bdevs_list": [ 00:21:18.586 { 00:21:18.586 "name": "spare", 00:21:18.586 "uuid": "7499ca33-031c-5b0c-a07c-703ec6e15262", 00:21:18.586 "is_configured": true, 00:21:18.586 "data_offset": 0, 00:21:18.587 "data_size": 65536 00:21:18.587 }, 00:21:18.587 { 00:21:18.587 "name": null, 00:21:18.587 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:18.587 "is_configured": false, 00:21:18.587 "data_offset": 0, 00:21:18.587 "data_size": 65536 00:21:18.587 }, 00:21:18.587 { 00:21:18.587 "name": "BaseBdev3", 00:21:18.587 "uuid": "0041093f-e9ac-4333-b489-138a2a99dc16", 00:21:18.587 "is_configured": true, 00:21:18.587 "data_offset": 0, 00:21:18.587 "data_size": 65536 00:21:18.587 }, 00:21:18.587 { 00:21:18.587 "name": "BaseBdev4", 00:21:18.587 "uuid": "efed4e7f-3aed-4fd8-98be-7ae15d396e9b", 00:21:18.587 "is_configured": true, 00:21:18.587 "data_offset": 0, 00:21:18.587 "data_size": 65536 00:21:18.587 } 00:21:18.587 ] 00:21:18.587 }' 00:21:18.587 21:43:38 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:21:18.587 21:43:38 -- common/autotest_common.sh@10 -- # set +x 00:21:18.847 21:43:39 -- bdev/bdev_raid.sh@670 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:21:18.847 [2024-12-06 21:43:39.341144] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:21:18.847 [2024-12-06 21:43:39.341181] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:21:19.106 00:21:19.106 Latency(us) 00:21:19.106 [2024-12-06T21:43:39.603Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:19.106 [2024-12-06T21:43:39.603Z] Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:21:19.106 raid_bdev1 : 10.47 96.03 288.08 0.00 0.00 14244.36 251.35 113913.48 00:21:19.106 [2024-12-06T21:43:39.603Z] =================================================================================================================== 00:21:19.106 [2024-12-06T21:43:39.603Z] Total : 96.03 288.08 0.00 0.00 14244.36 251.35 113913.48 00:21:19.106 [2024-12-06 21:43:39.436864] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:19.106 [2024-12-06 21:43:39.437049] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:19.106 0 00:21:19.106 [2024-12-06 21:43:39.437177] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:19.106 [2024-12-06 21:43:39.437354] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000008d80 name raid_bdev1, state offline 00:21:19.106 21:43:39 -- bdev/bdev_raid.sh@671 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:19.106 21:43:39 -- bdev/bdev_raid.sh@671 -- # jq length 00:21:19.365 21:43:39 -- bdev/bdev_raid.sh@671 -- # [[ 0 == 0 ]] 00:21:19.365 21:43:39 -- bdev/bdev_raid.sh@673 -- # '[' true = true ']' 00:21:19.365 21:43:39 -- bdev/bdev_raid.sh@675 -- # nbd_start_disks /var/tmp/spdk-raid.sock spare /dev/nbd0 00:21:19.365 21:43:39 -- bdev/nbd_common.sh@9 -- # local 
rpc_server=/var/tmp/spdk-raid.sock 00:21:19.365 21:43:39 -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:21:19.365 21:43:39 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:21:19.365 21:43:39 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:21:19.365 21:43:39 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:21:19.365 21:43:39 -- bdev/nbd_common.sh@12 -- # local i 00:21:19.365 21:43:39 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:21:19.365 21:43:39 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:21:19.365 21:43:39 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd0 00:21:19.365 /dev/nbd0 00:21:19.365 21:43:39 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:21:19.365 21:43:39 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:21:19.365 21:43:39 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:21:19.365 21:43:39 -- common/autotest_common.sh@867 -- # local i 00:21:19.365 21:43:39 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:21:19.365 21:43:39 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:21:19.365 21:43:39 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:21:19.365 21:43:39 -- common/autotest_common.sh@871 -- # break 00:21:19.365 21:43:39 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:21:19.365 21:43:39 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:21:19.365 21:43:39 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:21:19.365 1+0 records in 00:21:19.365 1+0 records out 00:21:19.365 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000423442 s, 9.7 MB/s 00:21:19.365 21:43:39 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:19.365 21:43:39 -- common/autotest_common.sh@884 -- # size=4096 00:21:19.365 21:43:39 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:19.365 21:43:39 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:21:19.365 21:43:39 -- common/autotest_common.sh@887 -- # return 0 00:21:19.365 21:43:39 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:21:19.365 21:43:39 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:21:19.365 21:43:39 -- bdev/bdev_raid.sh@676 -- # for bdev in "${base_bdevs[@]:1}" 00:21:19.365 21:43:39 -- bdev/bdev_raid.sh@677 -- # '[' -z '' ']' 00:21:19.365 21:43:39 -- bdev/bdev_raid.sh@678 -- # continue 00:21:19.365 21:43:39 -- bdev/bdev_raid.sh@676 -- # for bdev in "${base_bdevs[@]:1}" 00:21:19.365 21:43:39 -- bdev/bdev_raid.sh@677 -- # '[' -z BaseBdev3 ']' 00:21:19.365 21:43:39 -- bdev/bdev_raid.sh@680 -- # nbd_start_disks /var/tmp/spdk-raid.sock BaseBdev3 /dev/nbd1 00:21:19.365 21:43:39 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:21:19.365 21:43:39 -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev3') 00:21:19.365 21:43:39 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:21:19.365 21:43:39 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:21:19.365 21:43:39 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:21:19.365 21:43:39 -- bdev/nbd_common.sh@12 -- # local i 00:21:19.365 21:43:39 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:21:19.365 21:43:39 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:21:19.365 21:43:39 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev3 /dev/nbd1 00:21:19.624 /dev/nbd1 00:21:19.624 21:43:40 -- 
bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:21:19.624 21:43:40 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:21:19.624 21:43:40 -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:21:19.624 21:43:40 -- common/autotest_common.sh@867 -- # local i 00:21:19.624 21:43:40 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:21:19.624 21:43:40 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:21:19.624 21:43:40 -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:21:19.624 21:43:40 -- common/autotest_common.sh@871 -- # break 00:21:19.624 21:43:40 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:21:19.624 21:43:40 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:21:19.624 21:43:40 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:21:19.624 1+0 records in 00:21:19.624 1+0 records out 00:21:19.624 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000248133 s, 16.5 MB/s 00:21:19.624 21:43:40 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:19.624 21:43:40 -- common/autotest_common.sh@884 -- # size=4096 00:21:19.624 21:43:40 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:19.624 21:43:40 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:21:19.624 21:43:40 -- common/autotest_common.sh@887 -- # return 0 00:21:19.624 21:43:40 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:21:19.624 21:43:40 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:21:19.624 21:43:40 -- bdev/bdev_raid.sh@681 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:21:19.883 21:43:40 -- bdev/bdev_raid.sh@682 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd1 00:21:19.883 21:43:40 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:21:19.883 21:43:40 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:21:19.883 21:43:40 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:21:19.883 21:43:40 -- bdev/nbd_common.sh@51 -- # local i 00:21:19.883 21:43:40 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:21:19.883 21:43:40 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:21:20.142 21:43:40 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:21:20.142 21:43:40 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:21:20.142 21:43:40 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:21:20.142 21:43:40 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:21:20.142 21:43:40 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:21:20.142 21:43:40 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:21:20.142 21:43:40 -- bdev/nbd_common.sh@41 -- # break 00:21:20.142 21:43:40 -- bdev/nbd_common.sh@45 -- # return 0 00:21:20.142 21:43:40 -- bdev/bdev_raid.sh@676 -- # for bdev in "${base_bdevs[@]:1}" 00:21:20.142 21:43:40 -- bdev/bdev_raid.sh@677 -- # '[' -z BaseBdev4 ']' 00:21:20.142 21:43:40 -- bdev/bdev_raid.sh@680 -- # nbd_start_disks /var/tmp/spdk-raid.sock BaseBdev4 /dev/nbd1 00:21:20.142 21:43:40 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:21:20.142 21:43:40 -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev4') 00:21:20.142 21:43:40 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:21:20.142 21:43:40 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:21:20.142 21:43:40 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:21:20.142 21:43:40 -- bdev/nbd_common.sh@12 -- # local i 00:21:20.142 
21:43:40 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:21:20.142 21:43:40 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:21:20.142 21:43:40 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev4 /dev/nbd1 00:21:20.401 /dev/nbd1 00:21:20.401 21:43:40 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:21:20.401 21:43:40 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:21:20.401 21:43:40 -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:21:20.401 21:43:40 -- common/autotest_common.sh@867 -- # local i 00:21:20.401 21:43:40 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:21:20.401 21:43:40 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:21:20.401 21:43:40 -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:21:20.401 21:43:40 -- common/autotest_common.sh@871 -- # break 00:21:20.401 21:43:40 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:21:20.401 21:43:40 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:21:20.401 21:43:40 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:21:20.401 1+0 records in 00:21:20.401 1+0 records out 00:21:20.401 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000283057 s, 14.5 MB/s 00:21:20.401 21:43:40 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:20.401 21:43:40 -- common/autotest_common.sh@884 -- # size=4096 00:21:20.401 21:43:40 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:20.401 21:43:40 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:21:20.401 21:43:40 -- common/autotest_common.sh@887 -- # return 0 00:21:20.401 21:43:40 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:21:20.401 21:43:40 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:21:20.401 21:43:40 -- bdev/bdev_raid.sh@681 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:21:20.663 21:43:40 -- bdev/bdev_raid.sh@682 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd1 00:21:20.663 21:43:40 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:21:20.663 21:43:40 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:21:20.663 21:43:40 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:21:20.663 21:43:40 -- bdev/nbd_common.sh@51 -- # local i 00:21:20.663 21:43:40 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:21:20.663 21:43:40 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:21:20.663 21:43:41 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:21:20.663 21:43:41 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:21:20.663 21:43:41 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:21:20.663 21:43:41 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:21:20.663 21:43:41 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:21:20.663 21:43:41 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:21:20.663 21:43:41 -- bdev/nbd_common.sh@41 -- # break 00:21:20.663 21:43:41 -- bdev/nbd_common.sh@45 -- # return 0 00:21:20.663 21:43:41 -- bdev/bdev_raid.sh@684 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:21:20.663 21:43:41 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:21:20.663 21:43:41 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:21:20.663 21:43:41 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:21:20.663 21:43:41 -- bdev/nbd_common.sh@51 -- # local i 
00:21:20.663 21:43:41 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:21:20.663 21:43:41 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:21:20.920 21:43:41 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:21:20.920 21:43:41 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:21:20.920 21:43:41 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:21:20.920 21:43:41 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:21:20.920 21:43:41 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:21:20.920 21:43:41 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:21:20.920 21:43:41 -- bdev/nbd_common.sh@41 -- # break 00:21:20.920 21:43:41 -- bdev/nbd_common.sh@45 -- # return 0 00:21:20.920 21:43:41 -- bdev/bdev_raid.sh@692 -- # '[' false = true ']' 00:21:20.920 21:43:41 -- bdev/bdev_raid.sh@709 -- # killprocess 81110 00:21:20.920 21:43:41 -- common/autotest_common.sh@936 -- # '[' -z 81110 ']' 00:21:20.920 21:43:41 -- common/autotest_common.sh@940 -- # kill -0 81110 00:21:20.920 21:43:41 -- common/autotest_common.sh@941 -- # uname 00:21:20.920 21:43:41 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:21:20.920 21:43:41 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 81110 00:21:20.920 killing process with pid 81110 00:21:20.920 Received shutdown signal, test time was about 12.374121 seconds 00:21:20.920 00:21:20.920 Latency(us) 00:21:20.920 [2024-12-06T21:43:41.417Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:20.920 [2024-12-06T21:43:41.417Z] =================================================================================================================== 00:21:20.920 [2024-12-06T21:43:41.417Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:20.920 21:43:41 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:21:20.920 21:43:41 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:21:20.920 21:43:41 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 81110' 00:21:20.920 21:43:41 -- common/autotest_common.sh@955 -- # kill 81110 00:21:20.920 [2024-12-06 21:43:41.329193] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:21:20.920 21:43:41 -- common/autotest_common.sh@960 -- # wait 81110 00:21:21.178 [2024-12-06 21:43:41.602372] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:21:22.113 ************************************ 00:21:22.113 END TEST raid_rebuild_test_io 00:21:22.113 ************************************ 00:21:22.113 21:43:42 -- bdev/bdev_raid.sh@711 -- # return 0 00:21:22.113 00:21:22.113 real 0m17.255s 00:21:22.113 user 0m24.762s 00:21:22.113 sys 0m2.238s 00:21:22.113 21:43:42 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:21:22.113 21:43:42 -- common/autotest_common.sh@10 -- # set +x 00:21:22.113 21:43:42 -- bdev/bdev_raid.sh@738 -- # run_test raid_rebuild_test_sb_io raid_rebuild_test raid1 4 true true 00:21:22.113 21:43:42 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:21:22.113 21:43:42 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:21:22.113 21:43:42 -- common/autotest_common.sh@10 -- # set +x 00:21:22.371 ************************************ 00:21:22.371 START TEST raid_rebuild_test_sb_io 00:21:22.371 ************************************ 00:21:22.371 21:43:42 -- common/autotest_common.sh@1114 -- # raid_rebuild_test raid1 4 true true 00:21:22.371 21:43:42 -- bdev/bdev_raid.sh@517 -- # local raid_level=raid1 00:21:22.371 
21:43:42 -- bdev/bdev_raid.sh@518 -- # local num_base_bdevs=4 00:21:22.371 21:43:42 -- bdev/bdev_raid.sh@519 -- # local superblock=true 00:21:22.371 21:43:42 -- bdev/bdev_raid.sh@520 -- # local background_io=true 00:21:22.371 21:43:42 -- bdev/bdev_raid.sh@521 -- # (( i = 1 )) 00:21:22.371 21:43:42 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:21:22.371 21:43:42 -- bdev/bdev_raid.sh@523 -- # echo BaseBdev1 00:21:22.371 21:43:42 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:21:22.371 21:43:42 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:21:22.371 21:43:42 -- bdev/bdev_raid.sh@523 -- # echo BaseBdev2 00:21:22.371 21:43:42 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:21:22.371 21:43:42 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:21:22.371 21:43:42 -- bdev/bdev_raid.sh@523 -- # echo BaseBdev3 00:21:22.371 21:43:42 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:21:22.371 21:43:42 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:21:22.371 21:43:42 -- bdev/bdev_raid.sh@523 -- # echo BaseBdev4 00:21:22.371 21:43:42 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:21:22.371 21:43:42 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:21:22.371 21:43:42 -- bdev/bdev_raid.sh@521 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:21:22.371 21:43:42 -- bdev/bdev_raid.sh@521 -- # local base_bdevs 00:21:22.371 21:43:42 -- bdev/bdev_raid.sh@522 -- # local raid_bdev_name=raid_bdev1 00:21:22.371 21:43:42 -- bdev/bdev_raid.sh@523 -- # local strip_size 00:21:22.371 21:43:42 -- bdev/bdev_raid.sh@524 -- # local create_arg 00:21:22.371 21:43:42 -- bdev/bdev_raid.sh@525 -- # local raid_bdev_size 00:21:22.371 21:43:42 -- bdev/bdev_raid.sh@526 -- # local data_offset 00:21:22.371 21:43:42 -- bdev/bdev_raid.sh@528 -- # '[' raid1 '!=' raid1 ']' 00:21:22.371 21:43:42 -- bdev/bdev_raid.sh@536 -- # strip_size=0 00:21:22.371 21:43:42 -- bdev/bdev_raid.sh@539 -- # '[' true = true ']' 00:21:22.371 21:43:42 -- bdev/bdev_raid.sh@540 -- # create_arg+=' -s' 00:21:22.371 21:43:42 -- bdev/bdev_raid.sh@544 -- # raid_pid=81585 00:21:22.372 21:43:42 -- bdev/bdev_raid.sh@545 -- # waitforlisten 81585 /var/tmp/spdk-raid.sock 00:21:22.372 21:43:42 -- common/autotest_common.sh@829 -- # '[' -z 81585 ']' 00:21:22.372 21:43:42 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:21:22.372 21:43:42 -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:22.372 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:21:22.372 21:43:42 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:21:22.372 21:43:42 -- bdev/bdev_raid.sh@543 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:21:22.372 21:43:42 -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:22.372 21:43:42 -- common/autotest_common.sh@10 -- # set +x 00:21:22.372 I/O size of 3145728 is greater than zero copy threshold (65536). 00:21:22.372 Zero copy mechanism will not be used. 00:21:22.372 [2024-12-06 21:43:42.676351] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:21:22.372 [2024-12-06 21:43:42.676543] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81585 ] 00:21:22.372 [2024-12-06 21:43:42.847303] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:22.630 [2024-12-06 21:43:43.004053] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:22.890 [2024-12-06 21:43:43.146454] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:23.149 21:43:43 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:23.149 21:43:43 -- common/autotest_common.sh@862 -- # return 0 00:21:23.149 21:43:43 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:21:23.149 21:43:43 -- bdev/bdev_raid.sh@549 -- # '[' true = true ']' 00:21:23.149 21:43:43 -- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:21:23.409 BaseBdev1_malloc 00:21:23.409 21:43:43 -- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:21:23.668 [2024-12-06 21:43:44.011925] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:21:23.668 [2024-12-06 21:43:44.012036] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:23.668 [2024-12-06 21:43:44.012069] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000006980 00:21:23.668 [2024-12-06 21:43:44.012086] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:23.668 [2024-12-06 21:43:44.014838] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:23.668 [2024-12-06 21:43:44.014885] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:21:23.668 BaseBdev1 00:21:23.668 21:43:44 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:21:23.668 21:43:44 -- bdev/bdev_raid.sh@549 -- # '[' true = true ']' 00:21:23.668 21:43:44 -- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:21:23.927 BaseBdev2_malloc 00:21:23.927 21:43:44 -- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:21:23.927 [2024-12-06 21:43:44.404623] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:21:23.927 [2024-12-06 21:43:44.404886] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:23.927 [2024-12-06 21:43:44.404931] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000007580 00:21:23.927 [2024-12-06 21:43:44.404952] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:23.927 [2024-12-06 21:43:44.407236] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:23.927 [2024-12-06 21:43:44.407280] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:21:23.927 BaseBdev2 00:21:23.927 21:43:44 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:21:23.927 21:43:44 -- bdev/bdev_raid.sh@549 -- # '[' true = true ']' 00:21:23.927 21:43:44 -- bdev/bdev_raid.sh@550 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:21:24.496 BaseBdev3_malloc 00:21:24.496 21:43:44 -- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:21:24.496 [2024-12-06 21:43:44.861374] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:21:24.496 [2024-12-06 21:43:44.861480] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:24.496 [2024-12-06 21:43:44.861526] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000008180 00:21:24.496 [2024-12-06 21:43:44.861544] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:24.496 [2024-12-06 21:43:44.863661] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:24.496 [2024-12-06 21:43:44.863889] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:21:24.496 BaseBdev3 00:21:24.496 21:43:44 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:21:24.496 21:43:44 -- bdev/bdev_raid.sh@549 -- # '[' true = true ']' 00:21:24.496 21:43:44 -- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:21:24.754 BaseBdev4_malloc 00:21:24.754 21:43:45 -- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:21:25.012 [2024-12-06 21:43:45.253526] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:21:25.012 [2024-12-06 21:43:45.253863] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:25.012 [2024-12-06 21:43:45.254039] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000008d80 00:21:25.012 [2024-12-06 21:43:45.254230] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:25.012 [2024-12-06 21:43:45.256769] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:25.012 [2024-12-06 21:43:45.256968] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:21:25.012 BaseBdev4 00:21:25.013 21:43:45 -- bdev/bdev_raid.sh@558 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:21:25.271 spare_malloc 00:21:25.271 21:43:45 -- bdev/bdev_raid.sh@559 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:21:25.271 spare_delay 00:21:25.271 21:43:45 -- bdev/bdev_raid.sh@560 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:21:25.529 [2024-12-06 21:43:45.867413] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:21:25.529 [2024-12-06 21:43:45.867511] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:25.529 [2024-12-06 21:43:45.867542] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000009f80 00:21:25.529 [2024-12-06 21:43:45.867559] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:25.529 [2024-12-06 21:43:45.869707] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: 
pt_bdev registered 00:21:25.529 [2024-12-06 21:43:45.869959] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:21:25.529 spare 00:21:25.529 21:43:45 -- bdev/bdev_raid.sh@563 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 00:21:25.787 [2024-12-06 21:43:46.055521] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:21:25.787 [2024-12-06 21:43:46.057461] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:21:25.787 [2024-12-06 21:43:46.057653] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:21:25.787 [2024-12-06 21:43:46.057764] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:21:25.787 [2024-12-06 21:43:46.058055] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x51600000a580 00:21:25.787 [2024-12-06 21:43:46.058117] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:21:25.787 [2024-12-06 21:43:46.058327] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000005860 00:21:25.787 [2024-12-06 21:43:46.058831] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x51600000a580 00:21:25.787 [2024-12-06 21:43:46.059011] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x51600000a580 00:21:25.787 [2024-12-06 21:43:46.059269] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:25.787 21:43:46 -- bdev/bdev_raid.sh@564 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:21:25.788 21:43:46 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:21:25.788 21:43:46 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:21:25.788 21:43:46 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:21:25.788 21:43:46 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:21:25.788 21:43:46 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:21:25.788 21:43:46 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:21:25.788 21:43:46 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:21:25.788 21:43:46 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:21:25.788 21:43:46 -- bdev/bdev_raid.sh@125 -- # local tmp 00:21:25.788 21:43:46 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:25.788 21:43:46 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:25.788 21:43:46 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:21:25.788 "name": "raid_bdev1", 00:21:25.788 "uuid": "59d6e619-1ba7-4e8b-9008-11c1c39400b9", 00:21:25.788 "strip_size_kb": 0, 00:21:25.788 "state": "online", 00:21:25.788 "raid_level": "raid1", 00:21:25.788 "superblock": true, 00:21:25.788 "num_base_bdevs": 4, 00:21:25.788 "num_base_bdevs_discovered": 4, 00:21:25.788 "num_base_bdevs_operational": 4, 00:21:25.788 "base_bdevs_list": [ 00:21:25.788 { 00:21:25.788 "name": "BaseBdev1", 00:21:25.788 "uuid": "661a075c-c836-54a3-9c16-e9477acac407", 00:21:25.788 "is_configured": true, 00:21:25.788 "data_offset": 2048, 00:21:25.788 "data_size": 63488 00:21:25.788 }, 00:21:25.788 { 00:21:25.788 "name": "BaseBdev2", 00:21:25.788 "uuid": "2b865af4-708b-5389-b497-d5bc5f425100", 00:21:25.788 "is_configured": true, 00:21:25.788 "data_offset": 2048, 
00:21:25.788 "data_size": 63488 00:21:25.788 }, 00:21:25.788 { 00:21:25.788 "name": "BaseBdev3", 00:21:25.788 "uuid": "b94d34c5-01bd-5412-8ff9-8debaac93909", 00:21:25.788 "is_configured": true, 00:21:25.788 "data_offset": 2048, 00:21:25.788 "data_size": 63488 00:21:25.788 }, 00:21:25.788 { 00:21:25.788 "name": "BaseBdev4", 00:21:25.788 "uuid": "38e15753-868b-53f7-94fa-a1168d800e72", 00:21:25.788 "is_configured": true, 00:21:25.788 "data_offset": 2048, 00:21:25.788 "data_size": 63488 00:21:25.788 } 00:21:25.788 ] 00:21:25.788 }' 00:21:25.788 21:43:46 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:21:25.788 21:43:46 -- common/autotest_common.sh@10 -- # set +x 00:21:26.046 21:43:46 -- bdev/bdev_raid.sh@567 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:21:26.046 21:43:46 -- bdev/bdev_raid.sh@567 -- # jq -r '.[].num_blocks' 00:21:26.419 [2024-12-06 21:43:46.756017] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:26.419 21:43:46 -- bdev/bdev_raid.sh@567 -- # raid_bdev_size=63488 00:21:26.419 21:43:46 -- bdev/bdev_raid.sh@570 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:26.419 21:43:46 -- bdev/bdev_raid.sh@570 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:21:26.699 21:43:47 -- bdev/bdev_raid.sh@570 -- # data_offset=2048 00:21:26.699 21:43:47 -- bdev/bdev_raid.sh@572 -- # '[' true = true ']' 00:21:26.699 21:43:47 -- bdev/bdev_raid.sh@574 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:21:26.699 21:43:47 -- bdev/bdev_raid.sh@591 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:21:26.699 [2024-12-06 21:43:47.157613] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000005930 00:21:26.699 I/O size of 3145728 is greater than zero copy threshold (65536). 00:21:26.699 Zero copy mechanism will not be used. 00:21:26.699 Running I/O for 60 seconds... 
00:21:26.958 [2024-12-06 21:43:47.212668] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:21:26.958 [2024-12-06 21:43:47.218919] bdev_raid.c:1835:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x50d000005930 00:21:26.958 21:43:47 -- bdev/bdev_raid.sh@594 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:21:26.958 21:43:47 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:21:26.958 21:43:47 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:21:26.958 21:43:47 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:21:26.958 21:43:47 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:21:26.958 21:43:47 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:21:26.958 21:43:47 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:21:26.958 21:43:47 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:21:26.958 21:43:47 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:21:26.958 21:43:47 -- bdev/bdev_raid.sh@125 -- # local tmp 00:21:26.958 21:43:47 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:26.958 21:43:47 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:27.216 21:43:47 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:21:27.216 "name": "raid_bdev1", 00:21:27.216 "uuid": "59d6e619-1ba7-4e8b-9008-11c1c39400b9", 00:21:27.216 "strip_size_kb": 0, 00:21:27.216 "state": "online", 00:21:27.216 "raid_level": "raid1", 00:21:27.216 "superblock": true, 00:21:27.216 "num_base_bdevs": 4, 00:21:27.216 "num_base_bdevs_discovered": 3, 00:21:27.216 "num_base_bdevs_operational": 3, 00:21:27.216 "base_bdevs_list": [ 00:21:27.216 { 00:21:27.216 "name": null, 00:21:27.216 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:27.216 "is_configured": false, 00:21:27.216 "data_offset": 2048, 00:21:27.216 "data_size": 63488 00:21:27.216 }, 00:21:27.216 { 00:21:27.216 "name": "BaseBdev2", 00:21:27.216 "uuid": "2b865af4-708b-5389-b497-d5bc5f425100", 00:21:27.216 "is_configured": true, 00:21:27.216 "data_offset": 2048, 00:21:27.216 "data_size": 63488 00:21:27.216 }, 00:21:27.216 { 00:21:27.216 "name": "BaseBdev3", 00:21:27.216 "uuid": "b94d34c5-01bd-5412-8ff9-8debaac93909", 00:21:27.216 "is_configured": true, 00:21:27.216 "data_offset": 2048, 00:21:27.216 "data_size": 63488 00:21:27.216 }, 00:21:27.216 { 00:21:27.216 "name": "BaseBdev4", 00:21:27.216 "uuid": "38e15753-868b-53f7-94fa-a1168d800e72", 00:21:27.216 "is_configured": true, 00:21:27.216 "data_offset": 2048, 00:21:27.216 "data_size": 63488 00:21:27.216 } 00:21:27.216 ] 00:21:27.216 }' 00:21:27.216 21:43:47 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:21:27.216 21:43:47 -- common/autotest_common.sh@10 -- # set +x 00:21:27.474 21:43:47 -- bdev/bdev_raid.sh@597 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:21:27.733 [2024-12-06 21:43:48.051005] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:21:27.733 [2024-12-06 21:43:48.051064] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:21:27.733 21:43:48 -- bdev/bdev_raid.sh@598 -- # sleep 1 00:21:27.733 [2024-12-06 21:43:48.100349] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000005a00 00:21:27.733 [2024-12-06 21:43:48.102483] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:21:27.992 
[2024-12-06 21:43:48.249008] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:21:27.992 [2024-12-06 21:43:48.393687] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:21:27.992 [2024-12-06 21:43:48.393959] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:21:28.251 [2024-12-06 21:43:48.625828] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:21:28.509 [2024-12-06 21:43:48.847841] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:21:28.509 [2024-12-06 21:43:48.848613] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:21:28.768 21:43:49 -- bdev/bdev_raid.sh@601 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:28.768 21:43:49 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:21:28.768 21:43:49 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:21:28.768 21:43:49 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:21:28.768 21:43:49 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:21:28.768 21:43:49 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:28.768 21:43:49 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:28.768 [2024-12-06 21:43:49.173483] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:21:29.026 21:43:49 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:21:29.026 "name": "raid_bdev1", 00:21:29.026 "uuid": "59d6e619-1ba7-4e8b-9008-11c1c39400b9", 00:21:29.026 "strip_size_kb": 0, 00:21:29.026 "state": "online", 00:21:29.026 "raid_level": "raid1", 00:21:29.026 "superblock": true, 00:21:29.026 "num_base_bdevs": 4, 00:21:29.026 "num_base_bdevs_discovered": 4, 00:21:29.026 "num_base_bdevs_operational": 4, 00:21:29.026 "process": { 00:21:29.026 "type": "rebuild", 00:21:29.026 "target": "spare", 00:21:29.026 "progress": { 00:21:29.026 "blocks": 14336, 00:21:29.026 "percent": 22 00:21:29.026 } 00:21:29.026 }, 00:21:29.026 "base_bdevs_list": [ 00:21:29.026 { 00:21:29.026 "name": "spare", 00:21:29.026 "uuid": "5deca72f-2755-50a5-9472-de68b3c95985", 00:21:29.026 "is_configured": true, 00:21:29.026 "data_offset": 2048, 00:21:29.026 "data_size": 63488 00:21:29.026 }, 00:21:29.026 { 00:21:29.026 "name": "BaseBdev2", 00:21:29.026 "uuid": "2b865af4-708b-5389-b497-d5bc5f425100", 00:21:29.026 "is_configured": true, 00:21:29.026 "data_offset": 2048, 00:21:29.026 "data_size": 63488 00:21:29.026 }, 00:21:29.026 { 00:21:29.026 "name": "BaseBdev3", 00:21:29.026 "uuid": "b94d34c5-01bd-5412-8ff9-8debaac93909", 00:21:29.026 "is_configured": true, 00:21:29.026 "data_offset": 2048, 00:21:29.026 "data_size": 63488 00:21:29.026 }, 00:21:29.026 { 00:21:29.026 "name": "BaseBdev4", 00:21:29.026 "uuid": "38e15753-868b-53f7-94fa-a1168d800e72", 00:21:29.026 "is_configured": true, 00:21:29.026 "data_offset": 2048, 00:21:29.026 "data_size": 63488 00:21:29.026 } 00:21:29.026 ] 00:21:29.026 }' 00:21:29.026 21:43:49 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:21:29.026 21:43:49 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:29.026 
21:43:49 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:21:29.026 21:43:49 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:21:29.026 21:43:49 -- bdev/bdev_raid.sh@604 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:21:29.026 [2024-12-06 21:43:49.391224] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:21:29.284 [2024-12-06 21:43:49.565430] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:21:29.284 [2024-12-06 21:43:49.612025] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:21:29.284 [2024-12-06 21:43:49.719608] bdev_raid.c:2294:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:21:29.284 [2024-12-06 21:43:49.729666] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:29.284 [2024-12-06 21:43:49.754050] bdev_raid.c:1835:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x50d000005930 00:21:29.284 21:43:49 -- bdev/bdev_raid.sh@607 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:21:29.284 21:43:49 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:21:29.284 21:43:49 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:21:29.284 21:43:49 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:21:29.284 21:43:49 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:21:29.284 21:43:49 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:21:29.284 21:43:49 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:21:29.543 21:43:49 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:21:29.543 21:43:49 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:21:29.543 21:43:49 -- bdev/bdev_raid.sh@125 -- # local tmp 00:21:29.543 21:43:49 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:29.543 21:43:49 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:29.543 21:43:49 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:21:29.543 "name": "raid_bdev1", 00:21:29.543 "uuid": "59d6e619-1ba7-4e8b-9008-11c1c39400b9", 00:21:29.543 "strip_size_kb": 0, 00:21:29.543 "state": "online", 00:21:29.543 "raid_level": "raid1", 00:21:29.543 "superblock": true, 00:21:29.543 "num_base_bdevs": 4, 00:21:29.543 "num_base_bdevs_discovered": 3, 00:21:29.543 "num_base_bdevs_operational": 3, 00:21:29.543 "base_bdevs_list": [ 00:21:29.543 { 00:21:29.543 "name": null, 00:21:29.543 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:29.543 "is_configured": false, 00:21:29.543 "data_offset": 2048, 00:21:29.543 "data_size": 63488 00:21:29.543 }, 00:21:29.543 { 00:21:29.543 "name": "BaseBdev2", 00:21:29.543 "uuid": "2b865af4-708b-5389-b497-d5bc5f425100", 00:21:29.543 "is_configured": true, 00:21:29.543 "data_offset": 2048, 00:21:29.543 "data_size": 63488 00:21:29.543 }, 00:21:29.543 { 00:21:29.543 "name": "BaseBdev3", 00:21:29.543 "uuid": "b94d34c5-01bd-5412-8ff9-8debaac93909", 00:21:29.543 "is_configured": true, 00:21:29.543 "data_offset": 2048, 00:21:29.543 "data_size": 63488 00:21:29.543 }, 00:21:29.543 { 00:21:29.543 "name": "BaseBdev4", 00:21:29.543 "uuid": "38e15753-868b-53f7-94fa-a1168d800e72", 00:21:29.543 "is_configured": true, 00:21:29.543 "data_offset": 2048, 00:21:29.543 "data_size": 63488 00:21:29.543 } 00:21:29.543 
] 00:21:29.543 }' 00:21:29.543 21:43:49 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:21:29.543 21:43:49 -- common/autotest_common.sh@10 -- # set +x 00:21:29.801 21:43:50 -- bdev/bdev_raid.sh@610 -- # verify_raid_bdev_process raid_bdev1 none none 00:21:29.801 21:43:50 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:21:29.801 21:43:50 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:21:29.801 21:43:50 -- bdev/bdev_raid.sh@185 -- # local target=none 00:21:29.801 21:43:50 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:21:29.801 21:43:50 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:29.801 21:43:50 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:30.059 21:43:50 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:21:30.059 "name": "raid_bdev1", 00:21:30.059 "uuid": "59d6e619-1ba7-4e8b-9008-11c1c39400b9", 00:21:30.059 "strip_size_kb": 0, 00:21:30.059 "state": "online", 00:21:30.059 "raid_level": "raid1", 00:21:30.059 "superblock": true, 00:21:30.059 "num_base_bdevs": 4, 00:21:30.059 "num_base_bdevs_discovered": 3, 00:21:30.059 "num_base_bdevs_operational": 3, 00:21:30.059 "base_bdevs_list": [ 00:21:30.059 { 00:21:30.059 "name": null, 00:21:30.059 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:30.059 "is_configured": false, 00:21:30.059 "data_offset": 2048, 00:21:30.059 "data_size": 63488 00:21:30.059 }, 00:21:30.059 { 00:21:30.059 "name": "BaseBdev2", 00:21:30.059 "uuid": "2b865af4-708b-5389-b497-d5bc5f425100", 00:21:30.059 "is_configured": true, 00:21:30.059 "data_offset": 2048, 00:21:30.059 "data_size": 63488 00:21:30.059 }, 00:21:30.059 { 00:21:30.059 "name": "BaseBdev3", 00:21:30.059 "uuid": "b94d34c5-01bd-5412-8ff9-8debaac93909", 00:21:30.059 "is_configured": true, 00:21:30.059 "data_offset": 2048, 00:21:30.059 "data_size": 63488 00:21:30.059 }, 00:21:30.059 { 00:21:30.059 "name": "BaseBdev4", 00:21:30.059 "uuid": "38e15753-868b-53f7-94fa-a1168d800e72", 00:21:30.059 "is_configured": true, 00:21:30.059 "data_offset": 2048, 00:21:30.059 "data_size": 63488 00:21:30.059 } 00:21:30.059 ] 00:21:30.059 }' 00:21:30.059 21:43:50 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:21:30.059 21:43:50 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:21:30.059 21:43:50 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:21:30.059 21:43:50 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:21:30.059 21:43:50 -- bdev/bdev_raid.sh@613 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:21:30.323 [2024-12-06 21:43:50.652547] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:21:30.323 [2024-12-06 21:43:50.652886] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:21:30.323 21:43:50 -- bdev/bdev_raid.sh@614 -- # sleep 1 00:21:30.323 [2024-12-06 21:43:50.707063] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000005ad0 00:21:30.323 [2024-12-06 21:43:50.709042] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:21:30.583 [2024-12-06 21:43:50.824774] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:21:30.583 [2024-12-06 21:43:50.825260] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 
6144 00:21:30.583 [2024-12-06 21:43:51.035717] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:21:30.583 [2024-12-06 21:43:51.036612] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:21:31.152 [2024-12-06 21:43:51.388942] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:21:31.152 [2024-12-06 21:43:51.389962] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:21:31.152 [2024-12-06 21:43:51.616810] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:21:31.411 21:43:51 -- bdev/bdev_raid.sh@615 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:31.411 21:43:51 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:21:31.411 21:43:51 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:21:31.411 21:43:51 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:21:31.411 21:43:51 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:21:31.411 21:43:51 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:31.411 21:43:51 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:31.411 [2024-12-06 21:43:51.837477] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:21:31.671 21:43:51 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:21:31.671 "name": "raid_bdev1", 00:21:31.671 "uuid": "59d6e619-1ba7-4e8b-9008-11c1c39400b9", 00:21:31.671 "strip_size_kb": 0, 00:21:31.671 "state": "online", 00:21:31.671 "raid_level": "raid1", 00:21:31.671 "superblock": true, 00:21:31.671 "num_base_bdevs": 4, 00:21:31.671 "num_base_bdevs_discovered": 4, 00:21:31.671 "num_base_bdevs_operational": 4, 00:21:31.671 "process": { 00:21:31.671 "type": "rebuild", 00:21:31.671 "target": "spare", 00:21:31.671 "progress": { 00:21:31.671 "blocks": 14336, 00:21:31.671 "percent": 22 00:21:31.671 } 00:21:31.671 }, 00:21:31.671 "base_bdevs_list": [ 00:21:31.671 { 00:21:31.671 "name": "spare", 00:21:31.671 "uuid": "5deca72f-2755-50a5-9472-de68b3c95985", 00:21:31.671 "is_configured": true, 00:21:31.671 "data_offset": 2048, 00:21:31.671 "data_size": 63488 00:21:31.671 }, 00:21:31.671 { 00:21:31.671 "name": "BaseBdev2", 00:21:31.671 "uuid": "2b865af4-708b-5389-b497-d5bc5f425100", 00:21:31.671 "is_configured": true, 00:21:31.671 "data_offset": 2048, 00:21:31.671 "data_size": 63488 00:21:31.671 }, 00:21:31.671 { 00:21:31.671 "name": "BaseBdev3", 00:21:31.671 "uuid": "b94d34c5-01bd-5412-8ff9-8debaac93909", 00:21:31.671 "is_configured": true, 00:21:31.671 "data_offset": 2048, 00:21:31.671 "data_size": 63488 00:21:31.671 }, 00:21:31.671 { 00:21:31.671 "name": "BaseBdev4", 00:21:31.671 "uuid": "38e15753-868b-53f7-94fa-a1168d800e72", 00:21:31.671 "is_configured": true, 00:21:31.672 "data_offset": 2048, 00:21:31.672 "data_size": 63488 00:21:31.672 } 00:21:31.672 ] 00:21:31.672 }' 00:21:31.672 21:43:51 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:21:31.672 21:43:51 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:31.672 21:43:51 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:21:31.672 21:43:51 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e 
]] 00:21:31.672 21:43:51 -- bdev/bdev_raid.sh@617 -- # '[' true = true ']' 00:21:31.672 21:43:51 -- bdev/bdev_raid.sh@617 -- # '[' = false ']' 00:21:31.672 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 617: [: =: unary operator expected 00:21:31.672 21:43:51 -- bdev/bdev_raid.sh@642 -- # local num_base_bdevs_operational=4 00:21:31.672 21:43:51 -- bdev/bdev_raid.sh@644 -- # '[' raid1 = raid1 ']' 00:21:31.672 21:43:51 -- bdev/bdev_raid.sh@644 -- # '[' 4 -gt 2 ']' 00:21:31.672 21:43:51 -- bdev/bdev_raid.sh@646 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:21:31.672 [2024-12-06 21:43:52.049430] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:21:31.672 [2024-12-06 21:43:52.049740] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:21:31.672 [2024-12-06 21:43:52.130420] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:21:31.932 [2024-12-06 21:43:52.355484] bdev_raid.c:1835:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x50d000005930 00:21:31.932 [2024-12-06 21:43:52.355686] bdev_raid.c:1835:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x50d000005ad0 00:21:32.192 21:43:52 -- bdev/bdev_raid.sh@649 -- # base_bdevs[1]= 00:21:32.192 21:43:52 -- bdev/bdev_raid.sh@650 -- # (( num_base_bdevs_operational-- )) 00:21:32.192 21:43:52 -- bdev/bdev_raid.sh@653 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:32.192 21:43:52 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:21:32.192 21:43:52 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:21:32.192 21:43:52 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:21:32.192 21:43:52 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:21:32.192 21:43:52 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:32.192 21:43:52 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:32.192 21:43:52 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:21:32.192 "name": "raid_bdev1", 00:21:32.192 "uuid": "59d6e619-1ba7-4e8b-9008-11c1c39400b9", 00:21:32.192 "strip_size_kb": 0, 00:21:32.192 "state": "online", 00:21:32.192 "raid_level": "raid1", 00:21:32.192 "superblock": true, 00:21:32.192 "num_base_bdevs": 4, 00:21:32.192 "num_base_bdevs_discovered": 3, 00:21:32.192 "num_base_bdevs_operational": 3, 00:21:32.192 "process": { 00:21:32.192 "type": "rebuild", 00:21:32.192 "target": "spare", 00:21:32.192 "progress": { 00:21:32.192 "blocks": 22528, 00:21:32.192 "percent": 35 00:21:32.192 } 00:21:32.192 }, 00:21:32.192 "base_bdevs_list": [ 00:21:32.192 { 00:21:32.192 "name": "spare", 00:21:32.192 "uuid": "5deca72f-2755-50a5-9472-de68b3c95985", 00:21:32.192 "is_configured": true, 00:21:32.192 "data_offset": 2048, 00:21:32.192 "data_size": 63488 00:21:32.192 }, 00:21:32.192 { 00:21:32.192 "name": null, 00:21:32.192 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:32.192 "is_configured": false, 00:21:32.192 "data_offset": 2048, 00:21:32.192 "data_size": 63488 00:21:32.192 }, 00:21:32.192 { 00:21:32.192 "name": "BaseBdev3", 00:21:32.192 "uuid": "b94d34c5-01bd-5412-8ff9-8debaac93909", 00:21:32.192 "is_configured": true, 00:21:32.192 "data_offset": 2048, 00:21:32.192 "data_size": 63488 00:21:32.192 }, 00:21:32.192 { 00:21:32.192 "name": "BaseBdev4", 00:21:32.192 
"uuid": "38e15753-868b-53f7-94fa-a1168d800e72", 00:21:32.192 "is_configured": true, 00:21:32.192 "data_offset": 2048, 00:21:32.192 "data_size": 63488 00:21:32.192 } 00:21:32.192 ] 00:21:32.192 }' 00:21:32.192 21:43:52 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:21:32.192 21:43:52 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:32.452 21:43:52 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:21:32.452 21:43:52 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:21:32.452 21:43:52 -- bdev/bdev_raid.sh@657 -- # local timeout=489 00:21:32.452 21:43:52 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:21:32.452 21:43:52 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:32.452 21:43:52 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:21:32.452 21:43:52 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:21:32.452 21:43:52 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:21:32.452 21:43:52 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:21:32.452 21:43:52 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:32.452 21:43:52 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:32.452 [2024-12-06 21:43:52.823718] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720 00:21:32.452 21:43:52 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:21:32.452 "name": "raid_bdev1", 00:21:32.452 "uuid": "59d6e619-1ba7-4e8b-9008-11c1c39400b9", 00:21:32.452 "strip_size_kb": 0, 00:21:32.452 "state": "online", 00:21:32.452 "raid_level": "raid1", 00:21:32.452 "superblock": true, 00:21:32.452 "num_base_bdevs": 4, 00:21:32.452 "num_base_bdevs_discovered": 3, 00:21:32.452 "num_base_bdevs_operational": 3, 00:21:32.452 "process": { 00:21:32.452 "type": "rebuild", 00:21:32.452 "target": "spare", 00:21:32.452 "progress": { 00:21:32.452 "blocks": 26624, 00:21:32.452 "percent": 41 00:21:32.452 } 00:21:32.452 }, 00:21:32.452 "base_bdevs_list": [ 00:21:32.452 { 00:21:32.452 "name": "spare", 00:21:32.452 "uuid": "5deca72f-2755-50a5-9472-de68b3c95985", 00:21:32.452 "is_configured": true, 00:21:32.452 "data_offset": 2048, 00:21:32.452 "data_size": 63488 00:21:32.452 }, 00:21:32.452 { 00:21:32.452 "name": null, 00:21:32.452 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:32.452 "is_configured": false, 00:21:32.452 "data_offset": 2048, 00:21:32.452 "data_size": 63488 00:21:32.452 }, 00:21:32.452 { 00:21:32.452 "name": "BaseBdev3", 00:21:32.452 "uuid": "b94d34c5-01bd-5412-8ff9-8debaac93909", 00:21:32.452 "is_configured": true, 00:21:32.452 "data_offset": 2048, 00:21:32.452 "data_size": 63488 00:21:32.452 }, 00:21:32.452 { 00:21:32.452 "name": "BaseBdev4", 00:21:32.452 "uuid": "38e15753-868b-53f7-94fa-a1168d800e72", 00:21:32.452 "is_configured": true, 00:21:32.452 "data_offset": 2048, 00:21:32.452 "data_size": 63488 00:21:32.452 } 00:21:32.452 ] 00:21:32.452 }' 00:21:32.452 21:43:52 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:21:32.452 21:43:52 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:32.452 21:43:52 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:21:32.452 21:43:52 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:21:32.452 21:43:52 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:21:33.021 [2024-12-06 21:43:53.335043] bdev_raid.c: 
723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864 00:21:33.281 [2024-12-06 21:43:53.669317] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 38912 offset_begin: 36864 offset_end: 43008 00:21:33.540 21:43:53 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:21:33.540 21:43:53 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:33.540 21:43:53 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:21:33.540 21:43:53 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:21:33.541 21:43:53 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:21:33.541 21:43:53 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:21:33.541 21:43:53 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:33.541 21:43:53 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:33.800 21:43:54 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:21:33.800 "name": "raid_bdev1", 00:21:33.800 "uuid": "59d6e619-1ba7-4e8b-9008-11c1c39400b9", 00:21:33.800 "strip_size_kb": 0, 00:21:33.800 "state": "online", 00:21:33.800 "raid_level": "raid1", 00:21:33.800 "superblock": true, 00:21:33.800 "num_base_bdevs": 4, 00:21:33.800 "num_base_bdevs_discovered": 3, 00:21:33.800 "num_base_bdevs_operational": 3, 00:21:33.800 "process": { 00:21:33.800 "type": "rebuild", 00:21:33.800 "target": "spare", 00:21:33.800 "progress": { 00:21:33.800 "blocks": 47104, 00:21:33.800 "percent": 74 00:21:33.800 } 00:21:33.800 }, 00:21:33.800 "base_bdevs_list": [ 00:21:33.800 { 00:21:33.800 "name": "spare", 00:21:33.800 "uuid": "5deca72f-2755-50a5-9472-de68b3c95985", 00:21:33.800 "is_configured": true, 00:21:33.800 "data_offset": 2048, 00:21:33.800 "data_size": 63488 00:21:33.800 }, 00:21:33.800 { 00:21:33.800 "name": null, 00:21:33.800 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:33.800 "is_configured": false, 00:21:33.800 "data_offset": 2048, 00:21:33.800 "data_size": 63488 00:21:33.800 }, 00:21:33.800 { 00:21:33.800 "name": "BaseBdev3", 00:21:33.800 "uuid": "b94d34c5-01bd-5412-8ff9-8debaac93909", 00:21:33.800 "is_configured": true, 00:21:33.800 "data_offset": 2048, 00:21:33.800 "data_size": 63488 00:21:33.800 }, 00:21:33.800 { 00:21:33.800 "name": "BaseBdev4", 00:21:33.800 "uuid": "38e15753-868b-53f7-94fa-a1168d800e72", 00:21:33.800 "is_configured": true, 00:21:33.800 "data_offset": 2048, 00:21:33.800 "data_size": 63488 00:21:33.800 } 00:21:33.800 ] 00:21:33.800 }' 00:21:33.800 21:43:54 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:21:33.800 21:43:54 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:33.800 21:43:54 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:21:33.800 21:43:54 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:21:33.800 21:43:54 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:21:34.371 [2024-12-06 21:43:54.653358] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 57344 offset_begin: 55296 offset_end: 61440 00:21:34.371 [2024-12-06 21:43:54.653810] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 57344 offset_begin: 55296 offset_end: 61440 00:21:34.371 [2024-12-06 21:43:54.861744] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 59392 offset_begin: 55296 offset_end: 61440 00:21:34.939 21:43:55 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout 
)) 00:21:34.939 21:43:55 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:34.939 21:43:55 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:21:34.939 21:43:55 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:21:34.939 21:43:55 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:21:34.939 21:43:55 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:21:34.939 21:43:55 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:34.939 21:43:55 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:34.939 [2024-12-06 21:43:55.187327] bdev_raid.c:2568:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:21:34.939 [2024-12-06 21:43:55.287334] bdev_raid.c:2285:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:21:34.939 [2024-12-06 21:43:55.289684] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:34.939 21:43:55 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:21:34.939 "name": "raid_bdev1", 00:21:34.939 "uuid": "59d6e619-1ba7-4e8b-9008-11c1c39400b9", 00:21:34.939 "strip_size_kb": 0, 00:21:34.939 "state": "online", 00:21:34.939 "raid_level": "raid1", 00:21:34.939 "superblock": true, 00:21:34.939 "num_base_bdevs": 4, 00:21:34.939 "num_base_bdevs_discovered": 3, 00:21:34.939 "num_base_bdevs_operational": 3, 00:21:34.939 "base_bdevs_list": [ 00:21:34.939 { 00:21:34.939 "name": "spare", 00:21:34.939 "uuid": "5deca72f-2755-50a5-9472-de68b3c95985", 00:21:34.939 "is_configured": true, 00:21:34.939 "data_offset": 2048, 00:21:34.939 "data_size": 63488 00:21:34.939 }, 00:21:34.939 { 00:21:34.939 "name": null, 00:21:34.939 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:34.939 "is_configured": false, 00:21:34.939 "data_offset": 2048, 00:21:34.939 "data_size": 63488 00:21:34.939 }, 00:21:34.939 { 00:21:34.939 "name": "BaseBdev3", 00:21:34.939 "uuid": "b94d34c5-01bd-5412-8ff9-8debaac93909", 00:21:34.940 "is_configured": true, 00:21:34.940 "data_offset": 2048, 00:21:34.940 "data_size": 63488 00:21:34.940 }, 00:21:34.940 { 00:21:34.940 "name": "BaseBdev4", 00:21:34.940 "uuid": "38e15753-868b-53f7-94fa-a1168d800e72", 00:21:34.940 "is_configured": true, 00:21:34.940 "data_offset": 2048, 00:21:34.940 "data_size": 63488 00:21:34.940 } 00:21:34.940 ] 00:21:34.940 }' 00:21:34.940 21:43:55 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:21:34.940 21:43:55 -- bdev/bdev_raid.sh@190 -- # [[ none == \r\e\b\u\i\l\d ]] 00:21:34.940 21:43:55 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:21:35.198 21:43:55 -- bdev/bdev_raid.sh@191 -- # [[ none == \s\p\a\r\e ]] 00:21:35.198 21:43:55 -- bdev/bdev_raid.sh@660 -- # break 00:21:35.198 21:43:55 -- bdev/bdev_raid.sh@666 -- # verify_raid_bdev_process raid_bdev1 none none 00:21:35.198 21:43:55 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:21:35.198 21:43:55 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:21:35.198 21:43:55 -- bdev/bdev_raid.sh@185 -- # local target=none 00:21:35.198 21:43:55 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:21:35.198 21:43:55 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:35.198 21:43:55 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:35.198 21:43:55 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:21:35.198 "name": 
"raid_bdev1", 00:21:35.198 "uuid": "59d6e619-1ba7-4e8b-9008-11c1c39400b9", 00:21:35.198 "strip_size_kb": 0, 00:21:35.198 "state": "online", 00:21:35.198 "raid_level": "raid1", 00:21:35.198 "superblock": true, 00:21:35.198 "num_base_bdevs": 4, 00:21:35.198 "num_base_bdevs_discovered": 3, 00:21:35.198 "num_base_bdevs_operational": 3, 00:21:35.198 "base_bdevs_list": [ 00:21:35.198 { 00:21:35.198 "name": "spare", 00:21:35.198 "uuid": "5deca72f-2755-50a5-9472-de68b3c95985", 00:21:35.198 "is_configured": true, 00:21:35.198 "data_offset": 2048, 00:21:35.198 "data_size": 63488 00:21:35.198 }, 00:21:35.198 { 00:21:35.198 "name": null, 00:21:35.198 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:35.198 "is_configured": false, 00:21:35.198 "data_offset": 2048, 00:21:35.198 "data_size": 63488 00:21:35.198 }, 00:21:35.198 { 00:21:35.198 "name": "BaseBdev3", 00:21:35.198 "uuid": "b94d34c5-01bd-5412-8ff9-8debaac93909", 00:21:35.198 "is_configured": true, 00:21:35.198 "data_offset": 2048, 00:21:35.198 "data_size": 63488 00:21:35.198 }, 00:21:35.198 { 00:21:35.198 "name": "BaseBdev4", 00:21:35.198 "uuid": "38e15753-868b-53f7-94fa-a1168d800e72", 00:21:35.198 "is_configured": true, 00:21:35.198 "data_offset": 2048, 00:21:35.198 "data_size": 63488 00:21:35.198 } 00:21:35.198 ] 00:21:35.198 }' 00:21:35.198 21:43:55 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:21:35.198 21:43:55 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:21:35.198 21:43:55 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:21:35.457 21:43:55 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:21:35.457 21:43:55 -- bdev/bdev_raid.sh@667 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:21:35.457 21:43:55 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:21:35.457 21:43:55 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:21:35.457 21:43:55 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:21:35.457 21:43:55 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:21:35.457 21:43:55 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:21:35.457 21:43:55 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:21:35.457 21:43:55 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:21:35.457 21:43:55 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:21:35.457 21:43:55 -- bdev/bdev_raid.sh@125 -- # local tmp 00:21:35.457 21:43:55 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:35.457 21:43:55 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:35.457 21:43:55 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:21:35.457 "name": "raid_bdev1", 00:21:35.457 "uuid": "59d6e619-1ba7-4e8b-9008-11c1c39400b9", 00:21:35.457 "strip_size_kb": 0, 00:21:35.457 "state": "online", 00:21:35.457 "raid_level": "raid1", 00:21:35.457 "superblock": true, 00:21:35.457 "num_base_bdevs": 4, 00:21:35.457 "num_base_bdevs_discovered": 3, 00:21:35.457 "num_base_bdevs_operational": 3, 00:21:35.457 "base_bdevs_list": [ 00:21:35.457 { 00:21:35.457 "name": "spare", 00:21:35.457 "uuid": "5deca72f-2755-50a5-9472-de68b3c95985", 00:21:35.457 "is_configured": true, 00:21:35.457 "data_offset": 2048, 00:21:35.457 "data_size": 63488 00:21:35.457 }, 00:21:35.457 { 00:21:35.457 "name": null, 00:21:35.457 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:35.457 "is_configured": false, 00:21:35.457 "data_offset": 2048, 00:21:35.457 "data_size": 63488 
00:21:35.457 }, 00:21:35.457 { 00:21:35.457 "name": "BaseBdev3", 00:21:35.457 "uuid": "b94d34c5-01bd-5412-8ff9-8debaac93909", 00:21:35.457 "is_configured": true, 00:21:35.457 "data_offset": 2048, 00:21:35.457 "data_size": 63488 00:21:35.457 }, 00:21:35.457 { 00:21:35.457 "name": "BaseBdev4", 00:21:35.457 "uuid": "38e15753-868b-53f7-94fa-a1168d800e72", 00:21:35.457 "is_configured": true, 00:21:35.457 "data_offset": 2048, 00:21:35.457 "data_size": 63488 00:21:35.457 } 00:21:35.457 ] 00:21:35.457 }' 00:21:35.457 21:43:55 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:21:35.457 21:43:55 -- common/autotest_common.sh@10 -- # set +x 00:21:35.715 21:43:56 -- bdev/bdev_raid.sh@670 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:21:35.974 [2024-12-06 21:43:56.345133] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:21:35.974 [2024-12-06 21:43:56.345172] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:21:35.974 00:21:35.974 Latency(us) 00:21:35.974 [2024-12-06T21:43:56.471Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:35.974 [2024-12-06T21:43:56.471Z] Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:21:35.974 raid_bdev1 : 9.21 105.50 316.50 0.00 0.00 12814.29 255.07 115819.99 00:21:35.974 [2024-12-06T21:43:56.471Z] =================================================================================================================== 00:21:35.974 [2024-12-06T21:43:56.471Z] Total : 105.50 316.50 0.00 0.00 12814.29 255.07 115819.99 00:21:35.974 [2024-12-06 21:43:56.388548] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:35.974 0 00:21:35.974 [2024-12-06 21:43:56.388773] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:35.974 [2024-12-06 21:43:56.388890] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:35.974 [2024-12-06 21:43:56.388911] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x51600000a580 name raid_bdev1, state offline 00:21:35.974 21:43:56 -- bdev/bdev_raid.sh@671 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:35.974 21:43:56 -- bdev/bdev_raid.sh@671 -- # jq length 00:21:36.233 21:43:56 -- bdev/bdev_raid.sh@671 -- # [[ 0 == 0 ]] 00:21:36.233 21:43:56 -- bdev/bdev_raid.sh@673 -- # '[' true = true ']' 00:21:36.233 21:43:56 -- bdev/bdev_raid.sh@675 -- # nbd_start_disks /var/tmp/spdk-raid.sock spare /dev/nbd0 00:21:36.233 21:43:56 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:21:36.233 21:43:56 -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:21:36.233 21:43:56 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:21:36.233 21:43:56 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:21:36.233 21:43:56 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:21:36.233 21:43:56 -- bdev/nbd_common.sh@12 -- # local i 00:21:36.233 21:43:56 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:21:36.233 21:43:56 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:21:36.233 21:43:56 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd0 00:21:36.493 /dev/nbd0 00:21:36.493 21:43:56 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:21:36.493 21:43:56 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:21:36.493 
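At this point the trace has deleted raid_bdev1, confirmed via the jq length check that no raid bdevs remain, and exported the rebuilt "spare" member over NBD as /dev/nbd0 so its contents can be diffed against the surviving members. (The one-line Latency summary above is internally consistent: at a 3 MiB I/O size, 105.50 IOPS is exactly 316.50 MiB/s.) The waitfornbd call entered at the end of this chunk, and traced below, polls until the kernel actually exposes the device; reduced to essentials it behaves roughly like this sketch, where only the grep and dd probes are taken from the trace and the retry count and sleep interval are assumptions:

  waitfornbd() {
      local nbd_name=$1 i
      for ((i = 1; i <= 20; i++)); do
          # the device exists once the kernel lists it in /proc/partitions
          grep -q -w "$nbd_name" /proc/partitions && break
          sleep 0.1
      done
      # prove it is readable: one direct 4 KiB read straight off the device
      dd if="/dev/$nbd_name" of=/dev/null bs=4096 count=1 iflag=direct
  }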
21:43:56 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:21:36.493 21:43:56 -- common/autotest_common.sh@867 -- # local i 00:21:36.493 21:43:56 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:21:36.493 21:43:56 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:21:36.493 21:43:56 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:21:36.493 21:43:56 -- common/autotest_common.sh@871 -- # break 00:21:36.493 21:43:56 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:21:36.493 21:43:56 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:21:36.493 21:43:56 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:21:36.493 1+0 records in 00:21:36.493 1+0 records out 00:21:36.493 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000240401 s, 17.0 MB/s 00:21:36.493 21:43:56 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:36.493 21:43:56 -- common/autotest_common.sh@884 -- # size=4096 00:21:36.493 21:43:56 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:36.493 21:43:56 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:21:36.493 21:43:56 -- common/autotest_common.sh@887 -- # return 0 00:21:36.493 21:43:56 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:21:36.493 21:43:56 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:21:36.493 21:43:56 -- bdev/bdev_raid.sh@676 -- # for bdev in "${base_bdevs[@]:1}" 00:21:36.493 21:43:56 -- bdev/bdev_raid.sh@677 -- # '[' -z '' ']' 00:21:36.493 21:43:56 -- bdev/bdev_raid.sh@678 -- # continue 00:21:36.493 21:43:56 -- bdev/bdev_raid.sh@676 -- # for bdev in "${base_bdevs[@]:1}" 00:21:36.493 21:43:56 -- bdev/bdev_raid.sh@677 -- # '[' -z BaseBdev3 ']' 00:21:36.493 21:43:56 -- bdev/bdev_raid.sh@680 -- # nbd_start_disks /var/tmp/spdk-raid.sock BaseBdev3 /dev/nbd1 00:21:36.493 21:43:56 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:21:36.493 21:43:56 -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev3') 00:21:36.493 21:43:56 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:21:36.493 21:43:56 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:21:36.493 21:43:56 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:21:36.493 21:43:56 -- bdev/nbd_common.sh@12 -- # local i 00:21:36.494 21:43:56 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:21:36.494 21:43:56 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:21:36.494 21:43:56 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev3 /dev/nbd1 00:21:36.753 /dev/nbd1 00:21:36.753 21:43:57 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:21:36.753 21:43:57 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:21:36.753 21:43:57 -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:21:36.753 21:43:57 -- common/autotest_common.sh@867 -- # local i 00:21:36.753 21:43:57 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:21:36.753 21:43:57 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:21:36.753 21:43:57 -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:21:36.753 21:43:57 -- common/autotest_common.sh@871 -- # break 00:21:36.753 21:43:57 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:21:36.753 21:43:57 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:21:36.753 21:43:57 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 
count=1 iflag=direct 00:21:36.753 1+0 records in 00:21:36.753 1+0 records out 00:21:36.753 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00057271 s, 7.2 MB/s 00:21:36.753 21:43:57 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:36.753 21:43:57 -- common/autotest_common.sh@884 -- # size=4096 00:21:36.753 21:43:57 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:36.753 21:43:57 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:21:36.753 21:43:57 -- common/autotest_common.sh@887 -- # return 0 00:21:36.753 21:43:57 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:21:36.753 21:43:57 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:21:36.753 21:43:57 -- bdev/bdev_raid.sh@681 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:21:36.753 21:43:57 -- bdev/bdev_raid.sh@682 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd1 00:21:36.753 21:43:57 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:21:36.753 21:43:57 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:21:36.753 21:43:57 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:21:36.753 21:43:57 -- bdev/nbd_common.sh@51 -- # local i 00:21:36.753 21:43:57 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:21:36.753 21:43:57 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:21:37.012 21:43:57 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:21:37.012 21:43:57 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:21:37.012 21:43:57 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:21:37.012 21:43:57 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:21:37.012 21:43:57 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:21:37.012 21:43:57 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:21:37.273 21:43:57 -- bdev/nbd_common.sh@41 -- # break 00:21:37.273 21:43:57 -- bdev/nbd_common.sh@45 -- # return 0 00:21:37.273 21:43:57 -- bdev/bdev_raid.sh@676 -- # for bdev in "${base_bdevs[@]:1}" 00:21:37.273 21:43:57 -- bdev/bdev_raid.sh@677 -- # '[' -z BaseBdev4 ']' 00:21:37.273 21:43:57 -- bdev/bdev_raid.sh@680 -- # nbd_start_disks /var/tmp/spdk-raid.sock BaseBdev4 /dev/nbd1 00:21:37.273 21:43:57 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:21:37.273 21:43:57 -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev4') 00:21:37.273 21:43:57 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:21:37.273 21:43:57 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:21:37.273 21:43:57 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:21:37.273 21:43:57 -- bdev/nbd_common.sh@12 -- # local i 00:21:37.273 21:43:57 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:21:37.273 21:43:57 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:21:37.273 21:43:57 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev4 /dev/nbd1 00:21:37.273 /dev/nbd1 00:21:37.273 21:43:57 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:21:37.273 21:43:57 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:21:37.273 21:43:57 -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:21:37.273 21:43:57 -- common/autotest_common.sh@867 -- # local i 00:21:37.273 21:43:57 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:21:37.273 21:43:57 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:21:37.273 21:43:57 -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:21:37.273 21:43:57 -- 
common/autotest_common.sh@871 -- # break 00:21:37.273 21:43:57 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:21:37.273 21:43:57 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:21:37.273 21:43:57 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:21:37.273 1+0 records in 00:21:37.273 1+0 records out 00:21:37.532 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000671299 s, 6.1 MB/s 00:21:37.533 21:43:57 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:37.533 21:43:57 -- common/autotest_common.sh@884 -- # size=4096 00:21:37.533 21:43:57 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:37.533 21:43:57 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:21:37.533 21:43:57 -- common/autotest_common.sh@887 -- # return 0 00:21:37.533 21:43:57 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:21:37.533 21:43:57 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:21:37.533 21:43:57 -- bdev/bdev_raid.sh@681 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:21:37.533 21:43:57 -- bdev/bdev_raid.sh@682 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd1 00:21:37.533 21:43:57 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:21:37.533 21:43:57 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:21:37.533 21:43:57 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:21:37.533 21:43:57 -- bdev/nbd_common.sh@51 -- # local i 00:21:37.533 21:43:57 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:21:37.533 21:43:57 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:21:37.792 21:43:58 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:21:37.792 21:43:58 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:21:37.792 21:43:58 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:21:37.792 21:43:58 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:21:37.792 21:43:58 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:21:37.792 21:43:58 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:21:37.792 21:43:58 -- bdev/nbd_common.sh@41 -- # break 00:21:37.792 21:43:58 -- bdev/nbd_common.sh@45 -- # return 0 00:21:37.792 21:43:58 -- bdev/bdev_raid.sh@684 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:21:37.792 21:43:58 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:21:37.792 21:43:58 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:21:37.792 21:43:58 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:21:37.792 21:43:58 -- bdev/nbd_common.sh@51 -- # local i 00:21:37.792 21:43:58 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:21:37.792 21:43:58 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:21:37.792 21:43:58 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:21:37.792 21:43:58 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:21:37.792 21:43:58 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:21:37.792 21:43:58 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:21:37.792 21:43:58 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:21:37.792 21:43:58 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:21:37.792 21:43:58 -- bdev/nbd_common.sh@41 -- # break 00:21:37.792 21:43:58 -- bdev/nbd_common.sh@45 -- # return 0 00:21:37.792 21:43:58 -- bdev/bdev_raid.sh@692 -- # '[' true = true ']' 
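Both comparisons above ran cmp with -i 1048576, skipping the first 1 MiB of each device. That offset is presumably derived from the superblock layout reported throughout the JSON dumps: a data_offset of 2048 blocks times the 512-byte block size is exactly 1,048,576 bytes of per-member metadata that legitimately differs between the rebuilt spare and its peers. Condensed, the integrity check for each survivor is just:

  # skip the superblock region (2048 blocks * 512 B = 1 MiB), after which
  # the rebuilt spare and the surviving base bdev must match byte for byte
  cmp -i 1048576 /dev/nbd0 /dev/nbd1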
00:21:37.792 21:43:58 -- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}" 00:21:37.792 21:43:58 -- bdev/bdev_raid.sh@695 -- # '[' -z BaseBdev1 ']' 00:21:37.792 21:43:58 -- bdev/bdev_raid.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev1 00:21:38.051 21:43:58 -- bdev/bdev_raid.sh@699 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:21:38.310 [2024-12-06 21:43:58.589114] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:21:38.310 [2024-12-06 21:43:58.589223] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:38.310 [2024-12-06 21:43:58.589257] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x51600000b780 00:21:38.310 [2024-12-06 21:43:58.589275] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:38.310 [2024-12-06 21:43:58.591745] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:38.310 [2024-12-06 21:43:58.591819] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:21:38.310 [2024-12-06 21:43:58.591913] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev BaseBdev1 00:21:38.310 [2024-12-06 21:43:58.591977] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:21:38.310 BaseBdev1 00:21:38.310 21:43:58 -- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}" 00:21:38.310 21:43:58 -- bdev/bdev_raid.sh@695 -- # '[' -z '' ']' 00:21:38.310 21:43:58 -- bdev/bdev_raid.sh@696 -- # continue 00:21:38.310 21:43:58 -- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}" 00:21:38.310 21:43:58 -- bdev/bdev_raid.sh@695 -- # '[' -z BaseBdev3 ']' 00:21:38.310 21:43:58 -- bdev/bdev_raid.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev3 00:21:38.310 21:43:58 -- bdev/bdev_raid.sh@699 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:21:38.570 [2024-12-06 21:43:58.945186] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:21:38.570 [2024-12-06 21:43:58.945249] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:38.570 [2024-12-06 21:43:58.945277] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x51600000c080 00:21:38.570 [2024-12-06 21:43:58.945294] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:38.570 [2024-12-06 21:43:58.945751] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:38.570 [2024-12-06 21:43:58.945835] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:21:38.570 [2024-12-06 21:43:58.945959] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev BaseBdev3 00:21:38.570 [2024-12-06 21:43:58.945986] bdev_raid.c:3237:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev3 (4) greater than existing raid bdev raid_bdev1 (1) 00:21:38.570 [2024-12-06 21:43:58.945998] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:21:38.570 [2024-12-06 21:43:58.946030] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x51600000bd80 name raid_bdev1, state configuring 
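After the NBD comparison the test drops all the way back to superblock discovery: deleting and re-creating each member's passthru vbdev forces bdev_raid's examine path to re-read the on-disk superblock and re-claim the bdev, and the tail of this chunk shows why that matters. BaseBdev3's superblock carries seq_number 4, newer than the raid bdev assembled so far (seq 1), so the stale raid_bdev1 is torn down and reassembled around the newer metadata. The per-member step, with the rpc.py invocation shortened into a variable but otherwise as traced:

  rpc='/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock'
  # re-registering the passthru triggers examine, which reads the raid
  # superblock from the backing malloc bdev and re-claims it for the array
  $rpc bdev_passthru_delete BaseBdev1
  $rpc bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1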
00:21:38.570 [2024-12-06 21:43:58.946094] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:21:38.570 BaseBdev3 00:21:38.570 21:43:58 -- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}" 00:21:38.570 21:43:58 -- bdev/bdev_raid.sh@695 -- # '[' -z BaseBdev4 ']' 00:21:38.570 21:43:58 -- bdev/bdev_raid.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev4 00:21:38.829 21:43:59 -- bdev/bdev_raid.sh@699 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:21:39.089 [2024-12-06 21:43:59.369317] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:21:39.089 [2024-12-06 21:43:59.369380] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:39.089 [2024-12-06 21:43:59.369413] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x51600000c680 00:21:39.089 [2024-12-06 21:43:59.369426] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:39.089 [2024-12-06 21:43:59.369948] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:39.089 [2024-12-06 21:43:59.369995] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:21:39.089 [2024-12-06 21:43:59.370116] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev BaseBdev4 00:21:39.089 [2024-12-06 21:43:59.370144] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:21:39.089 BaseBdev4 00:21:39.089 21:43:59 -- bdev/bdev_raid.sh@701 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:21:39.089 21:43:59 -- bdev/bdev_raid.sh@702 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:21:39.348 [2024-12-06 21:43:59.725471] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:21:39.348 [2024-12-06 21:43:59.725539] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:39.348 [2024-12-06 21:43:59.725570] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x51600000c980 00:21:39.348 [2024-12-06 21:43:59.725584] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:39.348 [2024-12-06 21:43:59.726023] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:39.348 [2024-12-06 21:43:59.726057] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:21:39.348 [2024-12-06 21:43:59.726129] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev spare 00:21:39.348 [2024-12-06 21:43:59.726157] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:21:39.348 spare 00:21:39.348 21:43:59 -- bdev/bdev_raid.sh@704 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:21:39.348 21:43:59 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:21:39.348 21:43:59 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:21:39.348 21:43:59 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:21:39.348 21:43:59 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:21:39.348 21:43:59 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:21:39.348 21:43:59 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 
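With BaseBdev4 and the spare re-registered (the spare's passthru sits on a bdev named spare_delay, presumably a delay bdev used to pace rebuild I/O), the trace enters verify_raid_bdev_state raid_bdev1 online raid1 0 3; its local declarations are traced here and its RPC/jq body follows below. Stripped of xtrace noise, the helper reduces to roughly this simplified sketch (the real one also cross-checks strip_size_kb and the discovered count against base_bdevs_list):

  verify_raid_bdev_state() {   # args: name, expected state, level, strip kb, operational
      local name=$1 expected_state=$2 raid_level=$3 strip_size=$4 operational=$5
      local info
      info=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock \
                 bdev_raid_get_bdevs all | jq -r ".[] | select(.name == \"$name\")")
      [[ $(jq -r '.state' <<<"$info") == "$expected_state" ]] &&
          [[ $(jq -r '.raid_level' <<<"$info") == "$raid_level" ]] &&
          [[ $(jq -r '.num_base_bdevs_operational' <<<"$info") -eq $operational ]]
  }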
00:21:39.348 21:43:59 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:21:39.348 21:43:59 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:21:39.348 21:43:59 -- bdev/bdev_raid.sh@125 -- # local tmp 00:21:39.348 21:43:59 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:39.348 21:43:59 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:39.348 [2024-12-06 21:43:59.826344] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x51600000c380 00:21:39.348 [2024-12-06 21:43:59.826373] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:21:39.348 [2024-12-06 21:43:59.826541] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000036870 00:21:39.348 [2024-12-06 21:43:59.826992] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x51600000c380 00:21:39.349 [2024-12-06 21:43:59.827021] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x51600000c380 00:21:39.349 [2024-12-06 21:43:59.827188] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:39.608 21:43:59 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:21:39.608 "name": "raid_bdev1", 00:21:39.608 "uuid": "59d6e619-1ba7-4e8b-9008-11c1c39400b9", 00:21:39.608 "strip_size_kb": 0, 00:21:39.608 "state": "online", 00:21:39.608 "raid_level": "raid1", 00:21:39.608 "superblock": true, 00:21:39.608 "num_base_bdevs": 4, 00:21:39.608 "num_base_bdevs_discovered": 3, 00:21:39.608 "num_base_bdevs_operational": 3, 00:21:39.608 "base_bdevs_list": [ 00:21:39.608 { 00:21:39.608 "name": "spare", 00:21:39.608 "uuid": "5deca72f-2755-50a5-9472-de68b3c95985", 00:21:39.608 "is_configured": true, 00:21:39.608 "data_offset": 2048, 00:21:39.608 "data_size": 63488 00:21:39.608 }, 00:21:39.608 { 00:21:39.608 "name": null, 00:21:39.608 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:39.608 "is_configured": false, 00:21:39.608 "data_offset": 2048, 00:21:39.608 "data_size": 63488 00:21:39.608 }, 00:21:39.608 { 00:21:39.608 "name": "BaseBdev3", 00:21:39.608 "uuid": "b94d34c5-01bd-5412-8ff9-8debaac93909", 00:21:39.608 "is_configured": true, 00:21:39.608 "data_offset": 2048, 00:21:39.608 "data_size": 63488 00:21:39.608 }, 00:21:39.608 { 00:21:39.608 "name": "BaseBdev4", 00:21:39.608 "uuid": "38e15753-868b-53f7-94fa-a1168d800e72", 00:21:39.608 "is_configured": true, 00:21:39.608 "data_offset": 2048, 00:21:39.608 "data_size": 63488 00:21:39.608 } 00:21:39.608 ] 00:21:39.608 }' 00:21:39.608 21:43:59 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:21:39.608 21:43:59 -- common/autotest_common.sh@10 -- # set +x 00:21:39.867 21:44:00 -- bdev/bdev_raid.sh@705 -- # verify_raid_bdev_process raid_bdev1 none none 00:21:39.867 21:44:00 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:21:39.867 21:44:00 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:21:39.867 21:44:00 -- bdev/bdev_raid.sh@185 -- # local target=none 00:21:39.867 21:44:00 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:21:39.867 21:44:00 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:39.867 21:44:00 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:40.126 21:44:00 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:21:40.126 "name": "raid_bdev1", 00:21:40.126 "uuid": 
"59d6e619-1ba7-4e8b-9008-11c1c39400b9", 00:21:40.126 "strip_size_kb": 0, 00:21:40.126 "state": "online", 00:21:40.126 "raid_level": "raid1", 00:21:40.126 "superblock": true, 00:21:40.126 "num_base_bdevs": 4, 00:21:40.126 "num_base_bdevs_discovered": 3, 00:21:40.126 "num_base_bdevs_operational": 3, 00:21:40.126 "base_bdevs_list": [ 00:21:40.126 { 00:21:40.126 "name": "spare", 00:21:40.126 "uuid": "5deca72f-2755-50a5-9472-de68b3c95985", 00:21:40.126 "is_configured": true, 00:21:40.126 "data_offset": 2048, 00:21:40.126 "data_size": 63488 00:21:40.126 }, 00:21:40.126 { 00:21:40.126 "name": null, 00:21:40.126 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:40.126 "is_configured": false, 00:21:40.126 "data_offset": 2048, 00:21:40.126 "data_size": 63488 00:21:40.126 }, 00:21:40.126 { 00:21:40.126 "name": "BaseBdev3", 00:21:40.127 "uuid": "b94d34c5-01bd-5412-8ff9-8debaac93909", 00:21:40.127 "is_configured": true, 00:21:40.127 "data_offset": 2048, 00:21:40.127 "data_size": 63488 00:21:40.127 }, 00:21:40.127 { 00:21:40.127 "name": "BaseBdev4", 00:21:40.127 "uuid": "38e15753-868b-53f7-94fa-a1168d800e72", 00:21:40.127 "is_configured": true, 00:21:40.127 "data_offset": 2048, 00:21:40.127 "data_size": 63488 00:21:40.127 } 00:21:40.127 ] 00:21:40.127 }' 00:21:40.127 21:44:00 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:21:40.127 21:44:00 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:21:40.127 21:44:00 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:21:40.127 21:44:00 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:21:40.127 21:44:00 -- bdev/bdev_raid.sh@706 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:40.127 21:44:00 -- bdev/bdev_raid.sh@706 -- # jq -r '.[].base_bdevs_list[0].name' 00:21:40.386 21:44:00 -- bdev/bdev_raid.sh@706 -- # [[ spare == \s\p\a\r\e ]] 00:21:40.386 21:44:00 -- bdev/bdev_raid.sh@709 -- # killprocess 81585 00:21:40.386 21:44:00 -- common/autotest_common.sh@936 -- # '[' -z 81585 ']' 00:21:40.386 21:44:00 -- common/autotest_common.sh@940 -- # kill -0 81585 00:21:40.386 21:44:00 -- common/autotest_common.sh@941 -- # uname 00:21:40.386 21:44:00 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:21:40.386 21:44:00 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 81585 00:21:40.386 killing process with pid 81585 00:21:40.386 Received shutdown signal, test time was about 13.606354 seconds 00:21:40.386 00:21:40.386 Latency(us) 00:21:40.386 [2024-12-06T21:44:00.883Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:40.386 [2024-12-06T21:44:00.883Z] =================================================================================================================== 00:21:40.386 [2024-12-06T21:44:00.883Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:40.386 21:44:00 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:21:40.386 21:44:00 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:21:40.386 21:44:00 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 81585' 00:21:40.386 21:44:00 -- common/autotest_common.sh@955 -- # kill 81585 00:21:40.386 21:44:00 -- common/autotest_common.sh@960 -- # wait 81585 00:21:40.386 [2024-12-06 21:44:00.766030] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:21:40.386 [2024-12-06 21:44:00.766127] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:40.386 [2024-12-06 21:44:00.766267] bdev_raid.c: 
426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:40.386 [2024-12-06 21:44:00.766289] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x51600000c380 name raid_bdev1, state offline 00:21:40.645 [2024-12-06 21:44:01.046609] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:21:41.581 21:44:01 -- bdev/bdev_raid.sh@711 -- # return 0 00:21:41.581 00:21:41.581 real 0m19.380s 00:21:41.581 user 0m29.368s 00:21:41.581 sys 0m2.599s 00:21:41.581 ************************************ 00:21:41.581 END TEST raid_rebuild_test_sb_io 00:21:41.581 ************************************ 00:21:41.581 21:44:01 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:21:41.581 21:44:01 -- common/autotest_common.sh@10 -- # set +x 00:21:41.581 21:44:02 -- bdev/bdev_raid.sh@742 -- # '[' y == y ']' 00:21:41.581 21:44:02 -- bdev/bdev_raid.sh@743 -- # for n in {3..4} 00:21:41.581 21:44:02 -- bdev/bdev_raid.sh@744 -- # run_test raid5f_state_function_test raid_state_function_test raid5f 3 false 00:21:41.581 21:44:02 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:21:41.581 21:44:02 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:21:41.581 21:44:02 -- common/autotest_common.sh@10 -- # set +x 00:21:41.581 ************************************ 00:21:41.581 START TEST raid5f_state_function_test 00:21:41.581 ************************************ 00:21:41.581 21:44:02 -- common/autotest_common.sh@1114 -- # raid_state_function_test raid5f 3 false 00:21:41.581 21:44:02 -- bdev/bdev_raid.sh@202 -- # local raid_level=raid5f 00:21:41.581 21:44:02 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=3 00:21:41.581 21:44:02 -- bdev/bdev_raid.sh@204 -- # local superblock=false 00:21:41.581 21:44:02 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:21:41.581 21:44:02 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:21:41.581 21:44:02 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:21:41.581 21:44:02 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev1 00:21:41.581 21:44:02 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:21:41.581 21:44:02 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:21:41.581 21:44:02 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev2 00:21:41.581 21:44:02 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:21:41.581 21:44:02 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:21:41.581 21:44:02 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev3 00:21:41.581 21:44:02 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:21:41.581 21:44:02 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:21:41.581 21:44:02 -- bdev/bdev_raid.sh@206 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:21:41.581 21:44:02 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:21:41.581 21:44:02 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:21:41.581 21:44:02 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:21:41.581 21:44:02 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:21:41.581 21:44:02 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:21:41.581 21:44:02 -- bdev/bdev_raid.sh@212 -- # '[' raid5f '!=' raid1 ']' 00:21:41.581 21:44:02 -- bdev/bdev_raid.sh@213 -- # strip_size=64 00:21:41.581 21:44:02 -- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64' 00:21:41.581 21:44:02 -- bdev/bdev_raid.sh@219 -- # '[' false = true ']' 00:21:41.581 21:44:02 -- bdev/bdev_raid.sh@222 -- # superblock_create_arg= 00:21:41.581 21:44:02 -- bdev/bdev_raid.sh@226 -- # raid_pid=82128 00:21:41.581 Process raid pid: 82128 
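That closes raid_rebuild_test_sb_io; run_test immediately launches raid5f_state_function_test, which exercises raid5f state transitions (configuring, online, offline) through RPC alone. The prologue traced at the end of this chunk starts a dedicated bdev_svc app and blocks until its RPC socket answers; in script form the startup is roughly the following, where the backgrounding and the $! capture are implied by the trace rather than shown in it:

  /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc \
      -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid &
  raid_pid=$!                                  # 82128 in this run
  waitforlisten "$raid_pid" /var/tmp/spdk-raid.sock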
00:21:41.581 21:44:02 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 82128' 00:21:41.581 21:44:02 -- bdev/bdev_raid.sh@228 -- # waitforlisten 82128 /var/tmp/spdk-raid.sock 00:21:41.581 21:44:02 -- common/autotest_common.sh@829 -- # '[' -z 82128 ']' 00:21:41.581 21:44:02 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:21:41.581 21:44:02 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:21:41.581 21:44:02 -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:41.581 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:21:41.581 21:44:02 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:21:41.581 21:44:02 -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:41.581 21:44:02 -- common/autotest_common.sh@10 -- # set +x 00:21:41.839 [2024-12-06 21:44:02.115995] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:21:41.839 [2024-12-06 21:44:02.116148] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:41.839 [2024-12-06 21:44:02.281212] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:42.097 [2024-12-06 21:44:02.429378] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:42.097 [2024-12-06 21:44:02.572951] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:42.665 21:44:02 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:42.665 21:44:02 -- common/autotest_common.sh@862 -- # return 0 00:21:42.665 21:44:02 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:21:42.924 [2024-12-06 21:44:03.208274] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:21:42.924 [2024-12-06 21:44:03.208356] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:21:42.924 [2024-12-06 21:44:03.208370] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:21:42.924 [2024-12-06 21:44:03.208384] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:21:42.924 [2024-12-06 21:44:03.208392] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:21:42.924 [2024-12-06 21:44:03.208403] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:21:42.924 21:44:03 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:21:42.924 21:44:03 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:21:42.924 21:44:03 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:21:42.924 21:44:03 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:21:42.924 21:44:03 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:21:42.924 21:44:03 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:21:42.924 21:44:03 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:21:42.924 21:44:03 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:21:42.924 21:44:03 -- 
bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:21:42.924 21:44:03 -- bdev/bdev_raid.sh@125 -- # local tmp 00:21:42.924 21:44:03 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:42.924 21:44:03 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:43.183 21:44:03 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:21:43.183 "name": "Existed_Raid", 00:21:43.183 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:43.183 "strip_size_kb": 64, 00:21:43.183 "state": "configuring", 00:21:43.183 "raid_level": "raid5f", 00:21:43.183 "superblock": false, 00:21:43.183 "num_base_bdevs": 3, 00:21:43.183 "num_base_bdevs_discovered": 0, 00:21:43.183 "num_base_bdevs_operational": 3, 00:21:43.183 "base_bdevs_list": [ 00:21:43.183 { 00:21:43.183 "name": "BaseBdev1", 00:21:43.183 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:43.183 "is_configured": false, 00:21:43.183 "data_offset": 0, 00:21:43.183 "data_size": 0 00:21:43.183 }, 00:21:43.183 { 00:21:43.183 "name": "BaseBdev2", 00:21:43.183 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:43.183 "is_configured": false, 00:21:43.183 "data_offset": 0, 00:21:43.183 "data_size": 0 00:21:43.183 }, 00:21:43.183 { 00:21:43.183 "name": "BaseBdev3", 00:21:43.183 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:43.183 "is_configured": false, 00:21:43.183 "data_offset": 0, 00:21:43.183 "data_size": 0 00:21:43.183 } 00:21:43.183 ] 00:21:43.183 }' 00:21:43.183 21:44:03 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:21:43.183 21:44:03 -- common/autotest_common.sh@10 -- # set +x 00:21:43.442 21:44:03 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:21:43.442 [2024-12-06 21:44:03.928383] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:21:43.442 [2024-12-06 21:44:03.928444] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000006380 name Existed_Raid, state configuring 00:21:43.702 21:44:03 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:21:43.702 [2024-12-06 21:44:04.112549] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:21:43.702 [2024-12-06 21:44:04.112627] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:21:43.702 [2024-12-06 21:44:04.112639] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:21:43.702 [2024-12-06 21:44:04.112655] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:21:43.702 [2024-12-06 21:44:04.112663] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:21:43.702 [2024-12-06 21:44:04.112675] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:21:43.702 21:44:04 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:21:43.962 [2024-12-06 21:44:04.305184] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:21:43.962 BaseBdev1 00:21:43.962 21:44:04 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:21:43.962 21:44:04 -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 
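The first check of the new test has just run: bdev_raid_create is allowed to name base bdevs that do not exist yet ("base bdev BaseBdevN doesn't exist now"), in which case the raid bdev is registered in the "configuring" state and only transitions to "online" once every member appears; the JSON dump that follows shows num_base_bdevs_discovered still at 0. Condensed:

  rpc='/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock'
  # declare a raid5f array (64 KiB strips) over three not-yet-created bdevs
  $rpc bdev_raid_create -z 64 -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3' \
      -n Existed_Raid
  $rpc bdev_raid_get_bdevs all |
      jq -r '.[] | select(.name == "Existed_Raid") | .state'   # -> configuring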
00:21:43.962 21:44:04 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:21:43.962 21:44:04 -- common/autotest_common.sh@899 -- # local i 00:21:43.962 21:44:04 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:21:43.962 21:44:04 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:21:43.962 21:44:04 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:21:44.221 21:44:04 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:21:44.221 [ 00:21:44.221 { 00:21:44.221 "name": "BaseBdev1", 00:21:44.221 "aliases": [ 00:21:44.221 "41fd8db8-2622-4268-9053-5da6f4ffb42f" 00:21:44.221 ], 00:21:44.221 "product_name": "Malloc disk", 00:21:44.221 "block_size": 512, 00:21:44.221 "num_blocks": 65536, 00:21:44.221 "uuid": "41fd8db8-2622-4268-9053-5da6f4ffb42f", 00:21:44.221 "assigned_rate_limits": { 00:21:44.221 "rw_ios_per_sec": 0, 00:21:44.221 "rw_mbytes_per_sec": 0, 00:21:44.221 "r_mbytes_per_sec": 0, 00:21:44.221 "w_mbytes_per_sec": 0 00:21:44.221 }, 00:21:44.221 "claimed": true, 00:21:44.221 "claim_type": "exclusive_write", 00:21:44.221 "zoned": false, 00:21:44.221 "supported_io_types": { 00:21:44.221 "read": true, 00:21:44.221 "write": true, 00:21:44.221 "unmap": true, 00:21:44.221 "write_zeroes": true, 00:21:44.221 "flush": true, 00:21:44.221 "reset": true, 00:21:44.221 "compare": false, 00:21:44.221 "compare_and_write": false, 00:21:44.221 "abort": true, 00:21:44.221 "nvme_admin": false, 00:21:44.221 "nvme_io": false 00:21:44.221 }, 00:21:44.221 "memory_domains": [ 00:21:44.221 { 00:21:44.221 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:44.221 "dma_device_type": 2 00:21:44.221 } 00:21:44.221 ], 00:21:44.221 "driver_specific": {} 00:21:44.221 } 00:21:44.221 ] 00:21:44.221 21:44:04 -- common/autotest_common.sh@905 -- # return 0 00:21:44.221 21:44:04 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:21:44.221 21:44:04 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:21:44.221 21:44:04 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:21:44.221 21:44:04 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:21:44.221 21:44:04 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:21:44.221 21:44:04 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:21:44.221 21:44:04 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:21:44.221 21:44:04 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:21:44.221 21:44:04 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:21:44.221 21:44:04 -- bdev/bdev_raid.sh@125 -- # local tmp 00:21:44.221 21:44:04 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:44.221 21:44:04 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:44.480 21:44:04 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:21:44.480 "name": "Existed_Raid", 00:21:44.481 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:44.481 "strip_size_kb": 64, 00:21:44.481 "state": "configuring", 00:21:44.481 "raid_level": "raid5f", 00:21:44.481 "superblock": false, 00:21:44.481 "num_base_bdevs": 3, 00:21:44.481 "num_base_bdevs_discovered": 1, 00:21:44.481 "num_base_bdevs_operational": 3, 00:21:44.481 "base_bdevs_list": [ 00:21:44.481 { 00:21:44.481 "name": "BaseBdev1", 00:21:44.481 "uuid": 
"41fd8db8-2622-4268-9053-5da6f4ffb42f", 00:21:44.481 "is_configured": true, 00:21:44.481 "data_offset": 0, 00:21:44.481 "data_size": 65536 00:21:44.481 }, 00:21:44.481 { 00:21:44.481 "name": "BaseBdev2", 00:21:44.481 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:44.481 "is_configured": false, 00:21:44.481 "data_offset": 0, 00:21:44.481 "data_size": 0 00:21:44.481 }, 00:21:44.481 { 00:21:44.481 "name": "BaseBdev3", 00:21:44.481 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:44.481 "is_configured": false, 00:21:44.481 "data_offset": 0, 00:21:44.481 "data_size": 0 00:21:44.481 } 00:21:44.481 ] 00:21:44.481 }' 00:21:44.481 21:44:04 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:21:44.481 21:44:04 -- common/autotest_common.sh@10 -- # set +x 00:21:44.740 21:44:05 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:21:44.998 [2024-12-06 21:44:05.393449] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:21:44.998 [2024-12-06 21:44:05.393510] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000006680 name Existed_Raid, state configuring 00:21:44.998 21:44:05 -- bdev/bdev_raid.sh@244 -- # '[' false = true ']' 00:21:44.998 21:44:05 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:21:45.255 [2024-12-06 21:44:05.573535] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:21:45.255 [2024-12-06 21:44:05.575277] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:21:45.255 [2024-12-06 21:44:05.575339] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:21:45.255 [2024-12-06 21:44:05.575352] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:21:45.255 [2024-12-06 21:44:05.575366] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:21:45.255 21:44:05 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:21:45.255 21:44:05 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:21:45.255 21:44:05 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:21:45.255 21:44:05 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:21:45.255 21:44:05 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:21:45.255 21:44:05 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:21:45.255 21:44:05 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:21:45.255 21:44:05 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:21:45.255 21:44:05 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:21:45.255 21:44:05 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:21:45.255 21:44:05 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:21:45.255 21:44:05 -- bdev/bdev_raid.sh@125 -- # local tmp 00:21:45.255 21:44:05 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:45.255 21:44:05 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:45.513 21:44:05 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:21:45.513 "name": "Existed_Raid", 00:21:45.513 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:45.513 "strip_size_kb": 64, 00:21:45.513 "state": "configuring", 
00:21:45.513 "raid_level": "raid5f", 00:21:45.513 "superblock": false, 00:21:45.513 "num_base_bdevs": 3, 00:21:45.513 "num_base_bdevs_discovered": 1, 00:21:45.513 "num_base_bdevs_operational": 3, 00:21:45.513 "base_bdevs_list": [ 00:21:45.513 { 00:21:45.513 "name": "BaseBdev1", 00:21:45.513 "uuid": "41fd8db8-2622-4268-9053-5da6f4ffb42f", 00:21:45.513 "is_configured": true, 00:21:45.513 "data_offset": 0, 00:21:45.513 "data_size": 65536 00:21:45.513 }, 00:21:45.513 { 00:21:45.513 "name": "BaseBdev2", 00:21:45.513 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:45.513 "is_configured": false, 00:21:45.513 "data_offset": 0, 00:21:45.513 "data_size": 0 00:21:45.513 }, 00:21:45.513 { 00:21:45.513 "name": "BaseBdev3", 00:21:45.513 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:45.513 "is_configured": false, 00:21:45.513 "data_offset": 0, 00:21:45.513 "data_size": 0 00:21:45.513 } 00:21:45.513 ] 00:21:45.513 }' 00:21:45.513 21:44:05 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:21:45.513 21:44:05 -- common/autotest_common.sh@10 -- # set +x 00:21:45.771 21:44:06 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:21:46.029 [2024-12-06 21:44:06.357820] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:21:46.029 BaseBdev2 00:21:46.029 21:44:06 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:21:46.029 21:44:06 -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:21:46.029 21:44:06 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:21:46.029 21:44:06 -- common/autotest_common.sh@899 -- # local i 00:21:46.029 21:44:06 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:21:46.029 21:44:06 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:21:46.029 21:44:06 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:21:46.287 21:44:06 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:21:46.544 [ 00:21:46.544 { 00:21:46.544 "name": "BaseBdev2", 00:21:46.544 "aliases": [ 00:21:46.544 "6880383f-2797-4ab2-b7a3-9c07cac8c713" 00:21:46.544 ], 00:21:46.544 "product_name": "Malloc disk", 00:21:46.544 "block_size": 512, 00:21:46.544 "num_blocks": 65536, 00:21:46.544 "uuid": "6880383f-2797-4ab2-b7a3-9c07cac8c713", 00:21:46.544 "assigned_rate_limits": { 00:21:46.544 "rw_ios_per_sec": 0, 00:21:46.544 "rw_mbytes_per_sec": 0, 00:21:46.544 "r_mbytes_per_sec": 0, 00:21:46.544 "w_mbytes_per_sec": 0 00:21:46.544 }, 00:21:46.544 "claimed": true, 00:21:46.544 "claim_type": "exclusive_write", 00:21:46.544 "zoned": false, 00:21:46.544 "supported_io_types": { 00:21:46.544 "read": true, 00:21:46.544 "write": true, 00:21:46.544 "unmap": true, 00:21:46.544 "write_zeroes": true, 00:21:46.544 "flush": true, 00:21:46.544 "reset": true, 00:21:46.544 "compare": false, 00:21:46.544 "compare_and_write": false, 00:21:46.544 "abort": true, 00:21:46.544 "nvme_admin": false, 00:21:46.544 "nvme_io": false 00:21:46.544 }, 00:21:46.544 "memory_domains": [ 00:21:46.544 { 00:21:46.544 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:46.544 "dma_device_type": 2 00:21:46.544 } 00:21:46.544 ], 00:21:46.544 "driver_specific": {} 00:21:46.544 } 00:21:46.544 ] 00:21:46.544 21:44:06 -- common/autotest_common.sh@905 -- # return 0 00:21:46.544 21:44:06 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:21:46.544 
21:44:06 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:21:46.545 21:44:06 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:21:46.545 21:44:06 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:21:46.545 21:44:06 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:21:46.545 21:44:06 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:21:46.545 21:44:06 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:21:46.545 21:44:06 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:21:46.545 21:44:06 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:21:46.545 21:44:06 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:21:46.545 21:44:06 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:21:46.545 21:44:06 -- bdev/bdev_raid.sh@125 -- # local tmp 00:21:46.545 21:44:06 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:46.545 21:44:06 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:46.803 21:44:07 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:21:46.803 "name": "Existed_Raid", 00:21:46.803 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:46.803 "strip_size_kb": 64, 00:21:46.803 "state": "configuring", 00:21:46.803 "raid_level": "raid5f", 00:21:46.803 "superblock": false, 00:21:46.803 "num_base_bdevs": 3, 00:21:46.803 "num_base_bdevs_discovered": 2, 00:21:46.803 "num_base_bdevs_operational": 3, 00:21:46.803 "base_bdevs_list": [ 00:21:46.803 { 00:21:46.803 "name": "BaseBdev1", 00:21:46.803 "uuid": "41fd8db8-2622-4268-9053-5da6f4ffb42f", 00:21:46.803 "is_configured": true, 00:21:46.803 "data_offset": 0, 00:21:46.803 "data_size": 65536 00:21:46.803 }, 00:21:46.803 { 00:21:46.803 "name": "BaseBdev2", 00:21:46.803 "uuid": "6880383f-2797-4ab2-b7a3-9c07cac8c713", 00:21:46.803 "is_configured": true, 00:21:46.803 "data_offset": 0, 00:21:46.803 "data_size": 65536 00:21:46.803 }, 00:21:46.803 { 00:21:46.803 "name": "BaseBdev3", 00:21:46.803 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:46.803 "is_configured": false, 00:21:46.803 "data_offset": 0, 00:21:46.803 "data_size": 0 00:21:46.803 } 00:21:46.803 ] 00:21:46.803 }' 00:21:46.803 21:44:07 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:21:46.803 21:44:07 -- common/autotest_common.sh@10 -- # set +x 00:21:47.061 21:44:07 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:21:47.061 [2024-12-06 21:44:07.549062] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:21:47.061 [2024-12-06 21:44:07.549141] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x516000006f80 00:21:47.061 [2024-12-06 21:44:07.549156] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:21:47.061 [2024-12-06 21:44:07.549250] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d0000056c0 00:21:47.061 [2024-12-06 21:44:07.553710] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x516000006f80 00:21:47.061 [2024-12-06 21:44:07.553736] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x516000006f80 00:21:47.062 [2024-12-06 21:44:07.554074] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:47.062 BaseBdev3 00:21:47.320 21:44:07 -- 
bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3 00:21:47.320 21:44:07 -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:21:47.320 21:44:07 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:21:47.320 21:44:07 -- common/autotest_common.sh@899 -- # local i 00:21:47.320 21:44:07 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:21:47.320 21:44:07 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:21:47.320 21:44:07 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:21:47.320 21:44:07 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:21:47.578 [ 00:21:47.578 { 00:21:47.578 "name": "BaseBdev3", 00:21:47.578 "aliases": [ 00:21:47.578 "5a43c78e-00de-404b-a204-a696aaee607d" 00:21:47.578 ], 00:21:47.578 "product_name": "Malloc disk", 00:21:47.578 "block_size": 512, 00:21:47.578 "num_blocks": 65536, 00:21:47.578 "uuid": "5a43c78e-00de-404b-a204-a696aaee607d", 00:21:47.578 "assigned_rate_limits": { 00:21:47.578 "rw_ios_per_sec": 0, 00:21:47.578 "rw_mbytes_per_sec": 0, 00:21:47.578 "r_mbytes_per_sec": 0, 00:21:47.578 "w_mbytes_per_sec": 0 00:21:47.578 }, 00:21:47.578 "claimed": true, 00:21:47.578 "claim_type": "exclusive_write", 00:21:47.578 "zoned": false, 00:21:47.578 "supported_io_types": { 00:21:47.578 "read": true, 00:21:47.578 "write": true, 00:21:47.578 "unmap": true, 00:21:47.578 "write_zeroes": true, 00:21:47.578 "flush": true, 00:21:47.578 "reset": true, 00:21:47.578 "compare": false, 00:21:47.578 "compare_and_write": false, 00:21:47.578 "abort": true, 00:21:47.578 "nvme_admin": false, 00:21:47.578 "nvme_io": false 00:21:47.578 }, 00:21:47.578 "memory_domains": [ 00:21:47.578 { 00:21:47.578 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:47.578 "dma_device_type": 2 00:21:47.578 } 00:21:47.578 ], 00:21:47.578 "driver_specific": {} 00:21:47.578 } 00:21:47.578 ] 00:21:47.578 21:44:07 -- common/autotest_common.sh@905 -- # return 0 00:21:47.578 21:44:07 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:21:47.578 21:44:07 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:21:47.578 21:44:07 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:21:47.578 21:44:07 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:21:47.578 21:44:07 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:21:47.578 21:44:07 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:21:47.578 21:44:07 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:21:47.578 21:44:07 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:21:47.578 21:44:07 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:21:47.578 21:44:07 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:21:47.578 21:44:07 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:21:47.578 21:44:07 -- bdev/bdev_raid.sh@125 -- # local tmp 00:21:47.578 21:44:07 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:47.578 21:44:07 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:47.837 21:44:08 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:21:47.837 "name": "Existed_Raid", 00:21:47.837 "uuid": "dec83ccb-7ea9-4023-86c3-551c3caf76ae", 00:21:47.837 "strip_size_kb": 64, 00:21:47.837 "state": "online", 00:21:47.837 "raid_level": "raid5f", 00:21:47.837 
"superblock": false, 00:21:47.837 "num_base_bdevs": 3, 00:21:47.837 "num_base_bdevs_discovered": 3, 00:21:47.837 "num_base_bdevs_operational": 3, 00:21:47.837 "base_bdevs_list": [ 00:21:47.837 { 00:21:47.837 "name": "BaseBdev1", 00:21:47.837 "uuid": "41fd8db8-2622-4268-9053-5da6f4ffb42f", 00:21:47.837 "is_configured": true, 00:21:47.837 "data_offset": 0, 00:21:47.837 "data_size": 65536 00:21:47.837 }, 00:21:47.837 { 00:21:47.837 "name": "BaseBdev2", 00:21:47.837 "uuid": "6880383f-2797-4ab2-b7a3-9c07cac8c713", 00:21:47.837 "is_configured": true, 00:21:47.837 "data_offset": 0, 00:21:47.837 "data_size": 65536 00:21:47.837 }, 00:21:47.837 { 00:21:47.837 "name": "BaseBdev3", 00:21:47.837 "uuid": "5a43c78e-00de-404b-a204-a696aaee607d", 00:21:47.837 "is_configured": true, 00:21:47.837 "data_offset": 0, 00:21:47.837 "data_size": 65536 00:21:47.837 } 00:21:47.837 ] 00:21:47.837 }' 00:21:47.837 21:44:08 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:21:47.837 21:44:08 -- common/autotest_common.sh@10 -- # set +x 00:21:48.095 21:44:08 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:21:48.354 [2024-12-06 21:44:08.687262] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:21:48.354 21:44:08 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:21:48.355 21:44:08 -- bdev/bdev_raid.sh@264 -- # has_redundancy raid5f 00:21:48.355 21:44:08 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:21:48.355 21:44:08 -- bdev/bdev_raid.sh@196 -- # return 0 00:21:48.355 21:44:08 -- bdev/bdev_raid.sh@267 -- # expected_state=online 00:21:48.355 21:44:08 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 2 00:21:48.355 21:44:08 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:21:48.355 21:44:08 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:21:48.355 21:44:08 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:21:48.355 21:44:08 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:21:48.355 21:44:08 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:21:48.355 21:44:08 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:21:48.355 21:44:08 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:21:48.355 21:44:08 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:21:48.355 21:44:08 -- bdev/bdev_raid.sh@125 -- # local tmp 00:21:48.355 21:44:08 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:48.355 21:44:08 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:48.612 21:44:08 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:21:48.612 "name": "Existed_Raid", 00:21:48.612 "uuid": "dec83ccb-7ea9-4023-86c3-551c3caf76ae", 00:21:48.612 "strip_size_kb": 64, 00:21:48.612 "state": "online", 00:21:48.612 "raid_level": "raid5f", 00:21:48.612 "superblock": false, 00:21:48.612 "num_base_bdevs": 3, 00:21:48.612 "num_base_bdevs_discovered": 2, 00:21:48.612 "num_base_bdevs_operational": 2, 00:21:48.612 "base_bdevs_list": [ 00:21:48.612 { 00:21:48.612 "name": null, 00:21:48.612 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:48.612 "is_configured": false, 00:21:48.612 "data_offset": 0, 00:21:48.613 "data_size": 65536 00:21:48.613 }, 00:21:48.613 { 00:21:48.613 "name": "BaseBdev2", 00:21:48.613 "uuid": "6880383f-2797-4ab2-b7a3-9c07cac8c713", 00:21:48.613 "is_configured": true, 00:21:48.613 "data_offset": 0, 00:21:48.613 "data_size": 
65536 00:21:48.613 }, 00:21:48.613 { 00:21:48.613 "name": "BaseBdev3", 00:21:48.613 "uuid": "5a43c78e-00de-404b-a204-a696aaee607d", 00:21:48.613 "is_configured": true, 00:21:48.613 "data_offset": 0, 00:21:48.613 "data_size": 65536 00:21:48.613 } 00:21:48.613 ] 00:21:48.613 }' 00:21:48.613 21:44:08 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:21:48.613 21:44:08 -- common/autotest_common.sh@10 -- # set +x 00:21:48.870 21:44:09 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:21:48.870 21:44:09 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:21:48.870 21:44:09 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:48.870 21:44:09 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:21:49.127 21:44:09 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:21:49.127 21:44:09 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:21:49.127 21:44:09 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:21:49.386 [2024-12-06 21:44:09.704940] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:21:49.386 [2024-12-06 21:44:09.704970] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:21:49.386 [2024-12-06 21:44:09.705022] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:49.386 21:44:09 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:21:49.386 21:44:09 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:21:49.386 21:44:09 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:49.386 21:44:09 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:21:49.644 21:44:10 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:21:49.644 21:44:10 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:21:49.644 21:44:10 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:21:49.903 [2024-12-06 21:44:10.191122] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:21:49.903 [2024-12-06 21:44:10.191181] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000006f80 name Existed_Raid, state offline 00:21:49.903 21:44:10 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:21:49.903 21:44:10 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:21:49.903 21:44:10 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:49.903 21:44:10 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:21:50.162 21:44:10 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:21:50.162 21:44:10 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:21:50.162 21:44:10 -- bdev/bdev_raid.sh@287 -- # killprocess 82128 00:21:50.162 21:44:10 -- common/autotest_common.sh@936 -- # '[' -z 82128 ']' 00:21:50.162 21:44:10 -- common/autotest_common.sh@940 -- # kill -0 82128 00:21:50.162 21:44:10 -- common/autotest_common.sh@941 -- # uname 00:21:50.162 21:44:10 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:21:50.162 21:44:10 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 82128 00:21:50.162 killing process with pid 82128 00:21:50.162 21:44:10 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:21:50.162 21:44:10 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 
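The checks above are the verify_raid_bdev_state pattern used throughout this test: dump every raid bdev over the test socket, pick the array by name with jq, and compare fields such as "state" and "num_base_bdevs_operational" against the expected values. A condensed sketch of the same flow, assuming a bdev_svc app is already listening on /var/tmp/spdk-raid.sock and an online three-disk raid5f named Existed_Raid exists (the check_raid_state helper name is illustrative, not part of the suite):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk-raid.sock

    check_raid_state() {
        # $1 = raid bdev name, $2 = expected state (configuring/online/offline)
        local info
        info=$("$rpc" -s "$sock" bdev_raid_get_bdevs all |
            jq -r ".[] | select(.name == \"$1\")")
        [[ "$(jq -r '.state' <<< "$info")" == "$2" ]]
    }

    # raid5f tolerates a single failure: deleting one of the three base bdevs
    # leaves the array online with num_base_bdevs_operational dropping to 2,
    # as traced above; a second deletion takes it offline.
    "$rpc" -s "$sock" bdev_malloc_delete BaseBdev1
    check_raid_state Existed_Raid online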
00:21:50.162 21:44:10 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 82128' 00:21:50.162 21:44:10 -- common/autotest_common.sh@955 -- # kill 82128 00:21:50.162 21:44:10 -- common/autotest_common.sh@960 -- # wait 82128 00:21:50.162 [2024-12-06 21:44:10.547243] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:21:50.162 [2024-12-06 21:44:10.547343] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:21:51.096 21:44:11 -- bdev/bdev_raid.sh@289 -- # return 0 00:21:51.096 00:21:51.096 real 0m9.405s 00:21:51.096 user 0m15.617s 00:21:51.096 sys 0m1.405s 00:21:51.096 21:44:11 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:21:51.096 ************************************ 00:21:51.096 END TEST raid5f_state_function_test 00:21:51.096 ************************************ 00:21:51.096 21:44:11 -- common/autotest_common.sh@10 -- # set +x 00:21:51.096 21:44:11 -- bdev/bdev_raid.sh@745 -- # run_test raid5f_state_function_test_sb raid_state_function_test raid5f 3 true 00:21:51.096 21:44:11 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:21:51.096 21:44:11 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:21:51.096 21:44:11 -- common/autotest_common.sh@10 -- # set +x 00:21:51.096 ************************************ 00:21:51.096 START TEST raid5f_state_function_test_sb 00:21:51.096 ************************************ 00:21:51.096 21:44:11 -- common/autotest_common.sh@1114 -- # raid_state_function_test raid5f 3 true 00:21:51.096 21:44:11 -- bdev/bdev_raid.sh@202 -- # local raid_level=raid5f 00:21:51.096 21:44:11 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=3 00:21:51.096 21:44:11 -- bdev/bdev_raid.sh@204 -- # local superblock=true 00:21:51.096 21:44:11 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:21:51.096 21:44:11 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:21:51.096 21:44:11 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:21:51.096 21:44:11 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev1 00:21:51.096 21:44:11 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:21:51.096 21:44:11 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:21:51.096 21:44:11 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev2 00:21:51.096 21:44:11 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:21:51.096 21:44:11 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:21:51.096 21:44:11 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev3 00:21:51.096 21:44:11 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:21:51.096 21:44:11 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:21:51.096 21:44:11 -- bdev/bdev_raid.sh@206 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:21:51.096 21:44:11 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:21:51.096 21:44:11 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:21:51.096 21:44:11 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:21:51.096 21:44:11 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:21:51.096 21:44:11 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:21:51.096 21:44:11 -- bdev/bdev_raid.sh@212 -- # '[' raid5f '!=' raid1 ']' 00:21:51.096 21:44:11 -- bdev/bdev_raid.sh@213 -- # strip_size=64 00:21:51.096 21:44:11 -- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64' 00:21:51.096 21:44:11 -- bdev/bdev_raid.sh@219 -- # '[' true = true ']' 00:21:51.096 21:44:11 -- bdev/bdev_raid.sh@220 -- # superblock_create_arg=-s 00:21:51.096 Process raid pid: 82455 00:21:51.096 21:44:11 -- bdev/bdev_raid.sh@226 -- # raid_pid=82455 00:21:51.096 21:44:11 -- 
bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 82455' 00:21:51.096 21:44:11 -- bdev/bdev_raid.sh@228 -- # waitforlisten 82455 /var/tmp/spdk-raid.sock 00:21:51.096 21:44:11 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:21:51.096 21:44:11 -- common/autotest_common.sh@829 -- # '[' -z 82455 ']' 00:21:51.096 21:44:11 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:21:51.096 21:44:11 -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:51.096 21:44:11 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:21:51.096 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:21:51.096 21:44:11 -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:51.096 21:44:11 -- common/autotest_common.sh@10 -- # set +x 00:21:51.096 [2024-12-06 21:44:11.576186] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:21:51.096 [2024-12-06 21:44:11.576560] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:51.355 [2024-12-06 21:44:11.740961] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:51.613 [2024-12-06 21:44:11.905572] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:51.613 [2024-12-06 21:44:12.049015] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:52.181 21:44:12 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:52.181 21:44:12 -- common/autotest_common.sh@862 -- # return 0 00:21:52.181 21:44:12 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:21:52.440 [2024-12-06 21:44:12.748754] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:21:52.440 [2024-12-06 21:44:12.748807] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:21:52.440 [2024-12-06 21:44:12.748821] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:21:52.440 [2024-12-06 21:44:12.748833] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:21:52.440 [2024-12-06 21:44:12.748841] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:21:52.440 [2024-12-06 21:44:12.748853] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:21:52.440 21:44:12 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:21:52.440 21:44:12 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:21:52.440 21:44:12 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:21:52.440 21:44:12 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:21:52.440 21:44:12 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:21:52.440 21:44:12 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:21:52.440 21:44:12 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:21:52.440 21:44:12 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:21:52.440 21:44:12 -- bdev/bdev_raid.sh@124 -- # local 
num_base_bdevs_discovered 00:21:52.440 21:44:12 -- bdev/bdev_raid.sh@125 -- # local tmp 00:21:52.440 21:44:12 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:52.440 21:44:12 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:52.699 21:44:13 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:21:52.699 "name": "Existed_Raid", 00:21:52.699 "uuid": "901930a3-c3f7-4b4b-9bba-7526be4c82e3", 00:21:52.699 "strip_size_kb": 64, 00:21:52.699 "state": "configuring", 00:21:52.699 "raid_level": "raid5f", 00:21:52.699 "superblock": true, 00:21:52.699 "num_base_bdevs": 3, 00:21:52.699 "num_base_bdevs_discovered": 0, 00:21:52.699 "num_base_bdevs_operational": 3, 00:21:52.699 "base_bdevs_list": [ 00:21:52.699 { 00:21:52.699 "name": "BaseBdev1", 00:21:52.699 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:52.699 "is_configured": false, 00:21:52.699 "data_offset": 0, 00:21:52.699 "data_size": 0 00:21:52.699 }, 00:21:52.699 { 00:21:52.699 "name": "BaseBdev2", 00:21:52.699 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:52.699 "is_configured": false, 00:21:52.699 "data_offset": 0, 00:21:52.699 "data_size": 0 00:21:52.699 }, 00:21:52.699 { 00:21:52.699 "name": "BaseBdev3", 00:21:52.699 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:52.699 "is_configured": false, 00:21:52.699 "data_offset": 0, 00:21:52.699 "data_size": 0 00:21:52.699 } 00:21:52.699 ] 00:21:52.699 }' 00:21:52.699 21:44:13 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:21:52.699 21:44:13 -- common/autotest_common.sh@10 -- # set +x 00:21:52.957 21:44:13 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:21:53.216 [2024-12-06 21:44:13.544804] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:21:53.216 [2024-12-06 21:44:13.544994] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000006380 name Existed_Raid, state configuring 00:21:53.216 21:44:13 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:21:53.476 [2024-12-06 21:44:13.728908] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:21:53.476 [2024-12-06 21:44:13.728956] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:21:53.476 [2024-12-06 21:44:13.728969] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:21:53.476 [2024-12-06 21:44:13.728984] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:21:53.476 [2024-12-06 21:44:13.728992] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:21:53.476 [2024-12-06 21:44:13.729003] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:21:53.476 21:44:13 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:21:53.735 [2024-12-06 21:44:13.983285] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:21:53.735 BaseBdev1 00:21:53.735 21:44:13 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:21:53.735 21:44:13 -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:21:53.735 21:44:13 -- 
common/autotest_common.sh@898 -- # local bdev_timeout= 00:21:53.735 21:44:13 -- common/autotest_common.sh@899 -- # local i 00:21:53.735 21:44:13 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:21:53.735 21:44:13 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:21:53.735 21:44:13 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:21:53.735 21:44:14 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:21:53.994 [ 00:21:53.994 { 00:21:53.994 "name": "BaseBdev1", 00:21:53.994 "aliases": [ 00:21:53.994 "ab2736c4-6c29-47e7-a111-e44e966de965" 00:21:53.994 ], 00:21:53.994 "product_name": "Malloc disk", 00:21:53.994 "block_size": 512, 00:21:53.994 "num_blocks": 65536, 00:21:53.994 "uuid": "ab2736c4-6c29-47e7-a111-e44e966de965", 00:21:53.994 "assigned_rate_limits": { 00:21:53.994 "rw_ios_per_sec": 0, 00:21:53.994 "rw_mbytes_per_sec": 0, 00:21:53.994 "r_mbytes_per_sec": 0, 00:21:53.994 "w_mbytes_per_sec": 0 00:21:53.994 }, 00:21:53.994 "claimed": true, 00:21:53.994 "claim_type": "exclusive_write", 00:21:53.994 "zoned": false, 00:21:53.994 "supported_io_types": { 00:21:53.994 "read": true, 00:21:53.994 "write": true, 00:21:53.994 "unmap": true, 00:21:53.994 "write_zeroes": true, 00:21:53.994 "flush": true, 00:21:53.994 "reset": true, 00:21:53.994 "compare": false, 00:21:53.994 "compare_and_write": false, 00:21:53.994 "abort": true, 00:21:53.994 "nvme_admin": false, 00:21:53.994 "nvme_io": false 00:21:53.994 }, 00:21:53.994 "memory_domains": [ 00:21:53.994 { 00:21:53.994 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:53.994 "dma_device_type": 2 00:21:53.994 } 00:21:53.994 ], 00:21:53.994 "driver_specific": {} 00:21:53.994 } 00:21:53.994 ] 00:21:53.994 21:44:14 -- common/autotest_common.sh@905 -- # return 0 00:21:53.994 21:44:14 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:21:53.994 21:44:14 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:21:53.994 21:44:14 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:21:53.994 21:44:14 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:21:53.994 21:44:14 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:21:53.994 21:44:14 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:21:53.994 21:44:14 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:21:53.994 21:44:14 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:21:53.994 21:44:14 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:21:53.994 21:44:14 -- bdev/bdev_raid.sh@125 -- # local tmp 00:21:53.994 21:44:14 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:53.995 21:44:14 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:54.254 21:44:14 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:21:54.254 "name": "Existed_Raid", 00:21:54.254 "uuid": "9b6d7bdd-012e-4eeb-a93a-d061617a5d16", 00:21:54.254 "strip_size_kb": 64, 00:21:54.254 "state": "configuring", 00:21:54.254 "raid_level": "raid5f", 00:21:54.254 "superblock": true, 00:21:54.254 "num_base_bdevs": 3, 00:21:54.254 "num_base_bdevs_discovered": 1, 00:21:54.254 "num_base_bdevs_operational": 3, 00:21:54.254 "base_bdevs_list": [ 00:21:54.254 { 00:21:54.254 "name": "BaseBdev1", 00:21:54.254 "uuid": "ab2736c4-6c29-47e7-a111-e44e966de965", 
00:21:54.254 "is_configured": true, 00:21:54.254 "data_offset": 2048, 00:21:54.254 "data_size": 63488 00:21:54.254 }, 00:21:54.254 { 00:21:54.254 "name": "BaseBdev2", 00:21:54.254 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:54.254 "is_configured": false, 00:21:54.254 "data_offset": 0, 00:21:54.254 "data_size": 0 00:21:54.254 }, 00:21:54.254 { 00:21:54.254 "name": "BaseBdev3", 00:21:54.254 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:54.254 "is_configured": false, 00:21:54.254 "data_offset": 0, 00:21:54.254 "data_size": 0 00:21:54.254 } 00:21:54.254 ] 00:21:54.254 }' 00:21:54.254 21:44:14 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:21:54.254 21:44:14 -- common/autotest_common.sh@10 -- # set +x 00:21:54.513 21:44:14 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:21:54.778 [2024-12-06 21:44:15.015561] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:21:54.778 [2024-12-06 21:44:15.015609] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000006680 name Existed_Raid, state configuring 00:21:54.778 21:44:15 -- bdev/bdev_raid.sh@244 -- # '[' true = true ']' 00:21:54.778 21:44:15 -- bdev/bdev_raid.sh@246 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:21:55.037 21:44:15 -- bdev/bdev_raid.sh@247 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:21:55.296 BaseBdev1 00:21:55.296 21:44:15 -- bdev/bdev_raid.sh@248 -- # waitforbdev BaseBdev1 00:21:55.296 21:44:15 -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:21:55.296 21:44:15 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:21:55.296 21:44:15 -- common/autotest_common.sh@899 -- # local i 00:21:55.296 21:44:15 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:21:55.296 21:44:15 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:21:55.296 21:44:15 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:21:55.296 21:44:15 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:21:55.553 [ 00:21:55.553 { 00:21:55.553 "name": "BaseBdev1", 00:21:55.553 "aliases": [ 00:21:55.553 "ddb413fb-5e61-4f38-93fa-5d01ed45a003" 00:21:55.553 ], 00:21:55.553 "product_name": "Malloc disk", 00:21:55.553 "block_size": 512, 00:21:55.553 "num_blocks": 65536, 00:21:55.553 "uuid": "ddb413fb-5e61-4f38-93fa-5d01ed45a003", 00:21:55.553 "assigned_rate_limits": { 00:21:55.553 "rw_ios_per_sec": 0, 00:21:55.553 "rw_mbytes_per_sec": 0, 00:21:55.553 "r_mbytes_per_sec": 0, 00:21:55.553 "w_mbytes_per_sec": 0 00:21:55.553 }, 00:21:55.553 "claimed": false, 00:21:55.553 "zoned": false, 00:21:55.554 "supported_io_types": { 00:21:55.554 "read": true, 00:21:55.554 "write": true, 00:21:55.554 "unmap": true, 00:21:55.554 "write_zeroes": true, 00:21:55.554 "flush": true, 00:21:55.554 "reset": true, 00:21:55.554 "compare": false, 00:21:55.554 "compare_and_write": false, 00:21:55.554 "abort": true, 00:21:55.554 "nvme_admin": false, 00:21:55.554 "nvme_io": false 00:21:55.554 }, 00:21:55.554 "memory_domains": [ 00:21:55.554 { 00:21:55.554 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:55.554 "dma_device_type": 2 00:21:55.554 } 00:21:55.554 ], 00:21:55.554 "driver_specific": {} 00:21:55.554 } 00:21:55.554 ] 
00:21:55.554 21:44:15 -- common/autotest_common.sh@905 -- # return 0 00:21:55.554 21:44:15 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:21:55.811 [2024-12-06 21:44:16.118710] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:21:55.811 [2024-12-06 21:44:16.120517] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:21:55.811 [2024-12-06 21:44:16.120563] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:21:55.811 [2024-12-06 21:44:16.120576] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:21:55.812 [2024-12-06 21:44:16.120588] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:21:55.812 21:44:16 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:21:55.812 21:44:16 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:21:55.812 21:44:16 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:21:55.812 21:44:16 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:21:55.812 21:44:16 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:21:55.812 21:44:16 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:21:55.812 21:44:16 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:21:55.812 21:44:16 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:21:55.812 21:44:16 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:21:55.812 21:44:16 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:21:55.812 21:44:16 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:21:55.812 21:44:16 -- bdev/bdev_raid.sh@125 -- # local tmp 00:21:55.812 21:44:16 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:55.812 21:44:16 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:56.070 21:44:16 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:21:56.070 "name": "Existed_Raid", 00:21:56.070 "uuid": "42cda9b4-ec21-4780-8e6e-823602cb58b3", 00:21:56.070 "strip_size_kb": 64, 00:21:56.070 "state": "configuring", 00:21:56.070 "raid_level": "raid5f", 00:21:56.070 "superblock": true, 00:21:56.070 "num_base_bdevs": 3, 00:21:56.070 "num_base_bdevs_discovered": 1, 00:21:56.070 "num_base_bdevs_operational": 3, 00:21:56.070 "base_bdevs_list": [ 00:21:56.070 { 00:21:56.070 "name": "BaseBdev1", 00:21:56.070 "uuid": "ddb413fb-5e61-4f38-93fa-5d01ed45a003", 00:21:56.070 "is_configured": true, 00:21:56.070 "data_offset": 2048, 00:21:56.070 "data_size": 63488 00:21:56.070 }, 00:21:56.070 { 00:21:56.070 "name": "BaseBdev2", 00:21:56.070 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:56.070 "is_configured": false, 00:21:56.070 "data_offset": 0, 00:21:56.070 "data_size": 0 00:21:56.070 }, 00:21:56.070 { 00:21:56.070 "name": "BaseBdev3", 00:21:56.070 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:56.070 "is_configured": false, 00:21:56.070 "data_offset": 0, 00:21:56.070 "data_size": 0 00:21:56.070 } 00:21:56.070 ] 00:21:56.070 }' 00:21:56.070 21:44:16 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:21:56.070 21:44:16 -- common/autotest_common.sh@10 -- # set +x 00:21:56.329 21:44:16 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_malloc_create 32 512 -b BaseBdev2 00:21:56.588 BaseBdev2 00:21:56.588 [2024-12-06 21:44:16.925622] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:21:56.588 21:44:16 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:21:56.588 21:44:16 -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:21:56.588 21:44:16 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:21:56.588 21:44:16 -- common/autotest_common.sh@899 -- # local i 00:21:56.588 21:44:16 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:21:56.588 21:44:16 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:21:56.588 21:44:16 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:21:56.848 21:44:17 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:21:56.848 [ 00:21:56.848 { 00:21:56.848 "name": "BaseBdev2", 00:21:56.848 "aliases": [ 00:21:56.848 "ca21d9bf-ed14-4c94-95b0-d5ca11630961" 00:21:56.848 ], 00:21:56.848 "product_name": "Malloc disk", 00:21:56.848 "block_size": 512, 00:21:56.848 "num_blocks": 65536, 00:21:56.848 "uuid": "ca21d9bf-ed14-4c94-95b0-d5ca11630961", 00:21:56.848 "assigned_rate_limits": { 00:21:56.848 "rw_ios_per_sec": 0, 00:21:56.848 "rw_mbytes_per_sec": 0, 00:21:56.848 "r_mbytes_per_sec": 0, 00:21:56.848 "w_mbytes_per_sec": 0 00:21:56.848 }, 00:21:56.848 "claimed": true, 00:21:56.848 "claim_type": "exclusive_write", 00:21:56.848 "zoned": false, 00:21:56.848 "supported_io_types": { 00:21:56.848 "read": true, 00:21:56.848 "write": true, 00:21:56.848 "unmap": true, 00:21:56.848 "write_zeroes": true, 00:21:56.848 "flush": true, 00:21:56.848 "reset": true, 00:21:56.848 "compare": false, 00:21:56.848 "compare_and_write": false, 00:21:56.848 "abort": true, 00:21:56.848 "nvme_admin": false, 00:21:56.848 "nvme_io": false 00:21:56.848 }, 00:21:56.848 "memory_domains": [ 00:21:56.848 { 00:21:56.848 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:56.848 "dma_device_type": 2 00:21:56.848 } 00:21:56.848 ], 00:21:56.848 "driver_specific": {} 00:21:56.848 } 00:21:56.848 ] 00:21:57.107 21:44:17 -- common/autotest_common.sh@905 -- # return 0 00:21:57.107 21:44:17 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:21:57.107 21:44:17 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:21:57.107 21:44:17 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:21:57.107 21:44:17 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:21:57.107 21:44:17 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:21:57.107 21:44:17 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:21:57.107 21:44:17 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:21:57.107 21:44:17 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:21:57.107 21:44:17 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:21:57.107 21:44:17 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:21:57.107 21:44:17 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:21:57.107 21:44:17 -- bdev/bdev_raid.sh@125 -- # local tmp 00:21:57.107 21:44:17 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:57.107 21:44:17 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:57.107 21:44:17 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 
00:21:57.107 "name": "Existed_Raid", 00:21:57.107 "uuid": "42cda9b4-ec21-4780-8e6e-823602cb58b3", 00:21:57.107 "strip_size_kb": 64, 00:21:57.107 "state": "configuring", 00:21:57.107 "raid_level": "raid5f", 00:21:57.107 "superblock": true, 00:21:57.107 "num_base_bdevs": 3, 00:21:57.107 "num_base_bdevs_discovered": 2, 00:21:57.107 "num_base_bdevs_operational": 3, 00:21:57.107 "base_bdevs_list": [ 00:21:57.107 { 00:21:57.107 "name": "BaseBdev1", 00:21:57.107 "uuid": "ddb413fb-5e61-4f38-93fa-5d01ed45a003", 00:21:57.107 "is_configured": true, 00:21:57.107 "data_offset": 2048, 00:21:57.107 "data_size": 63488 00:21:57.107 }, 00:21:57.107 { 00:21:57.107 "name": "BaseBdev2", 00:21:57.107 "uuid": "ca21d9bf-ed14-4c94-95b0-d5ca11630961", 00:21:57.107 "is_configured": true, 00:21:57.107 "data_offset": 2048, 00:21:57.107 "data_size": 63488 00:21:57.107 }, 00:21:57.107 { 00:21:57.107 "name": "BaseBdev3", 00:21:57.107 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:57.107 "is_configured": false, 00:21:57.107 "data_offset": 0, 00:21:57.107 "data_size": 0 00:21:57.107 } 00:21:57.107 ] 00:21:57.107 }' 00:21:57.107 21:44:17 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:21:57.366 21:44:17 -- common/autotest_common.sh@10 -- # set +x 00:21:57.625 21:44:17 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:21:57.625 [2024-12-06 21:44:18.089259] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:21:57.625 [2024-12-06 21:44:18.089705] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x516000007580 00:21:57.625 [2024-12-06 21:44:18.089846] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:21:57.625 [2024-12-06 21:44:18.090014] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000005790 00:21:57.625 BaseBdev3 00:21:57.625 [2024-12-06 21:44:18.094702] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x516000007580 00:21:57.625 [2024-12-06 21:44:18.094861] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x516000007580 00:21:57.625 [2024-12-06 21:44:18.095173] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:57.625 21:44:18 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3 00:21:57.625 21:44:18 -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:21:57.625 21:44:18 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:21:57.625 21:44:18 -- common/autotest_common.sh@899 -- # local i 00:21:57.625 21:44:18 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:21:57.625 21:44:18 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:21:57.625 21:44:18 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:21:57.886 21:44:18 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:21:58.165 [ 00:21:58.165 { 00:21:58.165 "name": "BaseBdev3", 00:21:58.166 "aliases": [ 00:21:58.166 "2ab91aee-3bac-44e5-97b1-f4dcb910d8e4" 00:21:58.166 ], 00:21:58.166 "product_name": "Malloc disk", 00:21:58.166 "block_size": 512, 00:21:58.166 "num_blocks": 65536, 00:21:58.166 "uuid": "2ab91aee-3bac-44e5-97b1-f4dcb910d8e4", 00:21:58.166 "assigned_rate_limits": { 00:21:58.166 "rw_ios_per_sec": 0, 00:21:58.166 "rw_mbytes_per_sec": 0, 
00:21:58.166 "r_mbytes_per_sec": 0, 00:21:58.166 "w_mbytes_per_sec": 0 00:21:58.166 }, 00:21:58.166 "claimed": true, 00:21:58.166 "claim_type": "exclusive_write", 00:21:58.166 "zoned": false, 00:21:58.166 "supported_io_types": { 00:21:58.166 "read": true, 00:21:58.166 "write": true, 00:21:58.166 "unmap": true, 00:21:58.166 "write_zeroes": true, 00:21:58.166 "flush": true, 00:21:58.166 "reset": true, 00:21:58.166 "compare": false, 00:21:58.166 "compare_and_write": false, 00:21:58.166 "abort": true, 00:21:58.166 "nvme_admin": false, 00:21:58.166 "nvme_io": false 00:21:58.166 }, 00:21:58.166 "memory_domains": [ 00:21:58.166 { 00:21:58.166 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:21:58.166 "dma_device_type": 2 00:21:58.166 } 00:21:58.166 ], 00:21:58.166 "driver_specific": {} 00:21:58.166 } 00:21:58.166 ] 00:21:58.166 21:44:18 -- common/autotest_common.sh@905 -- # return 0 00:21:58.166 21:44:18 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:21:58.166 21:44:18 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:21:58.166 21:44:18 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:21:58.166 21:44:18 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:21:58.166 21:44:18 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:21:58.166 21:44:18 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:21:58.166 21:44:18 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:21:58.166 21:44:18 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:21:58.166 21:44:18 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:21:58.166 21:44:18 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:21:58.166 21:44:18 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:21:58.166 21:44:18 -- bdev/bdev_raid.sh@125 -- # local tmp 00:21:58.166 21:44:18 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:58.166 21:44:18 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:58.442 21:44:18 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:21:58.442 "name": "Existed_Raid", 00:21:58.442 "uuid": "42cda9b4-ec21-4780-8e6e-823602cb58b3", 00:21:58.442 "strip_size_kb": 64, 00:21:58.442 "state": "online", 00:21:58.442 "raid_level": "raid5f", 00:21:58.442 "superblock": true, 00:21:58.442 "num_base_bdevs": 3, 00:21:58.442 "num_base_bdevs_discovered": 3, 00:21:58.442 "num_base_bdevs_operational": 3, 00:21:58.442 "base_bdevs_list": [ 00:21:58.442 { 00:21:58.442 "name": "BaseBdev1", 00:21:58.442 "uuid": "ddb413fb-5e61-4f38-93fa-5d01ed45a003", 00:21:58.442 "is_configured": true, 00:21:58.442 "data_offset": 2048, 00:21:58.442 "data_size": 63488 00:21:58.442 }, 00:21:58.442 { 00:21:58.442 "name": "BaseBdev2", 00:21:58.442 "uuid": "ca21d9bf-ed14-4c94-95b0-d5ca11630961", 00:21:58.442 "is_configured": true, 00:21:58.442 "data_offset": 2048, 00:21:58.442 "data_size": 63488 00:21:58.442 }, 00:21:58.442 { 00:21:58.442 "name": "BaseBdev3", 00:21:58.442 "uuid": "2ab91aee-3bac-44e5-97b1-f4dcb910d8e4", 00:21:58.442 "is_configured": true, 00:21:58.442 "data_offset": 2048, 00:21:58.442 "data_size": 63488 00:21:58.442 } 00:21:58.442 ] 00:21:58.442 }' 00:21:58.442 21:44:18 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:21:58.442 21:44:18 -- common/autotest_common.sh@10 -- # set +x 00:21:58.716 21:44:18 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:21:58.716 
[2024-12-06 21:44:19.143909] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:21:58.997 21:44:19 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:21:58.997 21:44:19 -- bdev/bdev_raid.sh@264 -- # has_redundancy raid5f 00:21:58.997 21:44:19 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:21:58.997 21:44:19 -- bdev/bdev_raid.sh@196 -- # return 0 00:21:58.997 21:44:19 -- bdev/bdev_raid.sh@267 -- # expected_state=online 00:21:58.997 21:44:19 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 2 00:21:58.997 21:44:19 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:21:58.997 21:44:19 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:21:58.997 21:44:19 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:21:58.997 21:44:19 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:21:58.997 21:44:19 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:21:58.997 21:44:19 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:21:58.997 21:44:19 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:21:58.997 21:44:19 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:21:58.997 21:44:19 -- bdev/bdev_raid.sh@125 -- # local tmp 00:21:58.997 21:44:19 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:21:58.997 21:44:19 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:58.997 21:44:19 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:21:58.997 "name": "Existed_Raid", 00:21:58.997 "uuid": "42cda9b4-ec21-4780-8e6e-823602cb58b3", 00:21:58.997 "strip_size_kb": 64, 00:21:58.997 "state": "online", 00:21:58.997 "raid_level": "raid5f", 00:21:58.997 "superblock": true, 00:21:58.997 "num_base_bdevs": 3, 00:21:58.997 "num_base_bdevs_discovered": 2, 00:21:58.997 "num_base_bdevs_operational": 2, 00:21:58.997 "base_bdevs_list": [ 00:21:58.997 { 00:21:58.997 "name": null, 00:21:58.997 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:58.997 "is_configured": false, 00:21:58.997 "data_offset": 2048, 00:21:58.997 "data_size": 63488 00:21:58.997 }, 00:21:58.997 { 00:21:58.997 "name": "BaseBdev2", 00:21:58.997 "uuid": "ca21d9bf-ed14-4c94-95b0-d5ca11630961", 00:21:58.997 "is_configured": true, 00:21:58.997 "data_offset": 2048, 00:21:58.997 "data_size": 63488 00:21:58.997 }, 00:21:58.997 { 00:21:58.997 "name": "BaseBdev3", 00:21:58.997 "uuid": "2ab91aee-3bac-44e5-97b1-f4dcb910d8e4", 00:21:58.997 "is_configured": true, 00:21:58.997 "data_offset": 2048, 00:21:58.997 "data_size": 63488 00:21:58.997 } 00:21:58.997 ] 00:21:58.997 }' 00:21:58.997 21:44:19 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:21:58.997 21:44:19 -- common/autotest_common.sh@10 -- # set +x 00:21:59.256 21:44:19 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:21:59.256 21:44:19 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:21:59.256 21:44:19 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:59.256 21:44:19 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:21:59.515 21:44:19 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:21:59.515 21:44:19 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:21:59.515 21:44:19 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:21:59.774 [2024-12-06 21:44:20.162816] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: 
*DEBUG*: BaseBdev2 00:21:59.774 [2024-12-06 21:44:20.162847] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:21:59.774 [2024-12-06 21:44:20.162905] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:59.774 21:44:20 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:21:59.774 21:44:20 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:21:59.774 21:44:20 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:59.774 21:44:20 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:22:00.032 21:44:20 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:22:00.032 21:44:20 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:22:00.032 21:44:20 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:22:00.291 [2024-12-06 21:44:20.696169] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:22:00.291 [2024-12-06 21:44:20.696409] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000007580 name Existed_Raid, state offline 00:22:00.291 21:44:20 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:22:00.291 21:44:20 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:22:00.291 21:44:20 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:00.291 21:44:20 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:22:00.551 21:44:20 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:22:00.551 21:44:20 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:22:00.551 21:44:20 -- bdev/bdev_raid.sh@287 -- # killprocess 82455 00:22:00.551 21:44:20 -- common/autotest_common.sh@936 -- # '[' -z 82455 ']' 00:22:00.551 21:44:20 -- common/autotest_common.sh@940 -- # kill -0 82455 00:22:00.551 21:44:20 -- common/autotest_common.sh@941 -- # uname 00:22:00.551 21:44:20 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:22:00.551 21:44:20 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 82455 00:22:00.551 killing process with pid 82455 00:22:00.551 21:44:20 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:22:00.551 21:44:20 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:22:00.551 21:44:20 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 82455' 00:22:00.551 21:44:20 -- common/autotest_common.sh@955 -- # kill 82455 00:22:00.551 [2024-12-06 21:44:20.995077] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:22:00.551 21:44:20 -- common/autotest_common.sh@960 -- # wait 82455 00:22:00.551 [2024-12-06 21:44:20.995173] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:22:01.488 ************************************ 00:22:01.488 END TEST raid5f_state_function_test_sb 00:22:01.488 ************************************ 00:22:01.488 21:44:21 -- bdev/bdev_raid.sh@289 -- # return 0 00:22:01.488 00:22:01.488 real 0m10.414s 00:22:01.488 user 0m17.369s 00:22:01.488 sys 0m1.545s 00:22:01.488 21:44:21 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:22:01.488 21:44:21 -- common/autotest_common.sh@10 -- # set +x 00:22:01.488 21:44:21 -- bdev/bdev_raid.sh@746 -- # run_test raid5f_superblock_test raid_superblock_test raid5f 3 00:22:01.488 21:44:21 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:22:01.488 21:44:21 -- common/autotest_common.sh@1093 -- # xtrace_disable 
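The sizes in the superblock run that just finished follow from the on-disk superblock reservation: with -s, each 65536-block (512-byte) malloc base bdev gives up its first 2048 blocks, which is why the base bdevs report data_offset 2048 and data_size 63488 = 65536 - 2048, and why the three-disk raid5f (two disks of usable capacity, one disk's worth of parity) exposes 2 * 63488 = 126976 blocks instead of the 2 * 65536 = 131072 seen in the earlier non-superblock run. The create call, as traced above:

    # Dropping -s removes the 2048-block reservation and restores the full
    # 131072-block array (data_offset 0, data_size 65536 per base bdev).
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock \
        bdev_raid_create -z 64 -s -r raid5f \
        -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid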
00:22:01.488 21:44:21 -- common/autotest_common.sh@10 -- # set +x 00:22:01.488 ************************************ 00:22:01.488 START TEST raid5f_superblock_test 00:22:01.488 ************************************ 00:22:01.488 21:44:21 -- common/autotest_common.sh@1114 -- # raid_superblock_test raid5f 3 00:22:01.488 21:44:21 -- bdev/bdev_raid.sh@338 -- # local raid_level=raid5f 00:22:01.488 21:44:21 -- bdev/bdev_raid.sh@339 -- # local num_base_bdevs=3 00:22:01.488 21:44:21 -- bdev/bdev_raid.sh@340 -- # base_bdevs_malloc=() 00:22:01.488 21:44:21 -- bdev/bdev_raid.sh@340 -- # local base_bdevs_malloc 00:22:01.488 21:44:21 -- bdev/bdev_raid.sh@341 -- # base_bdevs_pt=() 00:22:01.488 21:44:21 -- bdev/bdev_raid.sh@341 -- # local base_bdevs_pt 00:22:01.488 21:44:21 -- bdev/bdev_raid.sh@342 -- # base_bdevs_pt_uuid=() 00:22:01.488 21:44:21 -- bdev/bdev_raid.sh@342 -- # local base_bdevs_pt_uuid 00:22:01.488 21:44:21 -- bdev/bdev_raid.sh@343 -- # local raid_bdev_name=raid_bdev1 00:22:01.488 21:44:21 -- bdev/bdev_raid.sh@344 -- # local strip_size 00:22:01.488 21:44:21 -- bdev/bdev_raid.sh@345 -- # local strip_size_create_arg 00:22:01.488 21:44:21 -- bdev/bdev_raid.sh@346 -- # local raid_bdev_uuid 00:22:01.488 21:44:21 -- bdev/bdev_raid.sh@347 -- # local raid_bdev 00:22:01.488 21:44:21 -- bdev/bdev_raid.sh@349 -- # '[' raid5f '!=' raid1 ']' 00:22:01.488 21:44:21 -- bdev/bdev_raid.sh@350 -- # strip_size=64 00:22:01.488 21:44:21 -- bdev/bdev_raid.sh@351 -- # strip_size_create_arg='-z 64' 00:22:01.488 21:44:21 -- bdev/bdev_raid.sh@357 -- # raid_pid=82798 00:22:01.488 21:44:21 -- bdev/bdev_raid.sh@356 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:22:01.748 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:22:01.748 21:44:21 -- bdev/bdev_raid.sh@358 -- # waitforlisten 82798 /var/tmp/spdk-raid.sock 00:22:01.748 21:44:21 -- common/autotest_common.sh@829 -- # '[' -z 82798 ']' 00:22:01.748 21:44:21 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:22:01.748 21:44:21 -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:01.748 21:44:21 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:22:01.748 21:44:21 -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:01.748 21:44:21 -- common/autotest_common.sh@10 -- # set +x 00:22:01.748 [2024-12-06 21:44:22.029230] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:22:01.748 [2024-12-06 21:44:22.029359] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82798 ] 00:22:01.748 [2024-12-06 21:44:22.179425] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:02.007 [2024-12-06 21:44:22.331395] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:02.007 [2024-12-06 21:44:22.472726] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:22:02.575 21:44:22 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:02.575 21:44:22 -- common/autotest_common.sh@862 -- # return 0 00:22:02.575 21:44:22 -- bdev/bdev_raid.sh@361 -- # (( i = 1 )) 00:22:02.575 21:44:22 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:22:02.575 21:44:22 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc1 00:22:02.575 21:44:22 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt1 00:22:02.575 21:44:22 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:22:02.575 21:44:22 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:22:02.575 21:44:22 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:22:02.575 21:44:22 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:22:02.575 21:44:22 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:22:02.834 malloc1 00:22:02.834 21:44:23 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:22:02.834 [2024-12-06 21:44:23.320353] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:22:02.834 [2024-12-06 21:44:23.320620] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:02.834 [2024-12-06 21:44:23.320700] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000006980 00:22:02.834 [2024-12-06 21:44:23.320860] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:02.834 [2024-12-06 21:44:23.323018] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:02.834 [2024-12-06 21:44:23.323192] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:22:02.834 pt1 00:22:03.093 21:44:23 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:22:03.093 21:44:23 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:22:03.093 21:44:23 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc2 00:22:03.093 21:44:23 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt2 00:22:03.093 21:44:23 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:22:03.093 21:44:23 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:22:03.093 21:44:23 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:22:03.093 21:44:23 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:22:03.093 21:44:23 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:22:03.353 malloc2 00:22:03.353 21:44:23 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 
00000000-0000-0000-0000-000000000002 00:22:03.353 [2024-12-06 21:44:23.782079] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:22:03.353 [2024-12-06 21:44:23.782158] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:03.353 [2024-12-06 21:44:23.782205] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000007580 00:22:03.353 [2024-12-06 21:44:23.782220] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:03.353 [2024-12-06 21:44:23.784520] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:03.353 [2024-12-06 21:44:23.784558] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:22:03.353 pt2 00:22:03.353 21:44:23 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:22:03.353 21:44:23 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:22:03.353 21:44:23 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc3 00:22:03.353 21:44:23 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt3 00:22:03.353 21:44:23 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:22:03.353 21:44:23 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:22:03.353 21:44:23 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:22:03.353 21:44:23 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:22:03.353 21:44:23 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:22:03.612 malloc3 00:22:03.612 21:44:23 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:22:03.871 [2024-12-06 21:44:24.166684] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:22:03.871 [2024-12-06 21:44:24.166759] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:03.871 [2024-12-06 21:44:24.166790] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000008180 00:22:03.871 [2024-12-06 21:44:24.166803] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:03.871 [2024-12-06 21:44:24.168978] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:03.871 [2024-12-06 21:44:24.169016] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:22:03.871 pt3 00:22:03.871 21:44:24 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:22:03.871 21:44:24 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:22:03.871 21:44:24 -- bdev/bdev_raid.sh@375 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'pt1 pt2 pt3' -n raid_bdev1 -s 00:22:03.871 [2024-12-06 21:44:24.347052] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:22:03.871 [2024-12-06 21:44:24.348912] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:22:03.871 [2024-12-06 21:44:24.348983] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:22:03.871 [2024-12-06 21:44:24.349170] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x516000008780 00:22:03.871 [2024-12-06 21:44:24.349189] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:22:03.871 [2024-12-06 21:44:24.349293] bdev_raid.c: 
232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d0000056c0 00:22:03.871 [2024-12-06 21:44:24.353697] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x516000008780 00:22:03.871 [2024-12-06 21:44:24.353836] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x516000008780 00:22:03.871 [2024-12-06 21:44:24.354161] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:03.872 21:44:24 -- bdev/bdev_raid.sh@376 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:22:03.872 21:44:24 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:22:03.872 21:44:24 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:22:04.132 21:44:24 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:22:04.132 21:44:24 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:22:04.132 21:44:24 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:22:04.132 21:44:24 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:22:04.132 21:44:24 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:22:04.132 21:44:24 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:22:04.132 21:44:24 -- bdev/bdev_raid.sh@125 -- # local tmp 00:22:04.132 21:44:24 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:04.132 21:44:24 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:04.132 21:44:24 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:22:04.132 "name": "raid_bdev1", 00:22:04.132 "uuid": "5d751e0f-f0e3-4771-94a6-517adb948856", 00:22:04.132 "strip_size_kb": 64, 00:22:04.132 "state": "online", 00:22:04.132 "raid_level": "raid5f", 00:22:04.132 "superblock": true, 00:22:04.132 "num_base_bdevs": 3, 00:22:04.132 "num_base_bdevs_discovered": 3, 00:22:04.132 "num_base_bdevs_operational": 3, 00:22:04.132 "base_bdevs_list": [ 00:22:04.132 { 00:22:04.132 "name": "pt1", 00:22:04.132 "uuid": "827745aa-d3f8-5d2f-8497-42567595c8c6", 00:22:04.132 "is_configured": true, 00:22:04.132 "data_offset": 2048, 00:22:04.132 "data_size": 63488 00:22:04.132 }, 00:22:04.132 { 00:22:04.132 "name": "pt2", 00:22:04.132 "uuid": "baa46091-317c-5396-b86c-96625a58bda7", 00:22:04.132 "is_configured": true, 00:22:04.132 "data_offset": 2048, 00:22:04.132 "data_size": 63488 00:22:04.132 }, 00:22:04.132 { 00:22:04.132 "name": "pt3", 00:22:04.132 "uuid": "c829b4a0-4e8d-55c7-b850-a6a2a8de1773", 00:22:04.132 "is_configured": true, 00:22:04.132 "data_offset": 2048, 00:22:04.132 "data_size": 63488 00:22:04.132 } 00:22:04.132 ] 00:22:04.132 }' 00:22:04.132 21:44:24 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:22:04.132 21:44:24 -- common/autotest_common.sh@10 -- # set +x 00:22:04.391 21:44:24 -- bdev/bdev_raid.sh@379 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:22:04.391 21:44:24 -- bdev/bdev_raid.sh@379 -- # jq -r '.[] | .uuid' 00:22:04.668 [2024-12-06 21:44:25.038776] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:22:04.668 21:44:25 -- bdev/bdev_raid.sh@379 -- # raid_bdev_uuid=5d751e0f-f0e3-4771-94a6-517adb948856 00:22:04.668 21:44:25 -- bdev/bdev_raid.sh@380 -- # '[' -z 5d751e0f-f0e3-4771-94a6-517adb948856 ']' 00:22:04.668 21:44:25 -- bdev/bdev_raid.sh@385 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:22:04.933 [2024-12-06 21:44:25.230709] 
bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:22:04.933 [2024-12-06 21:44:25.230737] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:22:04.933 [2024-12-06 21:44:25.230813] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:04.933 [2024-12-06 21:44:25.230888] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:22:04.933 [2024-12-06 21:44:25.230905] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000008780 name raid_bdev1, state offline 00:22:04.933 21:44:25 -- bdev/bdev_raid.sh@386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:04.933 21:44:25 -- bdev/bdev_raid.sh@386 -- # jq -r '.[]' 00:22:05.191 21:44:25 -- bdev/bdev_raid.sh@386 -- # raid_bdev= 00:22:05.191 21:44:25 -- bdev/bdev_raid.sh@387 -- # '[' -n '' ']' 00:22:05.191 21:44:25 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:22:05.191 21:44:25 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:22:05.191 21:44:25 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:22:05.191 21:44:25 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:22:05.450 21:44:25 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:22:05.450 21:44:25 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:22:05.708 21:44:26 -- bdev/bdev_raid.sh@395 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:22:05.709 21:44:26 -- bdev/bdev_raid.sh@395 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:22:05.967 21:44:26 -- bdev/bdev_raid.sh@395 -- # '[' false == true ']' 00:22:05.967 21:44:26 -- bdev/bdev_raid.sh@401 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:22:05.967 21:44:26 -- common/autotest_common.sh@650 -- # local es=0 00:22:05.967 21:44:26 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:22:05.967 21:44:26 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:22:05.967 21:44:26 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:05.967 21:44:26 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:22:05.967 21:44:26 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:05.967 21:44:26 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:22:05.967 21:44:26 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:05.967 21:44:26 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:22:05.967 21:44:26 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:22:05.967 21:44:26 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:22:05.967 [2024-12-06 21:44:26.455087] 
bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:22:05.967 [2024-12-06 21:44:26.457015] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:22:05.967 [2024-12-06 21:44:26.457195] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:22:05.967 [2024-12-06 21:44:26.457297] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc1 00:22:05.967 [2024-12-06 21:44:26.457624] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc2 00:22:05.967 [2024-12-06 21:44:26.457663] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc3 00:22:05.967 [2024-12-06 21:44:26.457683] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:22:05.967 [2024-12-06 21:44:26.457698] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000008d80 name raid_bdev1, state configuring 00:22:05.967 request: 00:22:05.967 { 00:22:05.967 "name": "raid_bdev1", 00:22:05.967 "raid_level": "raid5f", 00:22:05.967 "base_bdevs": [ 00:22:05.967 "malloc1", 00:22:05.967 "malloc2", 00:22:05.967 "malloc3" 00:22:05.967 ], 00:22:05.967 "superblock": false, 00:22:05.967 "strip_size_kb": 64, 00:22:05.967 "method": "bdev_raid_create", 00:22:05.967 "req_id": 1 00:22:05.967 } 00:22:05.967 Got JSON-RPC error response 00:22:05.967 response: 00:22:05.967 { 00:22:05.967 "code": -17, 00:22:05.967 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:22:05.967 } 00:22:06.226 21:44:26 -- common/autotest_common.sh@653 -- # es=1 00:22:06.226 21:44:26 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:22:06.226 21:44:26 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:22:06.226 21:44:26 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:22:06.226 21:44:26 -- bdev/bdev_raid.sh@403 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:06.226 21:44:26 -- bdev/bdev_raid.sh@403 -- # jq -r '.[]' 00:22:06.226 21:44:26 -- bdev/bdev_raid.sh@403 -- # raid_bdev= 00:22:06.226 21:44:26 -- bdev/bdev_raid.sh@404 -- # '[' -n '' ']' 00:22:06.226 21:44:26 -- bdev/bdev_raid.sh@409 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:22:06.484 [2024-12-06 21:44:26.907122] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:22:06.484 [2024-12-06 21:44:26.907342] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:06.484 [2024-12-06 21:44:26.907407] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000009380 00:22:06.484 [2024-12-06 21:44:26.907557] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:06.484 [2024-12-06 21:44:26.909741] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:06.484 [2024-12-06 21:44:26.909951] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:22:06.484 [2024-12-06 21:44:26.910159] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1 00:22:06.484 [2024-12-06 21:44:26.910324] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:22:06.484 pt1 00:22:06.484 21:44:26 -- bdev/bdev_raid.sh@412 -- # verify_raid_bdev_state raid_bdev1 
configuring raid5f 64 3 00:22:06.484 21:44:26 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:22:06.484 21:44:26 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:22:06.484 21:44:26 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:22:06.484 21:44:26 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:22:06.484 21:44:26 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:22:06.484 21:44:26 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:22:06.484 21:44:26 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:22:06.484 21:44:26 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:22:06.484 21:44:26 -- bdev/bdev_raid.sh@125 -- # local tmp 00:22:06.484 21:44:26 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:06.484 21:44:26 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:06.742 21:44:27 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:22:06.742 "name": "raid_bdev1", 00:22:06.742 "uuid": "5d751e0f-f0e3-4771-94a6-517adb948856", 00:22:06.742 "strip_size_kb": 64, 00:22:06.742 "state": "configuring", 00:22:06.742 "raid_level": "raid5f", 00:22:06.742 "superblock": true, 00:22:06.743 "num_base_bdevs": 3, 00:22:06.743 "num_base_bdevs_discovered": 1, 00:22:06.743 "num_base_bdevs_operational": 3, 00:22:06.743 "base_bdevs_list": [ 00:22:06.743 { 00:22:06.743 "name": "pt1", 00:22:06.743 "uuid": "827745aa-d3f8-5d2f-8497-42567595c8c6", 00:22:06.743 "is_configured": true, 00:22:06.743 "data_offset": 2048, 00:22:06.743 "data_size": 63488 00:22:06.743 }, 00:22:06.743 { 00:22:06.743 "name": null, 00:22:06.743 "uuid": "baa46091-317c-5396-b86c-96625a58bda7", 00:22:06.743 "is_configured": false, 00:22:06.743 "data_offset": 2048, 00:22:06.743 "data_size": 63488 00:22:06.743 }, 00:22:06.743 { 00:22:06.743 "name": null, 00:22:06.743 "uuid": "c829b4a0-4e8d-55c7-b850-a6a2a8de1773", 00:22:06.743 "is_configured": false, 00:22:06.743 "data_offset": 2048, 00:22:06.743 "data_size": 63488 00:22:06.743 } 00:22:06.743 ] 00:22:06.743 }' 00:22:06.743 21:44:27 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:22:06.743 21:44:27 -- common/autotest_common.sh@10 -- # set +x 00:22:07.001 21:44:27 -- bdev/bdev_raid.sh@414 -- # '[' 3 -gt 2 ']' 00:22:07.001 21:44:27 -- bdev/bdev_raid.sh@416 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:22:07.259 [2024-12-06 21:44:27.575277] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:22:07.259 [2024-12-06 21:44:27.575345] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:07.259 [2024-12-06 21:44:27.575372] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000009c80 00:22:07.259 [2024-12-06 21:44:27.575387] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:07.259 [2024-12-06 21:44:27.575911] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:07.259 [2024-12-06 21:44:27.575946] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:22:07.259 [2024-12-06 21:44:27.576055] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:22:07.259 [2024-12-06 21:44:27.576087] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:22:07.259 pt2 00:22:07.259 21:44:27 -- 
bdev/bdev_raid.sh@417 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:22:07.516 [2024-12-06 21:44:27.827329] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:22:07.516 21:44:27 -- bdev/bdev_raid.sh@418 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:22:07.516 21:44:27 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:22:07.516 21:44:27 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:22:07.517 21:44:27 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:22:07.517 21:44:27 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:22:07.517 21:44:27 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:22:07.517 21:44:27 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:22:07.517 21:44:27 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:22:07.517 21:44:27 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:22:07.517 21:44:27 -- bdev/bdev_raid.sh@125 -- # local tmp 00:22:07.517 21:44:27 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:07.517 21:44:27 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:07.775 21:44:28 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:22:07.775 "name": "raid_bdev1", 00:22:07.775 "uuid": "5d751e0f-f0e3-4771-94a6-517adb948856", 00:22:07.775 "strip_size_kb": 64, 00:22:07.775 "state": "configuring", 00:22:07.775 "raid_level": "raid5f", 00:22:07.775 "superblock": true, 00:22:07.775 "num_base_bdevs": 3, 00:22:07.775 "num_base_bdevs_discovered": 1, 00:22:07.775 "num_base_bdevs_operational": 3, 00:22:07.775 "base_bdevs_list": [ 00:22:07.775 { 00:22:07.775 "name": "pt1", 00:22:07.775 "uuid": "827745aa-d3f8-5d2f-8497-42567595c8c6", 00:22:07.775 "is_configured": true, 00:22:07.775 "data_offset": 2048, 00:22:07.775 "data_size": 63488 00:22:07.775 }, 00:22:07.775 { 00:22:07.775 "name": null, 00:22:07.775 "uuid": "baa46091-317c-5396-b86c-96625a58bda7", 00:22:07.775 "is_configured": false, 00:22:07.775 "data_offset": 2048, 00:22:07.775 "data_size": 63488 00:22:07.775 }, 00:22:07.775 { 00:22:07.775 "name": null, 00:22:07.775 "uuid": "c829b4a0-4e8d-55c7-b850-a6a2a8de1773", 00:22:07.775 "is_configured": false, 00:22:07.775 "data_offset": 2048, 00:22:07.775 "data_size": 63488 00:22:07.775 } 00:22:07.775 ] 00:22:07.775 }' 00:22:07.775 21:44:28 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:22:07.775 21:44:28 -- common/autotest_common.sh@10 -- # set +x 00:22:08.034 21:44:28 -- bdev/bdev_raid.sh@422 -- # (( i = 1 )) 00:22:08.034 21:44:28 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:22:08.034 21:44:28 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:22:08.293 [2024-12-06 21:44:28.547521] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:22:08.293 [2024-12-06 21:44:28.547779] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:08.293 [2024-12-06 21:44:28.547821] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000009f80 00:22:08.293 [2024-12-06 21:44:28.547835] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:08.293 [2024-12-06 21:44:28.548391] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:08.293 [2024-12-06 21:44:28.548436] 
vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:22:08.293 [2024-12-06 21:44:28.548619] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:22:08.293 [2024-12-06 21:44:28.548645] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:22:08.293 pt2 00:22:08.293 21:44:28 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:22:08.293 21:44:28 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:22:08.293 21:44:28 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:22:08.293 [2024-12-06 21:44:28.739562] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:22:08.293 [2024-12-06 21:44:28.739787] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:08.293 [2024-12-06 21:44:28.739829] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x51600000a280 00:22:08.293 [2024-12-06 21:44:28.739857] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:08.293 [2024-12-06 21:44:28.740368] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:08.293 [2024-12-06 21:44:28.740389] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:22:08.293 [2024-12-06 21:44:28.740493] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3 00:22:08.293 [2024-12-06 21:44:28.740570] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:22:08.293 [2024-12-06 21:44:28.740722] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x516000009980 00:22:08.293 [2024-12-06 21:44:28.740736] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:22:08.293 [2024-12-06 21:44:28.740830] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000005790 00:22:08.293 pt3 00:22:08.293 [2024-12-06 21:44:28.745076] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x516000009980 00:22:08.293 [2024-12-06 21:44:28.745100] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x516000009980 00:22:08.293 [2024-12-06 21:44:28.745259] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:08.293 21:44:28 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:22:08.293 21:44:28 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:22:08.293 21:44:28 -- bdev/bdev_raid.sh@427 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:22:08.293 21:44:28 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:22:08.293 21:44:28 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:22:08.293 21:44:28 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:22:08.293 21:44:28 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:22:08.293 21:44:28 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:22:08.293 21:44:28 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:22:08.293 21:44:28 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:22:08.293 21:44:28 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:22:08.293 21:44:28 -- bdev/bdev_raid.sh@125 -- # local tmp 00:22:08.293 21:44:28 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:08.293 21:44:28 -- bdev/bdev_raid.sh@127 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:08.552 21:44:28 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:22:08.552 "name": "raid_bdev1", 00:22:08.552 "uuid": "5d751e0f-f0e3-4771-94a6-517adb948856", 00:22:08.552 "strip_size_kb": 64, 00:22:08.552 "state": "online", 00:22:08.552 "raid_level": "raid5f", 00:22:08.552 "superblock": true, 00:22:08.552 "num_base_bdevs": 3, 00:22:08.552 "num_base_bdevs_discovered": 3, 00:22:08.552 "num_base_bdevs_operational": 3, 00:22:08.552 "base_bdevs_list": [ 00:22:08.552 { 00:22:08.552 "name": "pt1", 00:22:08.552 "uuid": "827745aa-d3f8-5d2f-8497-42567595c8c6", 00:22:08.552 "is_configured": true, 00:22:08.552 "data_offset": 2048, 00:22:08.552 "data_size": 63488 00:22:08.552 }, 00:22:08.552 { 00:22:08.552 "name": "pt2", 00:22:08.552 "uuid": "baa46091-317c-5396-b86c-96625a58bda7", 00:22:08.552 "is_configured": true, 00:22:08.552 "data_offset": 2048, 00:22:08.552 "data_size": 63488 00:22:08.552 }, 00:22:08.552 { 00:22:08.552 "name": "pt3", 00:22:08.552 "uuid": "c829b4a0-4e8d-55c7-b850-a6a2a8de1773", 00:22:08.552 "is_configured": true, 00:22:08.552 "data_offset": 2048, 00:22:08.552 "data_size": 63488 00:22:08.552 } 00:22:08.552 ] 00:22:08.552 }' 00:22:08.552 21:44:28 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:22:08.552 21:44:28 -- common/autotest_common.sh@10 -- # set +x 00:22:08.811 21:44:29 -- bdev/bdev_raid.sh@430 -- # jq -r '.[] | .uuid' 00:22:08.811 21:44:29 -- bdev/bdev_raid.sh@430 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:22:09.070 [2024-12-06 21:44:29.442043] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:22:09.070 21:44:29 -- bdev/bdev_raid.sh@430 -- # '[' 5d751e0f-f0e3-4771-94a6-517adb948856 '!=' 5d751e0f-f0e3-4771-94a6-517adb948856 ']' 00:22:09.070 21:44:29 -- bdev/bdev_raid.sh@434 -- # has_redundancy raid5f 00:22:09.070 21:44:29 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:22:09.070 21:44:29 -- bdev/bdev_raid.sh@196 -- # return 0 00:22:09.070 21:44:29 -- bdev/bdev_raid.sh@436 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:22:09.330 [2024-12-06 21:44:29.682018] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:22:09.330 21:44:29 -- bdev/bdev_raid.sh@439 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:22:09.330 21:44:29 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:22:09.330 21:44:29 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:22:09.330 21:44:29 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:22:09.330 21:44:29 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:22:09.330 21:44:29 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:22:09.330 21:44:29 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:22:09.330 21:44:29 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:22:09.330 21:44:29 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:22:09.330 21:44:29 -- bdev/bdev_raid.sh@125 -- # local tmp 00:22:09.330 21:44:29 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:09.330 21:44:29 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:09.589 21:44:30 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:22:09.589 "name": "raid_bdev1", 00:22:09.589 "uuid": "5d751e0f-f0e3-4771-94a6-517adb948856", 00:22:09.589 "strip_size_kb": 64, 
00:22:09.589 "state": "online", 00:22:09.589 "raid_level": "raid5f", 00:22:09.589 "superblock": true, 00:22:09.589 "num_base_bdevs": 3, 00:22:09.589 "num_base_bdevs_discovered": 2, 00:22:09.589 "num_base_bdevs_operational": 2, 00:22:09.589 "base_bdevs_list": [ 00:22:09.589 { 00:22:09.589 "name": null, 00:22:09.589 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:09.589 "is_configured": false, 00:22:09.589 "data_offset": 2048, 00:22:09.589 "data_size": 63488 00:22:09.589 }, 00:22:09.589 { 00:22:09.589 "name": "pt2", 00:22:09.589 "uuid": "baa46091-317c-5396-b86c-96625a58bda7", 00:22:09.589 "is_configured": true, 00:22:09.589 "data_offset": 2048, 00:22:09.589 "data_size": 63488 00:22:09.589 }, 00:22:09.589 { 00:22:09.589 "name": "pt3", 00:22:09.589 "uuid": "c829b4a0-4e8d-55c7-b850-a6a2a8de1773", 00:22:09.589 "is_configured": true, 00:22:09.589 "data_offset": 2048, 00:22:09.589 "data_size": 63488 00:22:09.589 } 00:22:09.589 ] 00:22:09.589 }' 00:22:09.589 21:44:30 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:22:09.589 21:44:30 -- common/autotest_common.sh@10 -- # set +x 00:22:09.849 21:44:30 -- bdev/bdev_raid.sh@442 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:22:10.108 [2024-12-06 21:44:30.502171] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:22:10.108 [2024-12-06 21:44:30.502201] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:22:10.108 [2024-12-06 21:44:30.502273] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:10.108 [2024-12-06 21:44:30.502335] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:22:10.108 [2024-12-06 21:44:30.502351] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000009980 name raid_bdev1, state offline 00:22:10.108 21:44:30 -- bdev/bdev_raid.sh@443 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:10.108 21:44:30 -- bdev/bdev_raid.sh@443 -- # jq -r '.[]' 00:22:10.367 21:44:30 -- bdev/bdev_raid.sh@443 -- # raid_bdev= 00:22:10.367 21:44:30 -- bdev/bdev_raid.sh@444 -- # '[' -n '' ']' 00:22:10.367 21:44:30 -- bdev/bdev_raid.sh@449 -- # (( i = 1 )) 00:22:10.367 21:44:30 -- bdev/bdev_raid.sh@449 -- # (( i < num_base_bdevs )) 00:22:10.367 21:44:30 -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:22:10.626 21:44:30 -- bdev/bdev_raid.sh@449 -- # (( i++ )) 00:22:10.626 21:44:30 -- bdev/bdev_raid.sh@449 -- # (( i < num_base_bdevs )) 00:22:10.626 21:44:30 -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:22:10.883 21:44:31 -- bdev/bdev_raid.sh@449 -- # (( i++ )) 00:22:10.883 21:44:31 -- bdev/bdev_raid.sh@449 -- # (( i < num_base_bdevs )) 00:22:10.883 21:44:31 -- bdev/bdev_raid.sh@454 -- # (( i = 1 )) 00:22:10.883 21:44:31 -- bdev/bdev_raid.sh@454 -- # (( i < num_base_bdevs - 1 )) 00:22:10.883 21:44:31 -- bdev/bdev_raid.sh@455 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:22:10.883 [2024-12-06 21:44:31.354322] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:22:10.883 [2024-12-06 21:44:31.354388] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 
00:22:10.883 [2024-12-06 21:44:31.354414] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x51600000a580 00:22:10.883 [2024-12-06 21:44:31.354430] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:10.883 [2024-12-06 21:44:31.356599] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:10.883 [2024-12-06 21:44:31.356641] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:22:10.883 [2024-12-06 21:44:31.356728] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:22:10.884 [2024-12-06 21:44:31.356778] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:22:10.884 pt2 00:22:10.884 21:44:31 -- bdev/bdev_raid.sh@458 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 2 00:22:10.884 21:44:31 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:22:10.884 21:44:31 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:22:10.884 21:44:31 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:22:10.884 21:44:31 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:22:10.884 21:44:31 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:22:10.884 21:44:31 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:22:10.884 21:44:31 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:22:10.884 21:44:31 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:22:10.884 21:44:31 -- bdev/bdev_raid.sh@125 -- # local tmp 00:22:10.884 21:44:31 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:10.884 21:44:31 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:11.142 21:44:31 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:22:11.142 "name": "raid_bdev1", 00:22:11.142 "uuid": "5d751e0f-f0e3-4771-94a6-517adb948856", 00:22:11.142 "strip_size_kb": 64, 00:22:11.142 "state": "configuring", 00:22:11.142 "raid_level": "raid5f", 00:22:11.142 "superblock": true, 00:22:11.142 "num_base_bdevs": 3, 00:22:11.142 "num_base_bdevs_discovered": 1, 00:22:11.142 "num_base_bdevs_operational": 2, 00:22:11.142 "base_bdevs_list": [ 00:22:11.142 { 00:22:11.142 "name": null, 00:22:11.142 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:11.142 "is_configured": false, 00:22:11.142 "data_offset": 2048, 00:22:11.142 "data_size": 63488 00:22:11.142 }, 00:22:11.142 { 00:22:11.142 "name": "pt2", 00:22:11.142 "uuid": "baa46091-317c-5396-b86c-96625a58bda7", 00:22:11.142 "is_configured": true, 00:22:11.142 "data_offset": 2048, 00:22:11.142 "data_size": 63488 00:22:11.142 }, 00:22:11.142 { 00:22:11.142 "name": null, 00:22:11.142 "uuid": "c829b4a0-4e8d-55c7-b850-a6a2a8de1773", 00:22:11.142 "is_configured": false, 00:22:11.142 "data_offset": 2048, 00:22:11.142 "data_size": 63488 00:22:11.142 } 00:22:11.142 ] 00:22:11.142 }' 00:22:11.142 21:44:31 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:22:11.142 21:44:31 -- common/autotest_common.sh@10 -- # set +x 00:22:11.400 21:44:31 -- bdev/bdev_raid.sh@454 -- # (( i++ )) 00:22:11.400 21:44:31 -- bdev/bdev_raid.sh@454 -- # (( i < num_base_bdevs - 1 )) 00:22:11.400 21:44:31 -- bdev/bdev_raid.sh@462 -- # i=2 00:22:11.400 21:44:31 -- bdev/bdev_raid.sh@463 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:22:11.658 [2024-12-06 21:44:31.958480] 
vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:22:11.658 [2024-12-06 21:44:31.958562] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:11.658 [2024-12-06 21:44:31.958591] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x51600000ae80 00:22:11.658 [2024-12-06 21:44:31.958606] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:11.658 [2024-12-06 21:44:31.959062] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:11.658 [2024-12-06 21:44:31.959095] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:22:11.658 [2024-12-06 21:44:31.959213] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3 00:22:11.658 [2024-12-06 21:44:31.959241] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:22:11.658 [2024-12-06 21:44:31.959355] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x51600000ab80 00:22:11.658 [2024-12-06 21:44:31.959373] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:22:11.658 [2024-12-06 21:44:31.959462] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000005860 00:22:11.659 pt3 00:22:11.659 [2024-12-06 21:44:31.963606] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x51600000ab80 00:22:11.659 [2024-12-06 21:44:31.963629] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x51600000ab80 00:22:11.659 [2024-12-06 21:44:31.963937] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:11.659 21:44:31 -- bdev/bdev_raid.sh@466 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:22:11.659 21:44:31 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:22:11.659 21:44:31 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:22:11.659 21:44:31 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:22:11.659 21:44:31 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:22:11.659 21:44:31 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:22:11.659 21:44:31 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:22:11.659 21:44:31 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:22:11.659 21:44:31 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:22:11.659 21:44:31 -- bdev/bdev_raid.sh@125 -- # local tmp 00:22:11.659 21:44:31 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:11.659 21:44:31 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:11.917 21:44:32 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:22:11.917 "name": "raid_bdev1", 00:22:11.917 "uuid": "5d751e0f-f0e3-4771-94a6-517adb948856", 00:22:11.917 "strip_size_kb": 64, 00:22:11.917 "state": "online", 00:22:11.917 "raid_level": "raid5f", 00:22:11.917 "superblock": true, 00:22:11.917 "num_base_bdevs": 3, 00:22:11.917 "num_base_bdevs_discovered": 2, 00:22:11.917 "num_base_bdevs_operational": 2, 00:22:11.917 "base_bdevs_list": [ 00:22:11.917 { 00:22:11.917 "name": null, 00:22:11.917 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:11.917 "is_configured": false, 00:22:11.917 "data_offset": 2048, 00:22:11.917 "data_size": 63488 00:22:11.917 }, 00:22:11.917 { 00:22:11.917 "name": "pt2", 00:22:11.917 "uuid": "baa46091-317c-5396-b86c-96625a58bda7", 
00:22:11.917 "is_configured": true, 00:22:11.917 "data_offset": 2048, 00:22:11.917 "data_size": 63488 00:22:11.917 }, 00:22:11.917 { 00:22:11.917 "name": "pt3", 00:22:11.917 "uuid": "c829b4a0-4e8d-55c7-b850-a6a2a8de1773", 00:22:11.917 "is_configured": true, 00:22:11.917 "data_offset": 2048, 00:22:11.917 "data_size": 63488 00:22:11.917 } 00:22:11.917 ] 00:22:11.917 }' 00:22:11.917 21:44:32 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:22:11.917 21:44:32 -- common/autotest_common.sh@10 -- # set +x 00:22:12.175 21:44:32 -- bdev/bdev_raid.sh@468 -- # '[' 3 -gt 2 ']' 00:22:12.175 21:44:32 -- bdev/bdev_raid.sh@470 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:22:12.433 [2024-12-06 21:44:32.712496] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:22:12.433 [2024-12-06 21:44:32.712528] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:22:12.433 [2024-12-06 21:44:32.712599] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:12.433 [2024-12-06 21:44:32.712668] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:22:12.433 [2024-12-06 21:44:32.712682] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x51600000ab80 name raid_bdev1, state offline 00:22:12.433 21:44:32 -- bdev/bdev_raid.sh@471 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:12.433 21:44:32 -- bdev/bdev_raid.sh@471 -- # jq -r '.[]' 00:22:12.691 21:44:32 -- bdev/bdev_raid.sh@471 -- # raid_bdev= 00:22:12.691 21:44:32 -- bdev/bdev_raid.sh@472 -- # '[' -n '' ']' 00:22:12.691 21:44:32 -- bdev/bdev_raid.sh@478 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:22:12.948 [2024-12-06 21:44:33.194804] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:22:12.948 [2024-12-06 21:44:33.194901] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:12.948 [2024-12-06 21:44:33.194947] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x51600000b180 00:22:12.948 [2024-12-06 21:44:33.194961] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:12.948 [2024-12-06 21:44:33.197308] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:12.948 [2024-12-06 21:44:33.197347] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:22:12.948 [2024-12-06 21:44:33.197679] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1 00:22:12.948 [2024-12-06 21:44:33.197739] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:22:12.948 pt1 00:22:12.948 21:44:33 -- bdev/bdev_raid.sh@481 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:22:12.948 21:44:33 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:22:12.948 21:44:33 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:22:12.948 21:44:33 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:22:12.948 21:44:33 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:22:12.948 21:44:33 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:22:12.948 21:44:33 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:22:12.948 21:44:33 -- 
bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:22:12.948 21:44:33 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:22:12.948 21:44:33 -- bdev/bdev_raid.sh@125 -- # local tmp 00:22:12.948 21:44:33 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:12.948 21:44:33 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:12.948 21:44:33 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:22:12.948 "name": "raid_bdev1", 00:22:12.948 "uuid": "5d751e0f-f0e3-4771-94a6-517adb948856", 00:22:12.948 "strip_size_kb": 64, 00:22:12.948 "state": "configuring", 00:22:12.948 "raid_level": "raid5f", 00:22:12.948 "superblock": true, 00:22:12.948 "num_base_bdevs": 3, 00:22:12.948 "num_base_bdevs_discovered": 1, 00:22:12.948 "num_base_bdevs_operational": 3, 00:22:12.948 "base_bdevs_list": [ 00:22:12.948 { 00:22:12.948 "name": "pt1", 00:22:12.948 "uuid": "827745aa-d3f8-5d2f-8497-42567595c8c6", 00:22:12.948 "is_configured": true, 00:22:12.948 "data_offset": 2048, 00:22:12.948 "data_size": 63488 00:22:12.948 }, 00:22:12.948 { 00:22:12.948 "name": null, 00:22:12.948 "uuid": "baa46091-317c-5396-b86c-96625a58bda7", 00:22:12.948 "is_configured": false, 00:22:12.948 "data_offset": 2048, 00:22:12.948 "data_size": 63488 00:22:12.948 }, 00:22:12.948 { 00:22:12.948 "name": null, 00:22:12.948 "uuid": "c829b4a0-4e8d-55c7-b850-a6a2a8de1773", 00:22:12.948 "is_configured": false, 00:22:12.948 "data_offset": 2048, 00:22:12.948 "data_size": 63488 00:22:12.948 } 00:22:12.948 ] 00:22:12.948 }' 00:22:12.948 21:44:33 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:22:12.948 21:44:33 -- common/autotest_common.sh@10 -- # set +x 00:22:13.205 21:44:33 -- bdev/bdev_raid.sh@484 -- # (( i = 1 )) 00:22:13.205 21:44:33 -- bdev/bdev_raid.sh@484 -- # (( i < num_base_bdevs )) 00:22:13.205 21:44:33 -- bdev/bdev_raid.sh@485 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:22:13.462 21:44:33 -- bdev/bdev_raid.sh@484 -- # (( i++ )) 00:22:13.462 21:44:33 -- bdev/bdev_raid.sh@484 -- # (( i < num_base_bdevs )) 00:22:13.462 21:44:33 -- bdev/bdev_raid.sh@485 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:22:13.720 21:44:34 -- bdev/bdev_raid.sh@484 -- # (( i++ )) 00:22:13.720 21:44:34 -- bdev/bdev_raid.sh@484 -- # (( i < num_base_bdevs )) 00:22:13.720 21:44:34 -- bdev/bdev_raid.sh@489 -- # i=2 00:22:13.720 21:44:34 -- bdev/bdev_raid.sh@490 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:22:13.977 [2024-12-06 21:44:34.239153] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:22:13.977 [2024-12-06 21:44:34.239213] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:13.977 [2024-12-06 21:44:34.239241] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x51600000ba80 00:22:13.977 [2024-12-06 21:44:34.239253] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:13.977 [2024-12-06 21:44:34.239734] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:13.977 [2024-12-06 21:44:34.239766] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:22:13.977 [2024-12-06 21:44:34.240250] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on 
bdev pt3 00:22:13.977 [2024-12-06 21:44:34.240291] bdev_raid.c:3237:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt3 (4) greater than existing raid bdev raid_bdev1 (2) 00:22:13.977 [2024-12-06 21:44:34.240308] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:22:13.977 [2024-12-06 21:44:34.240331] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x51600000b780 name raid_bdev1, state configuring 00:22:13.977 [2024-12-06 21:44:34.240412] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:22:13.977 pt3 00:22:13.977 21:44:34 -- bdev/bdev_raid.sh@494 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 2 00:22:13.977 21:44:34 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:22:13.977 21:44:34 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:22:13.977 21:44:34 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:22:13.977 21:44:34 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:22:13.977 21:44:34 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:22:13.977 21:44:34 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:22:13.977 21:44:34 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:22:13.977 21:44:34 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:22:13.977 21:44:34 -- bdev/bdev_raid.sh@125 -- # local tmp 00:22:13.977 21:44:34 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:13.977 21:44:34 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:13.977 21:44:34 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:22:13.977 "name": "raid_bdev1", 00:22:13.977 "uuid": "5d751e0f-f0e3-4771-94a6-517adb948856", 00:22:13.977 "strip_size_kb": 64, 00:22:13.977 "state": "configuring", 00:22:13.977 "raid_level": "raid5f", 00:22:13.977 "superblock": true, 00:22:13.977 "num_base_bdevs": 3, 00:22:13.977 "num_base_bdevs_discovered": 1, 00:22:13.977 "num_base_bdevs_operational": 2, 00:22:13.977 "base_bdevs_list": [ 00:22:13.977 { 00:22:13.977 "name": null, 00:22:13.977 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:13.977 "is_configured": false, 00:22:13.977 "data_offset": 2048, 00:22:13.977 "data_size": 63488 00:22:13.977 }, 00:22:13.977 { 00:22:13.977 "name": null, 00:22:13.977 "uuid": "baa46091-317c-5396-b86c-96625a58bda7", 00:22:13.977 "is_configured": false, 00:22:13.977 "data_offset": 2048, 00:22:13.977 "data_size": 63488 00:22:13.977 }, 00:22:13.977 { 00:22:13.977 "name": "pt3", 00:22:13.977 "uuid": "c829b4a0-4e8d-55c7-b850-a6a2a8de1773", 00:22:13.977 "is_configured": true, 00:22:13.977 "data_offset": 2048, 00:22:13.977 "data_size": 63488 00:22:13.977 } 00:22:13.977 ] 00:22:13.977 }' 00:22:13.977 21:44:34 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:22:13.977 21:44:34 -- common/autotest_common.sh@10 -- # set +x 00:22:14.541 21:44:34 -- bdev/bdev_raid.sh@497 -- # (( i = 1 )) 00:22:14.541 21:44:34 -- bdev/bdev_raid.sh@497 -- # (( i < num_base_bdevs - 1 )) 00:22:14.541 21:44:34 -- bdev/bdev_raid.sh@498 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:22:14.541 [2024-12-06 21:44:34.979632] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:22:14.541 [2024-12-06 21:44:34.979931] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:14.541 [2024-12-06 
21:44:34.979968] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x51600000c080 00:22:14.541 [2024-12-06 21:44:34.979985] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:14.541 [2024-12-06 21:44:34.980520] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:14.541 [2024-12-06 21:44:34.980563] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:22:14.541 [2024-12-06 21:44:34.980666] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:22:14.541 [2024-12-06 21:44:34.980702] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:22:14.541 [2024-12-06 21:44:34.981136] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x51600000bd80 00:22:14.542 [2024-12-06 21:44:34.981164] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:22:14.542 [2024-12-06 21:44:34.981271] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000005930 00:22:14.542 pt2 00:22:14.542 [2024-12-06 21:44:34.985503] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x51600000bd80 00:22:14.542 [2024-12-06 21:44:34.985524] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x51600000bd80 00:22:14.542 [2024-12-06 21:44:34.985758] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:14.542 21:44:35 -- bdev/bdev_raid.sh@497 -- # (( i++ )) 00:22:14.542 21:44:35 -- bdev/bdev_raid.sh@497 -- # (( i < num_base_bdevs - 1 )) 00:22:14.542 21:44:35 -- bdev/bdev_raid.sh@502 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:22:14.542 21:44:35 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:22:14.542 21:44:35 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:22:14.542 21:44:35 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:22:14.542 21:44:35 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:22:14.542 21:44:35 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:22:14.542 21:44:35 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:22:14.542 21:44:35 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:22:14.542 21:44:35 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:22:14.542 21:44:35 -- bdev/bdev_raid.sh@125 -- # local tmp 00:22:14.542 21:44:35 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:14.542 21:44:35 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:14.799 21:44:35 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:22:14.799 "name": "raid_bdev1", 00:22:14.799 "uuid": "5d751e0f-f0e3-4771-94a6-517adb948856", 00:22:14.799 "strip_size_kb": 64, 00:22:14.799 "state": "online", 00:22:14.799 "raid_level": "raid5f", 00:22:14.799 "superblock": true, 00:22:14.799 "num_base_bdevs": 3, 00:22:14.799 "num_base_bdevs_discovered": 2, 00:22:14.799 "num_base_bdevs_operational": 2, 00:22:14.799 "base_bdevs_list": [ 00:22:14.799 { 00:22:14.799 "name": null, 00:22:14.799 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:14.799 "is_configured": false, 00:22:14.799 "data_offset": 2048, 00:22:14.799 "data_size": 63488 00:22:14.799 }, 00:22:14.799 { 00:22:14.799 "name": "pt2", 00:22:14.799 "uuid": "baa46091-317c-5396-b86c-96625a58bda7", 00:22:14.799 "is_configured": true, 00:22:14.799 "data_offset": 2048, 
00:22:14.799 "data_size": 63488 00:22:14.799 }, 00:22:14.799 { 00:22:14.799 "name": "pt3", 00:22:14.799 "uuid": "c829b4a0-4e8d-55c7-b850-a6a2a8de1773", 00:22:14.799 "is_configured": true, 00:22:14.800 "data_offset": 2048, 00:22:14.800 "data_size": 63488 00:22:14.800 } 00:22:14.800 ] 00:22:14.800 }' 00:22:14.800 21:44:35 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:22:14.800 21:44:35 -- common/autotest_common.sh@10 -- # set +x 00:22:15.365 21:44:35 -- bdev/bdev_raid.sh@506 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:22:15.365 21:44:35 -- bdev/bdev_raid.sh@506 -- # jq -r '.[] | .uuid' 00:22:15.365 [2024-12-06 21:44:35.810624] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:22:15.365 21:44:35 -- bdev/bdev_raid.sh@506 -- # '[' 5d751e0f-f0e3-4771-94a6-517adb948856 '!=' 5d751e0f-f0e3-4771-94a6-517adb948856 ']' 00:22:15.365 21:44:35 -- bdev/bdev_raid.sh@511 -- # killprocess 82798 00:22:15.365 21:44:35 -- common/autotest_common.sh@936 -- # '[' -z 82798 ']' 00:22:15.365 21:44:35 -- common/autotest_common.sh@940 -- # kill -0 82798 00:22:15.365 21:44:35 -- common/autotest_common.sh@941 -- # uname 00:22:15.365 21:44:35 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:22:15.365 21:44:35 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 82798 00:22:15.365 killing process with pid 82798 00:22:15.365 21:44:35 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:22:15.365 21:44:35 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:22:15.365 21:44:35 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 82798' 00:22:15.365 21:44:35 -- common/autotest_common.sh@955 -- # kill 82798 00:22:15.365 [2024-12-06 21:44:35.861391] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:22:15.365 21:44:35 -- common/autotest_common.sh@960 -- # wait 82798 00:22:15.365 [2024-12-06 21:44:35.861496] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:15.365 [2024-12-06 21:44:35.861578] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:22:15.365 [2024-12-06 21:44:35.861593] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x51600000bd80 name raid_bdev1, state offline 00:22:15.623 [2024-12-06 21:44:36.057824] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:22:16.559 21:44:36 -- bdev/bdev_raid.sh@513 -- # return 0 00:22:16.559 00:22:16.559 real 0m14.981s 00:22:16.559 user 0m25.868s 00:22:16.559 sys 0m2.217s 00:22:16.559 21:44:36 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:22:16.559 21:44:36 -- common/autotest_common.sh@10 -- # set +x 00:22:16.559 ************************************ 00:22:16.559 END TEST raid5f_superblock_test 00:22:16.559 ************************************ 00:22:16.559 21:44:37 -- bdev/bdev_raid.sh@747 -- # '[' true = true ']' 00:22:16.559 21:44:37 -- bdev/bdev_raid.sh@748 -- # run_test raid5f_rebuild_test raid_rebuild_test raid5f 3 false false 00:22:16.559 21:44:37 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:22:16.559 21:44:37 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:22:16.559 21:44:37 -- common/autotest_common.sh@10 -- # set +x 00:22:16.559 ************************************ 00:22:16.559 START TEST raid5f_rebuild_test 00:22:16.559 ************************************ 00:22:16.559 21:44:37 -- common/autotest_common.sh@1114 -- # raid_rebuild_test raid5f 3 false 
false 00:22:16.559 21:44:37 -- bdev/bdev_raid.sh@517 -- # local raid_level=raid5f 00:22:16.559 21:44:37 -- bdev/bdev_raid.sh@518 -- # local num_base_bdevs=3 00:22:16.559 21:44:37 -- bdev/bdev_raid.sh@519 -- # local superblock=false 00:22:16.559 21:44:37 -- bdev/bdev_raid.sh@520 -- # local background_io=false 00:22:16.559 21:44:37 -- bdev/bdev_raid.sh@521 -- # (( i = 1 )) 00:22:16.559 21:44:37 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:22:16.559 21:44:37 -- bdev/bdev_raid.sh@523 -- # echo BaseBdev1 00:22:16.559 21:44:37 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:22:16.559 21:44:37 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:22:16.559 21:44:37 -- bdev/bdev_raid.sh@523 -- # echo BaseBdev2 00:22:16.559 21:44:37 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:22:16.559 21:44:37 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:22:16.559 21:44:37 -- bdev/bdev_raid.sh@523 -- # echo BaseBdev3 00:22:16.559 21:44:37 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:22:16.559 21:44:37 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:22:16.559 21:44:37 -- bdev/bdev_raid.sh@521 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:22:16.559 21:44:37 -- bdev/bdev_raid.sh@521 -- # local base_bdevs 00:22:16.559 21:44:37 -- bdev/bdev_raid.sh@522 -- # local raid_bdev_name=raid_bdev1 00:22:16.559 21:44:37 -- bdev/bdev_raid.sh@523 -- # local strip_size 00:22:16.559 21:44:37 -- bdev/bdev_raid.sh@524 -- # local create_arg 00:22:16.559 21:44:37 -- bdev/bdev_raid.sh@525 -- # local raid_bdev_size 00:22:16.559 21:44:37 -- bdev/bdev_raid.sh@526 -- # local data_offset 00:22:16.559 21:44:37 -- bdev/bdev_raid.sh@528 -- # '[' raid5f '!=' raid1 ']' 00:22:16.559 21:44:37 -- bdev/bdev_raid.sh@529 -- # '[' false = true ']' 00:22:16.559 21:44:37 -- bdev/bdev_raid.sh@533 -- # strip_size=64 00:22:16.559 21:44:37 -- bdev/bdev_raid.sh@534 -- # create_arg+=' -z 64' 00:22:16.559 21:44:37 -- bdev/bdev_raid.sh@539 -- # '[' false = true ']' 00:22:16.559 21:44:37 -- bdev/bdev_raid.sh@544 -- # raid_pid=83335 00:22:16.559 21:44:37 -- bdev/bdev_raid.sh@545 -- # waitforlisten 83335 /var/tmp/spdk-raid.sock 00:22:16.559 21:44:37 -- common/autotest_common.sh@829 -- # '[' -z 83335 ']' 00:22:16.559 21:44:37 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:22:16.559 21:44:37 -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:16.559 21:44:37 -- bdev/bdev_raid.sh@543 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:22:16.559 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:22:16.559 21:44:37 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:22:16.559 21:44:37 -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:16.559 21:44:37 -- common/autotest_common.sh@10 -- # set +x 00:22:16.818 [2024-12-06 21:44:37.087101] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:22:16.818 [2024-12-06 21:44:37.087269] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83335 ] 00:22:16.818 I/O size of 3145728 is greater than zero copy threshold (65536). 00:22:16.818 Zero copy mechanism will not be used. 
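The rebuild test drives its workload through SPDK's bdevperf example app, started idle (-z) on a private RPC socket so the script can assemble the raid5f bdev over that same socket before the 60-second randrw run begins; the 3 MiB I/O size (-o 3M) is what trips the zero-copy notice above. A minimal sketch of the same launch, reusing the command traced above (the harness actually waits with its waitforlisten helper; the polling loop below is an illustrative stand-in):

    # start bdevperf idle on a private RPC socket, then configure it via rpc.py
    SPDK=/home/vagrant/spdk_repo/spdk
    SOCK=/var/tmp/spdk-raid.sock
    "$SPDK/build/examples/bdevperf" -r "$SOCK" -T raid_bdev1 -t 60 \
        -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid &
    raid_pid=$!
    until [ -S "$SOCK" ]; do sleep 0.1; done   # stand-in for: waitforlisten "$raid_pid" "$SOCK"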
00:22:16.818 [2024-12-06 21:44:37.261100] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:17.077 [2024-12-06 21:44:37.486999] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:17.335 [2024-12-06 21:44:37.629272] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:22:17.593 21:44:37 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:17.593 21:44:37 -- common/autotest_common.sh@862 -- # return 0 00:22:17.593 21:44:38 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:22:17.594 21:44:38 -- bdev/bdev_raid.sh@549 -- # '[' false = true ']' 00:22:17.594 21:44:38 -- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:22:17.852 BaseBdev1 00:22:17.852 21:44:38 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:22:17.852 21:44:38 -- bdev/bdev_raid.sh@549 -- # '[' false = true ']' 00:22:17.852 21:44:38 -- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:22:18.111 BaseBdev2 00:22:18.111 21:44:38 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:22:18.111 21:44:38 -- bdev/bdev_raid.sh@549 -- # '[' false = true ']' 00:22:18.111 21:44:38 -- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:22:18.111 BaseBdev3 00:22:18.370 21:44:38 -- bdev/bdev_raid.sh@558 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:22:18.370 spare_malloc 00:22:18.630 21:44:38 -- bdev/bdev_raid.sh@559 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:22:18.630 spare_delay 00:22:18.630 21:44:39 -- bdev/bdev_raid.sh@560 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:22:18.889 [2024-12-06 21:44:39.282991] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:22:18.889 [2024-12-06 21:44:39.283046] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:18.889 [2024-12-06 21:44:39.283067] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000008180 00:22:18.889 [2024-12-06 21:44:39.283081] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:18.889 [2024-12-06 21:44:39.285264] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:18.889 [2024-12-06 21:44:39.285303] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:22:18.889 spare 00:22:18.889 21:44:39 -- bdev/bdev_raid.sh@563 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n raid_bdev1 00:22:19.148 [2024-12-06 21:44:39.471074] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:22:19.148 [2024-12-06 21:44:39.473105] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:22:19.148 [2024-12-06 21:44:39.473175] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:22:19.148 [2024-12-06 21:44:39.473257] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x516000008780 00:22:19.148 
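By this point the script has built the whole test topology one RPC at a time: three 32 MiB malloc base bdevs, a spare chained as malloc -> delay -> passthru, and the raid5f bdev over the three bases. A condensed sketch of that same sequence, with the commands copied from the traces above ($RPC is shorthand introduced here for readability):

    RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    for b in BaseBdev1 BaseBdev2 BaseBdev3; do
        $RPC bdev_malloc_create 32 512 -b "$b"      # 32 MiB, 512 B blocks
    done
    $RPC bdev_malloc_create 32 512 -b spare_malloc
    $RPC bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000
    $RPC bdev_passthru_create -b spare_delay -p spare
    $RPC bdev_raid_create -z 64 -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n raid_bdev1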
[2024-12-06 21:44:39.473270] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:22:19.148 [2024-12-06 21:44:39.473408] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000005790 00:22:19.148 [2024-12-06 21:44:39.477768] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x516000008780 00:22:19.148 [2024-12-06 21:44:39.477813] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x516000008780 00:22:19.148 [2024-12-06 21:44:39.478023] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:19.148 21:44:39 -- bdev/bdev_raid.sh@564 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:22:19.148 21:44:39 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:22:19.148 21:44:39 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:22:19.148 21:44:39 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:22:19.148 21:44:39 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:22:19.148 21:44:39 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:22:19.148 21:44:39 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:22:19.148 21:44:39 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:22:19.148 21:44:39 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:22:19.148 21:44:39 -- bdev/bdev_raid.sh@125 -- # local tmp 00:22:19.148 21:44:39 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:19.148 21:44:39 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:19.407 21:44:39 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:22:19.407 "name": "raid_bdev1", 00:22:19.407 "uuid": "1a1fce50-01f8-41ae-b1ad-1e73cf2f2d87", 00:22:19.407 "strip_size_kb": 64, 00:22:19.407 "state": "online", 00:22:19.407 "raid_level": "raid5f", 00:22:19.407 "superblock": false, 00:22:19.407 "num_base_bdevs": 3, 00:22:19.407 "num_base_bdevs_discovered": 3, 00:22:19.407 "num_base_bdevs_operational": 3, 00:22:19.407 "base_bdevs_list": [ 00:22:19.407 { 00:22:19.407 "name": "BaseBdev1", 00:22:19.407 "uuid": "9753c7be-a138-4a0c-8132-76c31b957c09", 00:22:19.407 "is_configured": true, 00:22:19.407 "data_offset": 0, 00:22:19.407 "data_size": 65536 00:22:19.407 }, 00:22:19.407 { 00:22:19.407 "name": "BaseBdev2", 00:22:19.407 "uuid": "fc7210dd-9ac0-41dc-a51d-a79c7396da72", 00:22:19.407 "is_configured": true, 00:22:19.407 "data_offset": 0, 00:22:19.407 "data_size": 65536 00:22:19.407 }, 00:22:19.407 { 00:22:19.407 "name": "BaseBdev3", 00:22:19.407 "uuid": "97b39a1d-a7dd-4012-a907-41bf3372d683", 00:22:19.407 "is_configured": true, 00:22:19.407 "data_offset": 0, 00:22:19.407 "data_size": 65536 00:22:19.407 } 00:22:19.407 ] 00:22:19.407 }' 00:22:19.407 21:44:39 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:22:19.407 21:44:39 -- common/autotest_common.sh@10 -- # set +x 00:22:19.667 21:44:40 -- bdev/bdev_raid.sh@567 -- # jq -r '.[].num_blocks' 00:22:19.667 21:44:40 -- bdev/bdev_raid.sh@567 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:22:19.926 [2024-12-06 21:44:40.222819] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:22:19.926 21:44:40 -- bdev/bdev_raid.sh@567 -- # raid_bdev_size=131072 00:22:19.926 21:44:40 -- bdev/bdev_raid.sh@570 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 
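The size probed above checks out for raid5f over three equal bases: raid5f reserves one strip per stripe for parity, so two thirds of the raw capacity is usable. Working the logged numbers (num_blocks, blocklen, and the per-base data_size are from the traces; the parity fraction is the raid5f design assumption):

    # per base: data_size 65536 blocks * 512 B = 32 MiB; 3 bases = 96 MiB raw
    # usable = raw * (3 - 1) / 3 = 64 MiB = 131072 blocks  -- matches raid_bdev_size=131072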
00:22:19.926 21:44:40 -- bdev/bdev_raid.sh@570 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:22:20.185 21:44:40 -- bdev/bdev_raid.sh@570 -- # data_offset=0 00:22:20.185 21:44:40 -- bdev/bdev_raid.sh@572 -- # '[' false = true ']' 00:22:20.185 21:44:40 -- bdev/bdev_raid.sh@576 -- # local write_unit_size 00:22:20.185 21:44:40 -- bdev/bdev_raid.sh@579 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid_bdev1 /dev/nbd0 00:22:20.186 21:44:40 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:22:20.186 21:44:40 -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:22:20.186 21:44:40 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:22:20.186 21:44:40 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:22:20.186 21:44:40 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:22:20.186 21:44:40 -- bdev/nbd_common.sh@12 -- # local i 00:22:20.186 21:44:40 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:22:20.186 21:44:40 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:22:20.186 21:44:40 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:22:20.445 [2024-12-06 21:44:40.727112] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000005930 00:22:20.445 /dev/nbd0 00:22:20.445 21:44:40 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:22:20.445 21:44:40 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:22:20.445 21:44:40 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:22:20.445 21:44:40 -- common/autotest_common.sh@867 -- # local i 00:22:20.445 21:44:40 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:22:20.445 21:44:40 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:22:20.445 21:44:40 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:22:20.445 21:44:40 -- common/autotest_common.sh@871 -- # break 00:22:20.445 21:44:40 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:22:20.445 21:44:40 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:22:20.445 21:44:40 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:22:20.445 1+0 records in 00:22:20.445 1+0 records out 00:22:20.445 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000248059 s, 16.5 MB/s 00:22:20.445 21:44:40 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:20.445 21:44:40 -- common/autotest_common.sh@884 -- # size=4096 00:22:20.445 21:44:40 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:20.445 21:44:40 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:22:20.445 21:44:40 -- common/autotest_common.sh@887 -- # return 0 00:22:20.445 21:44:40 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:22:20.445 21:44:40 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:22:20.445 21:44:40 -- bdev/bdev_raid.sh@580 -- # '[' raid5f = raid5f ']' 00:22:20.445 21:44:40 -- bdev/bdev_raid.sh@581 -- # write_unit_size=256 00:22:20.445 21:44:40 -- bdev/bdev_raid.sh@582 -- # echo 128 00:22:20.445 21:44:40 -- bdev/bdev_raid.sh@586 -- # dd if=/dev/urandom of=/dev/nbd0 bs=131072 count=512 oflag=direct 00:22:20.704 512+0 records in 00:22:20.704 512+0 records out 00:22:20.704 67108864 bytes (67 MB, 64 MiB) copied, 0.340719 s, 197 MB/s 00:22:20.704 21:44:41 -- bdev/bdev_raid.sh@587 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:22:20.704 21:44:41 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 
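Condensed, the I/O exercise just traced is a plain NBD round trip: export the array as a kernel block device, fill it completely with direct, stripe-aligned writes, then detach. The 131072-byte block size matches write_unit_size (256 blocks x 512 B), and 512 such writes equal the array's 131072-block capacity reported above; a sketch under the same variables as before:

  "$rpc" -s "$sock" nbd_start_disk raid_bdev1 /dev/nbd0
  # 512 x 128 KiB full-stripe direct writes = 67108864 B, the whole array
  dd if=/dev/urandom of=/dev/nbd0 bs=131072 count=512 oflag=direct
  # detach the NBD device; the raid bdev itself stays registered
  "$rpc" -s "$sock" nbd_stop_disk /dev/nbd0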
00:22:20.704 21:44:41 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:22:20.704 21:44:41 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:22:20.704 21:44:41 -- bdev/nbd_common.sh@51 -- # local i 00:22:20.704 21:44:41 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:22:20.704 21:44:41 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:22:20.977 [2024-12-06 21:44:41.297640] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:20.977 21:44:41 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:22:20.977 21:44:41 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:22:20.977 21:44:41 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:22:20.977 21:44:41 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:22:20.977 21:44:41 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:22:20.977 21:44:41 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:22:20.977 21:44:41 -- bdev/nbd_common.sh@41 -- # break 00:22:20.977 21:44:41 -- bdev/nbd_common.sh@45 -- # return 0 00:22:20.977 21:44:41 -- bdev/bdev_raid.sh@591 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:22:21.280 [2024-12-06 21:44:41.508201] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:22:21.280 21:44:41 -- bdev/bdev_raid.sh@594 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:22:21.280 21:44:41 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:22:21.280 21:44:41 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:22:21.280 21:44:41 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:22:21.280 21:44:41 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:22:21.280 21:44:41 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:22:21.280 21:44:41 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:22:21.280 21:44:41 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:22:21.280 21:44:41 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:22:21.280 21:44:41 -- bdev/bdev_raid.sh@125 -- # local tmp 00:22:21.280 21:44:41 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:21.280 21:44:41 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:21.538 21:44:41 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:22:21.538 "name": "raid_bdev1", 00:22:21.538 "uuid": "1a1fce50-01f8-41ae-b1ad-1e73cf2f2d87", 00:22:21.538 "strip_size_kb": 64, 00:22:21.538 "state": "online", 00:22:21.538 "raid_level": "raid5f", 00:22:21.538 "superblock": false, 00:22:21.538 "num_base_bdevs": 3, 00:22:21.538 "num_base_bdevs_discovered": 2, 00:22:21.538 "num_base_bdevs_operational": 2, 00:22:21.538 "base_bdevs_list": [ 00:22:21.538 { 00:22:21.538 "name": null, 00:22:21.538 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:21.538 "is_configured": false, 00:22:21.538 "data_offset": 0, 00:22:21.538 "data_size": 65536 00:22:21.538 }, 00:22:21.538 { 00:22:21.538 "name": "BaseBdev2", 00:22:21.538 "uuid": "fc7210dd-9ac0-41dc-a51d-a79c7396da72", 00:22:21.538 "is_configured": true, 00:22:21.538 "data_offset": 0, 00:22:21.538 "data_size": 65536 00:22:21.538 }, 00:22:21.538 { 00:22:21.538 "name": "BaseBdev3", 00:22:21.538 "uuid": "97b39a1d-a7dd-4012-a907-41bf3372d683", 00:22:21.538 "is_configured": true, 00:22:21.539 "data_offset": 0, 00:22:21.539 "data_size": 65536 00:22:21.539 } 00:22:21.539 ] 00:22:21.539 }' 
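The JSON just dumped is what verify_raid_bdev_state asserts against once BaseBdev1 has been pulled: the array must stay online as raid5f with 2 of 3 members discovered, and the vacated slot must remain listed but unconfigured with a zeroed uuid. Reduced to shell, the core of that check looks roughly like this (the info variable is illustrative; the jq filter is the one in the xtrace above):

  info=$("$rpc" -s "$sock" bdev_raid_get_bdevs all |
         jq -r '.[] | select(.name == "raid_bdev1")')
  [[ $(jq -r .state <<<"$info") == online ]]
  [[ $(jq -r .raid_level <<<"$info") == raid5f ]]
  (( $(jq -r .num_base_bdevs_discovered <<<"$info") == 2 ))
  # the removed member's slot: present, unconfigured, all-zero uuid
  [[ $(jq -r '.base_bdevs_list[0].is_configured' <<<"$info") == false ]]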
00:22:21.539 21:44:41 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:22:21.539 21:44:41 -- common/autotest_common.sh@10 -- # set +x 00:22:21.539 21:44:42 -- bdev/bdev_raid.sh@597 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:22:21.798 [2024-12-06 21:44:42.260365] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:22:21.798 [2024-12-06 21:44:42.260428] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:22:21.798 [2024-12-06 21:44:42.271152] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d00002af30 00:22:21.798 [2024-12-06 21:44:42.276814] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:22:21.798 21:44:42 -- bdev/bdev_raid.sh@598 -- # sleep 1 00:22:23.176 21:44:43 -- bdev/bdev_raid.sh@601 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:23.176 21:44:43 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:22:23.176 21:44:43 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:22:23.176 21:44:43 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:22:23.176 21:44:43 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:22:23.176 21:44:43 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:23.176 21:44:43 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:23.176 21:44:43 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:22:23.176 "name": "raid_bdev1", 00:22:23.176 "uuid": "1a1fce50-01f8-41ae-b1ad-1e73cf2f2d87", 00:22:23.176 "strip_size_kb": 64, 00:22:23.176 "state": "online", 00:22:23.176 "raid_level": "raid5f", 00:22:23.176 "superblock": false, 00:22:23.176 "num_base_bdevs": 3, 00:22:23.176 "num_base_bdevs_discovered": 3, 00:22:23.176 "num_base_bdevs_operational": 3, 00:22:23.176 "process": { 00:22:23.176 "type": "rebuild", 00:22:23.176 "target": "spare", 00:22:23.176 "progress": { 00:22:23.176 "blocks": 24576, 00:22:23.176 "percent": 18 00:22:23.176 } 00:22:23.176 }, 00:22:23.176 "base_bdevs_list": [ 00:22:23.176 { 00:22:23.176 "name": "spare", 00:22:23.176 "uuid": "ade7fa5c-e0ef-536d-af4f-4b186e073f78", 00:22:23.176 "is_configured": true, 00:22:23.176 "data_offset": 0, 00:22:23.176 "data_size": 65536 00:22:23.176 }, 00:22:23.176 { 00:22:23.176 "name": "BaseBdev2", 00:22:23.176 "uuid": "fc7210dd-9ac0-41dc-a51d-a79c7396da72", 00:22:23.176 "is_configured": true, 00:22:23.176 "data_offset": 0, 00:22:23.176 "data_size": 65536 00:22:23.176 }, 00:22:23.176 { 00:22:23.176 "name": "BaseBdev3", 00:22:23.176 "uuid": "97b39a1d-a7dd-4012-a907-41bf3372d683", 00:22:23.176 "is_configured": true, 00:22:23.176 "data_offset": 0, 00:22:23.176 "data_size": 65536 00:22:23.176 } 00:22:23.176 ] 00:22:23.176 }' 00:22:23.176 21:44:43 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:22:23.176 21:44:43 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:23.176 21:44:43 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:22:23.176 21:44:43 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:22:23.176 21:44:43 -- bdev/bdev_raid.sh@604 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:22:23.435 [2024-12-06 21:44:43.773964] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:22:23.435 [2024-12-06 21:44:43.788422] 
bdev_raid.c:2294:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:22:23.436 [2024-12-06 21:44:43.788541] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:23.436 21:44:43 -- bdev/bdev_raid.sh@607 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:22:23.436 21:44:43 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:22:23.436 21:44:43 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:22:23.436 21:44:43 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:22:23.436 21:44:43 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:22:23.436 21:44:43 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:22:23.436 21:44:43 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:22:23.436 21:44:43 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:22:23.436 21:44:43 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:22:23.436 21:44:43 -- bdev/bdev_raid.sh@125 -- # local tmp 00:22:23.436 21:44:43 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:23.436 21:44:43 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:23.695 21:44:44 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:22:23.695 "name": "raid_bdev1", 00:22:23.695 "uuid": "1a1fce50-01f8-41ae-b1ad-1e73cf2f2d87", 00:22:23.695 "strip_size_kb": 64, 00:22:23.695 "state": "online", 00:22:23.695 "raid_level": "raid5f", 00:22:23.695 "superblock": false, 00:22:23.695 "num_base_bdevs": 3, 00:22:23.695 "num_base_bdevs_discovered": 2, 00:22:23.695 "num_base_bdevs_operational": 2, 00:22:23.695 "base_bdevs_list": [ 00:22:23.695 { 00:22:23.695 "name": null, 00:22:23.695 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:23.695 "is_configured": false, 00:22:23.695 "data_offset": 0, 00:22:23.695 "data_size": 65536 00:22:23.695 }, 00:22:23.695 { 00:22:23.695 "name": "BaseBdev2", 00:22:23.695 "uuid": "fc7210dd-9ac0-41dc-a51d-a79c7396da72", 00:22:23.695 "is_configured": true, 00:22:23.695 "data_offset": 0, 00:22:23.695 "data_size": 65536 00:22:23.695 }, 00:22:23.695 { 00:22:23.695 "name": "BaseBdev3", 00:22:23.695 "uuid": "97b39a1d-a7dd-4012-a907-41bf3372d683", 00:22:23.695 "is_configured": true, 00:22:23.695 "data_offset": 0, 00:22:23.695 "data_size": 65536 00:22:23.695 } 00:22:23.695 ] 00:22:23.695 }' 00:22:23.695 21:44:44 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:22:23.695 21:44:44 -- common/autotest_common.sh@10 -- # set +x 00:22:23.954 21:44:44 -- bdev/bdev_raid.sh@610 -- # verify_raid_bdev_process raid_bdev1 none none 00:22:23.954 21:44:44 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:22:23.954 21:44:44 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:22:23.954 21:44:44 -- bdev/bdev_raid.sh@185 -- # local target=none 00:22:23.954 21:44:44 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:22:23.954 21:44:44 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:23.954 21:44:44 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:24.213 21:44:44 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:22:24.213 "name": "raid_bdev1", 00:22:24.213 "uuid": "1a1fce50-01f8-41ae-b1ad-1e73cf2f2d87", 00:22:24.213 "strip_size_kb": 64, 00:22:24.213 "state": "online", 00:22:24.213 "raid_level": "raid5f", 00:22:24.213 "superblock": false, 00:22:24.213 "num_base_bdevs": 3, 00:22:24.213 
"num_base_bdevs_discovered": 2, 00:22:24.213 "num_base_bdevs_operational": 2, 00:22:24.213 "base_bdevs_list": [ 00:22:24.213 { 00:22:24.213 "name": null, 00:22:24.213 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:24.213 "is_configured": false, 00:22:24.213 "data_offset": 0, 00:22:24.213 "data_size": 65536 00:22:24.213 }, 00:22:24.213 { 00:22:24.213 "name": "BaseBdev2", 00:22:24.213 "uuid": "fc7210dd-9ac0-41dc-a51d-a79c7396da72", 00:22:24.213 "is_configured": true, 00:22:24.213 "data_offset": 0, 00:22:24.213 "data_size": 65536 00:22:24.213 }, 00:22:24.213 { 00:22:24.213 "name": "BaseBdev3", 00:22:24.213 "uuid": "97b39a1d-a7dd-4012-a907-41bf3372d683", 00:22:24.213 "is_configured": true, 00:22:24.213 "data_offset": 0, 00:22:24.213 "data_size": 65536 00:22:24.213 } 00:22:24.213 ] 00:22:24.213 }' 00:22:24.213 21:44:44 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:22:24.213 21:44:44 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:22:24.213 21:44:44 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:22:24.213 21:44:44 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:22:24.213 21:44:44 -- bdev/bdev_raid.sh@613 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:22:24.472 [2024-12-06 21:44:44.748083] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:22:24.472 [2024-12-06 21:44:44.748142] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:22:24.472 [2024-12-06 21:44:44.757894] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d00002b000 00:22:24.472 [2024-12-06 21:44:44.763406] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:22:24.472 21:44:44 -- bdev/bdev_raid.sh@614 -- # sleep 1 00:22:25.418 21:44:45 -- bdev/bdev_raid.sh@615 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:25.418 21:44:45 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:22:25.418 21:44:45 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:22:25.418 21:44:45 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:22:25.418 21:44:45 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:22:25.418 21:44:45 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:25.418 21:44:45 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:25.676 21:44:45 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:22:25.676 "name": "raid_bdev1", 00:22:25.676 "uuid": "1a1fce50-01f8-41ae-b1ad-1e73cf2f2d87", 00:22:25.676 "strip_size_kb": 64, 00:22:25.676 "state": "online", 00:22:25.676 "raid_level": "raid5f", 00:22:25.676 "superblock": false, 00:22:25.676 "num_base_bdevs": 3, 00:22:25.676 "num_base_bdevs_discovered": 3, 00:22:25.676 "num_base_bdevs_operational": 3, 00:22:25.676 "process": { 00:22:25.676 "type": "rebuild", 00:22:25.676 "target": "spare", 00:22:25.676 "progress": { 00:22:25.676 "blocks": 24576, 00:22:25.676 "percent": 18 00:22:25.676 } 00:22:25.676 }, 00:22:25.676 "base_bdevs_list": [ 00:22:25.676 { 00:22:25.676 "name": "spare", 00:22:25.676 "uuid": "ade7fa5c-e0ef-536d-af4f-4b186e073f78", 00:22:25.676 "is_configured": true, 00:22:25.676 "data_offset": 0, 00:22:25.676 "data_size": 65536 00:22:25.676 }, 00:22:25.676 { 00:22:25.676 "name": "BaseBdev2", 00:22:25.676 "uuid": "fc7210dd-9ac0-41dc-a51d-a79c7396da72", 00:22:25.676 "is_configured": true, 
00:22:25.676 "data_offset": 0, 00:22:25.676 "data_size": 65536 00:22:25.676 }, 00:22:25.676 { 00:22:25.676 "name": "BaseBdev3", 00:22:25.676 "uuid": "97b39a1d-a7dd-4012-a907-41bf3372d683", 00:22:25.676 "is_configured": true, 00:22:25.676 "data_offset": 0, 00:22:25.676 "data_size": 65536 00:22:25.676 } 00:22:25.676 ] 00:22:25.676 }' 00:22:25.676 21:44:45 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:22:25.676 21:44:46 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:25.676 21:44:46 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:22:25.676 21:44:46 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:22:25.676 21:44:46 -- bdev/bdev_raid.sh@617 -- # '[' false = true ']' 00:22:25.676 21:44:46 -- bdev/bdev_raid.sh@642 -- # local num_base_bdevs_operational=3 00:22:25.676 21:44:46 -- bdev/bdev_raid.sh@644 -- # '[' raid5f = raid1 ']' 00:22:25.676 21:44:46 -- bdev/bdev_raid.sh@657 -- # local timeout=543 00:22:25.676 21:44:46 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:22:25.676 21:44:46 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:25.676 21:44:46 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:22:25.676 21:44:46 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:22:25.676 21:44:46 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:22:25.676 21:44:46 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:22:25.676 21:44:46 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:25.676 21:44:46 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:25.934 21:44:46 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:22:25.934 "name": "raid_bdev1", 00:22:25.934 "uuid": "1a1fce50-01f8-41ae-b1ad-1e73cf2f2d87", 00:22:25.934 "strip_size_kb": 64, 00:22:25.934 "state": "online", 00:22:25.934 "raid_level": "raid5f", 00:22:25.934 "superblock": false, 00:22:25.934 "num_base_bdevs": 3, 00:22:25.934 "num_base_bdevs_discovered": 3, 00:22:25.934 "num_base_bdevs_operational": 3, 00:22:25.934 "process": { 00:22:25.934 "type": "rebuild", 00:22:25.934 "target": "spare", 00:22:25.934 "progress": { 00:22:25.934 "blocks": 28672, 00:22:25.934 "percent": 21 00:22:25.934 } 00:22:25.934 }, 00:22:25.934 "base_bdevs_list": [ 00:22:25.934 { 00:22:25.934 "name": "spare", 00:22:25.934 "uuid": "ade7fa5c-e0ef-536d-af4f-4b186e073f78", 00:22:25.934 "is_configured": true, 00:22:25.934 "data_offset": 0, 00:22:25.934 "data_size": 65536 00:22:25.934 }, 00:22:25.934 { 00:22:25.934 "name": "BaseBdev2", 00:22:25.934 "uuid": "fc7210dd-9ac0-41dc-a51d-a79c7396da72", 00:22:25.934 "is_configured": true, 00:22:25.934 "data_offset": 0, 00:22:25.934 "data_size": 65536 00:22:25.934 }, 00:22:25.934 { 00:22:25.934 "name": "BaseBdev3", 00:22:25.934 "uuid": "97b39a1d-a7dd-4012-a907-41bf3372d683", 00:22:25.934 "is_configured": true, 00:22:25.934 "data_offset": 0, 00:22:25.934 "data_size": 65536 00:22:25.934 } 00:22:25.934 ] 00:22:25.934 }' 00:22:25.934 21:44:46 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:22:25.934 21:44:46 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:25.934 21:44:46 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:22:25.934 21:44:46 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:22:25.934 21:44:46 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:22:26.868 21:44:47 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:22:26.868 
21:44:47 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:26.868 21:44:47 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:22:26.868 21:44:47 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:22:26.868 21:44:47 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:22:26.868 21:44:47 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:22:26.868 21:44:47 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:26.868 21:44:47 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:27.126 21:44:47 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:22:27.126 "name": "raid_bdev1", 00:22:27.126 "uuid": "1a1fce50-01f8-41ae-b1ad-1e73cf2f2d87", 00:22:27.126 "strip_size_kb": 64, 00:22:27.126 "state": "online", 00:22:27.126 "raid_level": "raid5f", 00:22:27.126 "superblock": false, 00:22:27.127 "num_base_bdevs": 3, 00:22:27.127 "num_base_bdevs_discovered": 3, 00:22:27.127 "num_base_bdevs_operational": 3, 00:22:27.127 "process": { 00:22:27.127 "type": "rebuild", 00:22:27.127 "target": "spare", 00:22:27.127 "progress": { 00:22:27.127 "blocks": 55296, 00:22:27.127 "percent": 42 00:22:27.127 } 00:22:27.127 }, 00:22:27.127 "base_bdevs_list": [ 00:22:27.127 { 00:22:27.127 "name": "spare", 00:22:27.127 "uuid": "ade7fa5c-e0ef-536d-af4f-4b186e073f78", 00:22:27.127 "is_configured": true, 00:22:27.127 "data_offset": 0, 00:22:27.127 "data_size": 65536 00:22:27.127 }, 00:22:27.127 { 00:22:27.127 "name": "BaseBdev2", 00:22:27.127 "uuid": "fc7210dd-9ac0-41dc-a51d-a79c7396da72", 00:22:27.127 "is_configured": true, 00:22:27.127 "data_offset": 0, 00:22:27.127 "data_size": 65536 00:22:27.127 }, 00:22:27.127 { 00:22:27.127 "name": "BaseBdev3", 00:22:27.127 "uuid": "97b39a1d-a7dd-4012-a907-41bf3372d683", 00:22:27.127 "is_configured": true, 00:22:27.127 "data_offset": 0, 00:22:27.127 "data_size": 65536 00:22:27.127 } 00:22:27.127 ] 00:22:27.127 }' 00:22:27.127 21:44:47 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:22:27.127 21:44:47 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:27.127 21:44:47 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:22:27.127 21:44:47 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:22:27.127 21:44:47 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:22:28.063 21:44:48 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:22:28.063 21:44:48 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:28.063 21:44:48 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:22:28.063 21:44:48 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:22:28.063 21:44:48 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:22:28.063 21:44:48 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:22:28.063 21:44:48 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:28.063 21:44:48 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:28.322 21:44:48 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:22:28.322 "name": "raid_bdev1", 00:22:28.322 "uuid": "1a1fce50-01f8-41ae-b1ad-1e73cf2f2d87", 00:22:28.322 "strip_size_kb": 64, 00:22:28.322 "state": "online", 00:22:28.322 "raid_level": "raid5f", 00:22:28.322 "superblock": false, 00:22:28.322 "num_base_bdevs": 3, 00:22:28.322 "num_base_bdevs_discovered": 3, 00:22:28.322 "num_base_bdevs_operational": 3, 
00:22:28.322 "process": { 00:22:28.322 "type": "rebuild", 00:22:28.322 "target": "spare", 00:22:28.322 "progress": { 00:22:28.322 "blocks": 79872, 00:22:28.322 "percent": 60 00:22:28.322 } 00:22:28.322 }, 00:22:28.322 "base_bdevs_list": [ 00:22:28.322 { 00:22:28.322 "name": "spare", 00:22:28.322 "uuid": "ade7fa5c-e0ef-536d-af4f-4b186e073f78", 00:22:28.322 "is_configured": true, 00:22:28.322 "data_offset": 0, 00:22:28.322 "data_size": 65536 00:22:28.322 }, 00:22:28.322 { 00:22:28.322 "name": "BaseBdev2", 00:22:28.322 "uuid": "fc7210dd-9ac0-41dc-a51d-a79c7396da72", 00:22:28.322 "is_configured": true, 00:22:28.322 "data_offset": 0, 00:22:28.322 "data_size": 65536 00:22:28.322 }, 00:22:28.322 { 00:22:28.322 "name": "BaseBdev3", 00:22:28.322 "uuid": "97b39a1d-a7dd-4012-a907-41bf3372d683", 00:22:28.322 "is_configured": true, 00:22:28.322 "data_offset": 0, 00:22:28.322 "data_size": 65536 00:22:28.322 } 00:22:28.322 ] 00:22:28.322 }' 00:22:28.322 21:44:48 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:22:28.322 21:44:48 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:28.322 21:44:48 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:22:28.322 21:44:48 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:22:28.322 21:44:48 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:22:29.700 21:44:49 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:22:29.700 21:44:49 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:29.700 21:44:49 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:22:29.700 21:44:49 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:22:29.700 21:44:49 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:22:29.700 21:44:49 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:22:29.700 21:44:49 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:29.700 21:44:49 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:29.700 21:44:50 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:22:29.700 "name": "raid_bdev1", 00:22:29.700 "uuid": "1a1fce50-01f8-41ae-b1ad-1e73cf2f2d87", 00:22:29.700 "strip_size_kb": 64, 00:22:29.700 "state": "online", 00:22:29.700 "raid_level": "raid5f", 00:22:29.700 "superblock": false, 00:22:29.700 "num_base_bdevs": 3, 00:22:29.700 "num_base_bdevs_discovered": 3, 00:22:29.700 "num_base_bdevs_operational": 3, 00:22:29.700 "process": { 00:22:29.700 "type": "rebuild", 00:22:29.700 "target": "spare", 00:22:29.700 "progress": { 00:22:29.700 "blocks": 106496, 00:22:29.700 "percent": 81 00:22:29.700 } 00:22:29.700 }, 00:22:29.700 "base_bdevs_list": [ 00:22:29.700 { 00:22:29.700 "name": "spare", 00:22:29.700 "uuid": "ade7fa5c-e0ef-536d-af4f-4b186e073f78", 00:22:29.700 "is_configured": true, 00:22:29.700 "data_offset": 0, 00:22:29.700 "data_size": 65536 00:22:29.700 }, 00:22:29.700 { 00:22:29.700 "name": "BaseBdev2", 00:22:29.700 "uuid": "fc7210dd-9ac0-41dc-a51d-a79c7396da72", 00:22:29.700 "is_configured": true, 00:22:29.700 "data_offset": 0, 00:22:29.700 "data_size": 65536 00:22:29.700 }, 00:22:29.700 { 00:22:29.700 "name": "BaseBdev3", 00:22:29.700 "uuid": "97b39a1d-a7dd-4012-a907-41bf3372d683", 00:22:29.700 "is_configured": true, 00:22:29.700 "data_offset": 0, 00:22:29.700 "data_size": 65536 00:22:29.700 } 00:22:29.700 ] 00:22:29.700 }' 00:22:29.700 21:44:50 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:22:29.700 21:44:50 -- 
bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:29.700 21:44:50 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:22:29.700 21:44:50 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:22:29.700 21:44:50 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:22:30.637 21:44:51 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:22:30.638 21:44:51 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:30.638 21:44:51 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:22:30.638 21:44:51 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:22:30.638 21:44:51 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:22:30.638 21:44:51 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:22:30.638 21:44:51 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:30.638 21:44:51 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:30.897 [2024-12-06 21:44:51.210073] bdev_raid.c:2568:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:22:30.897 [2024-12-06 21:44:51.210150] bdev_raid.c:2285:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:22:30.897 [2024-12-06 21:44:51.210217] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:30.897 21:44:51 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:22:30.897 "name": "raid_bdev1", 00:22:30.897 "uuid": "1a1fce50-01f8-41ae-b1ad-1e73cf2f2d87", 00:22:30.897 "strip_size_kb": 64, 00:22:30.897 "state": "online", 00:22:30.897 "raid_level": "raid5f", 00:22:30.897 "superblock": false, 00:22:30.897 "num_base_bdevs": 3, 00:22:30.897 "num_base_bdevs_discovered": 3, 00:22:30.897 "num_base_bdevs_operational": 3, 00:22:30.897 "base_bdevs_list": [ 00:22:30.897 { 00:22:30.897 "name": "spare", 00:22:30.897 "uuid": "ade7fa5c-e0ef-536d-af4f-4b186e073f78", 00:22:30.897 "is_configured": true, 00:22:30.897 "data_offset": 0, 00:22:30.897 "data_size": 65536 00:22:30.897 }, 00:22:30.897 { 00:22:30.897 "name": "BaseBdev2", 00:22:30.897 "uuid": "fc7210dd-9ac0-41dc-a51d-a79c7396da72", 00:22:30.897 "is_configured": true, 00:22:30.897 "data_offset": 0, 00:22:30.897 "data_size": 65536 00:22:30.897 }, 00:22:30.897 { 00:22:30.897 "name": "BaseBdev3", 00:22:30.897 "uuid": "97b39a1d-a7dd-4012-a907-41bf3372d683", 00:22:30.897 "is_configured": true, 00:22:30.897 "data_offset": 0, 00:22:30.897 "data_size": 65536 00:22:30.897 } 00:22:30.897 ] 00:22:30.897 }' 00:22:30.897 21:44:51 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:22:30.897 21:44:51 -- bdev/bdev_raid.sh@190 -- # [[ none == \r\e\b\u\i\l\d ]] 00:22:30.897 21:44:51 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:22:30.897 21:44:51 -- bdev/bdev_raid.sh@191 -- # [[ none == \s\p\a\r\e ]] 00:22:30.897 21:44:51 -- bdev/bdev_raid.sh@660 -- # break 00:22:30.897 21:44:51 -- bdev/bdev_raid.sh@666 -- # verify_raid_bdev_process raid_bdev1 none none 00:22:30.897 21:44:51 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:22:30.897 21:44:51 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:22:30.897 21:44:51 -- bdev/bdev_raid.sh@185 -- # local target=none 00:22:30.897 21:44:51 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:22:30.897 21:44:51 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:30.897 21:44:51 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | 
select(.name == "raid_bdev1")' 00:22:31.157 21:44:51 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:22:31.157 "name": "raid_bdev1", 00:22:31.157 "uuid": "1a1fce50-01f8-41ae-b1ad-1e73cf2f2d87", 00:22:31.157 "strip_size_kb": 64, 00:22:31.157 "state": "online", 00:22:31.157 "raid_level": "raid5f", 00:22:31.157 "superblock": false, 00:22:31.157 "num_base_bdevs": 3, 00:22:31.157 "num_base_bdevs_discovered": 3, 00:22:31.157 "num_base_bdevs_operational": 3, 00:22:31.157 "base_bdevs_list": [ 00:22:31.157 { 00:22:31.157 "name": "spare", 00:22:31.157 "uuid": "ade7fa5c-e0ef-536d-af4f-4b186e073f78", 00:22:31.157 "is_configured": true, 00:22:31.157 "data_offset": 0, 00:22:31.157 "data_size": 65536 00:22:31.157 }, 00:22:31.157 { 00:22:31.157 "name": "BaseBdev2", 00:22:31.157 "uuid": "fc7210dd-9ac0-41dc-a51d-a79c7396da72", 00:22:31.157 "is_configured": true, 00:22:31.157 "data_offset": 0, 00:22:31.157 "data_size": 65536 00:22:31.157 }, 00:22:31.157 { 00:22:31.157 "name": "BaseBdev3", 00:22:31.157 "uuid": "97b39a1d-a7dd-4012-a907-41bf3372d683", 00:22:31.157 "is_configured": true, 00:22:31.157 "data_offset": 0, 00:22:31.157 "data_size": 65536 00:22:31.157 } 00:22:31.157 ] 00:22:31.157 }' 00:22:31.157 21:44:51 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:22:31.157 21:44:51 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:22:31.157 21:44:51 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:22:31.157 21:44:51 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:22:31.157 21:44:51 -- bdev/bdev_raid.sh@667 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:22:31.157 21:44:51 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:22:31.157 21:44:51 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:22:31.157 21:44:51 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:22:31.157 21:44:51 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:22:31.157 21:44:51 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:22:31.158 21:44:51 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:22:31.158 21:44:51 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:22:31.158 21:44:51 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:22:31.158 21:44:51 -- bdev/bdev_raid.sh@125 -- # local tmp 00:22:31.158 21:44:51 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:31.158 21:44:51 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:31.417 21:44:51 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:22:31.417 "name": "raid_bdev1", 00:22:31.417 "uuid": "1a1fce50-01f8-41ae-b1ad-1e73cf2f2d87", 00:22:31.417 "strip_size_kb": 64, 00:22:31.417 "state": "online", 00:22:31.417 "raid_level": "raid5f", 00:22:31.417 "superblock": false, 00:22:31.417 "num_base_bdevs": 3, 00:22:31.417 "num_base_bdevs_discovered": 3, 00:22:31.417 "num_base_bdevs_operational": 3, 00:22:31.417 "base_bdevs_list": [ 00:22:31.417 { 00:22:31.417 "name": "spare", 00:22:31.417 "uuid": "ade7fa5c-e0ef-536d-af4f-4b186e073f78", 00:22:31.417 "is_configured": true, 00:22:31.417 "data_offset": 0, 00:22:31.417 "data_size": 65536 00:22:31.417 }, 00:22:31.417 { 00:22:31.417 "name": "BaseBdev2", 00:22:31.417 "uuid": "fc7210dd-9ac0-41dc-a51d-a79c7396da72", 00:22:31.417 "is_configured": true, 00:22:31.417 "data_offset": 0, 00:22:31.417 "data_size": 65536 00:22:31.417 }, 00:22:31.417 { 00:22:31.417 "name": "BaseBdev3", 00:22:31.417 "uuid": 
"97b39a1d-a7dd-4012-a907-41bf3372d683", 00:22:31.417 "is_configured": true, 00:22:31.417 "data_offset": 0, 00:22:31.417 "data_size": 65536 00:22:31.417 } 00:22:31.417 ] 00:22:31.418 }' 00:22:31.418 21:44:51 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:22:31.418 21:44:51 -- common/autotest_common.sh@10 -- # set +x 00:22:31.677 21:44:52 -- bdev/bdev_raid.sh@670 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:22:31.937 [2024-12-06 21:44:52.297501] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:22:31.937 [2024-12-06 21:44:52.297531] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:22:31.937 [2024-12-06 21:44:52.297609] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:31.937 [2024-12-06 21:44:52.297684] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:22:31.937 [2024-12-06 21:44:52.297701] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000008780 name raid_bdev1, state offline 00:22:31.937 21:44:52 -- bdev/bdev_raid.sh@671 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:31.937 21:44:52 -- bdev/bdev_raid.sh@671 -- # jq length 00:22:32.195 21:44:52 -- bdev/bdev_raid.sh@671 -- # [[ 0 == 0 ]] 00:22:32.195 21:44:52 -- bdev/bdev_raid.sh@673 -- # '[' false = true ']' 00:22:32.195 21:44:52 -- bdev/bdev_raid.sh@687 -- # nbd_start_disks /var/tmp/spdk-raid.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:22:32.195 21:44:52 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:22:32.195 21:44:52 -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:22:32.195 21:44:52 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:22:32.195 21:44:52 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:22:32.195 21:44:52 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:22:32.195 21:44:52 -- bdev/nbd_common.sh@12 -- # local i 00:22:32.195 21:44:52 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:22:32.195 21:44:52 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:22:32.195 21:44:52 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:22:32.453 /dev/nbd0 00:22:32.453 21:44:52 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:22:32.453 21:44:52 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:22:32.453 21:44:52 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:22:32.453 21:44:52 -- common/autotest_common.sh@867 -- # local i 00:22:32.453 21:44:52 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:22:32.453 21:44:52 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:22:32.453 21:44:52 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:22:32.453 21:44:52 -- common/autotest_common.sh@871 -- # break 00:22:32.453 21:44:52 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:22:32.453 21:44:52 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:22:32.453 21:44:52 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:22:32.453 1+0 records in 00:22:32.453 1+0 records out 00:22:32.453 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000363756 s, 11.3 MB/s 00:22:32.453 21:44:52 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:32.453 21:44:52 
-- common/autotest_common.sh@884 -- # size=4096 00:22:32.453 21:44:52 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:32.453 21:44:52 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:22:32.453 21:44:52 -- common/autotest_common.sh@887 -- # return 0 00:22:32.453 21:44:52 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:22:32.453 21:44:52 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:22:32.453 21:44:52 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd1 00:22:32.711 /dev/nbd1 00:22:32.711 21:44:53 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:22:32.711 21:44:53 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:22:32.711 21:44:53 -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:22:32.711 21:44:53 -- common/autotest_common.sh@867 -- # local i 00:22:32.711 21:44:53 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:22:32.711 21:44:53 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:22:32.711 21:44:53 -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:22:32.711 21:44:53 -- common/autotest_common.sh@871 -- # break 00:22:32.711 21:44:53 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:22:32.711 21:44:53 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:22:32.711 21:44:53 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:22:32.711 1+0 records in 00:22:32.711 1+0 records out 00:22:32.711 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000344608 s, 11.9 MB/s 00:22:32.712 21:44:53 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:32.712 21:44:53 -- common/autotest_common.sh@884 -- # size=4096 00:22:32.712 21:44:53 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:32.712 21:44:53 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:22:32.712 21:44:53 -- common/autotest_common.sh@887 -- # return 0 00:22:32.712 21:44:53 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:22:32.712 21:44:53 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:22:32.712 21:44:53 -- bdev/bdev_raid.sh@688 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:22:32.712 21:44:53 -- bdev/bdev_raid.sh@689 -- # nbd_stop_disks /var/tmp/spdk-raid.sock '/dev/nbd0 /dev/nbd1' 00:22:32.712 21:44:53 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:22:32.712 21:44:53 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:22:32.712 21:44:53 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:22:32.712 21:44:53 -- bdev/nbd_common.sh@51 -- # local i 00:22:32.712 21:44:53 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:22:32.712 21:44:53 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:22:32.978 21:44:53 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:22:33.236 21:44:53 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:22:33.236 21:44:53 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:22:33.236 21:44:53 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:22:33.236 21:44:53 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:22:33.236 21:44:53 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:22:33.236 21:44:53 -- bdev/nbd_common.sh@41 -- # break 00:22:33.236 21:44:53 -- bdev/nbd_common.sh@45 -- # return 0 00:22:33.236 21:44:53 -- bdev/nbd_common.sh@53 -- # for i in 
"${nbd_list[@]}" 00:22:33.236 21:44:53 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:22:33.494 21:44:53 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:22:33.494 21:44:53 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:22:33.494 21:44:53 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:22:33.494 21:44:53 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:22:33.494 21:44:53 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:22:33.494 21:44:53 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:22:33.494 21:44:53 -- bdev/nbd_common.sh@41 -- # break 00:22:33.494 21:44:53 -- bdev/nbd_common.sh@45 -- # return 0 00:22:33.494 21:44:53 -- bdev/bdev_raid.sh@692 -- # '[' false = true ']' 00:22:33.494 21:44:53 -- bdev/bdev_raid.sh@709 -- # killprocess 83335 00:22:33.494 21:44:53 -- common/autotest_common.sh@936 -- # '[' -z 83335 ']' 00:22:33.494 21:44:53 -- common/autotest_common.sh@940 -- # kill -0 83335 00:22:33.494 21:44:53 -- common/autotest_common.sh@941 -- # uname 00:22:33.494 21:44:53 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:22:33.494 21:44:53 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 83335 00:22:33.494 killing process with pid 83335 00:22:33.494 Received shutdown signal, test time was about 60.000000 seconds 00:22:33.494 00:22:33.494 Latency(us) 00:22:33.494 [2024-12-06T21:44:53.991Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:33.494 [2024-12-06T21:44:53.991Z] =================================================================================================================== 00:22:33.494 [2024-12-06T21:44:53.991Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:22:33.494 21:44:53 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:22:33.494 21:44:53 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:22:33.494 21:44:53 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 83335' 00:22:33.494 21:44:53 -- common/autotest_common.sh@955 -- # kill 83335 00:22:33.494 21:44:53 -- common/autotest_common.sh@960 -- # wait 83335 00:22:33.494 [2024-12-06 21:44:53.775851] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:22:33.752 [2024-12-06 21:44:54.030578] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:22:34.688 21:44:54 -- bdev/bdev_raid.sh@711 -- # return 0 00:22:34.688 00:22:34.688 real 0m17.920s 00:22:34.688 user 0m25.244s 00:22:34.688 sys 0m2.249s 00:22:34.688 21:44:54 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:22:34.688 21:44:54 -- common/autotest_common.sh@10 -- # set +x 00:22:34.688 ************************************ 00:22:34.688 END TEST raid5f_rebuild_test 00:22:34.688 ************************************ 00:22:34.688 21:44:54 -- bdev/bdev_raid.sh@749 -- # run_test raid5f_rebuild_test_sb raid_rebuild_test raid5f 3 true false 00:22:34.688 21:44:54 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:22:34.688 21:44:54 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:22:34.688 21:44:54 -- common/autotest_common.sh@10 -- # set +x 00:22:34.688 ************************************ 00:22:34.688 START TEST raid5f_rebuild_test_sb 00:22:34.688 ************************************ 00:22:34.688 21:44:54 -- common/autotest_common.sh@1114 -- # raid_rebuild_test raid5f 3 true false 00:22:34.688 21:44:54 -- bdev/bdev_raid.sh@517 -- # local raid_level=raid5f 00:22:34.688 21:44:54 -- bdev/bdev_raid.sh@518 -- # local 
num_base_bdevs=3 00:22:34.688 21:44:54 -- bdev/bdev_raid.sh@519 -- # local superblock=true 00:22:34.688 21:44:54 -- bdev/bdev_raid.sh@520 -- # local background_io=false 00:22:34.688 21:44:54 -- bdev/bdev_raid.sh@521 -- # (( i = 1 )) 00:22:34.688 21:44:54 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:22:34.688 21:44:54 -- bdev/bdev_raid.sh@523 -- # echo BaseBdev1 00:22:34.688 21:44:54 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:22:34.688 21:44:54 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:22:34.688 21:44:54 -- bdev/bdev_raid.sh@523 -- # echo BaseBdev2 00:22:34.688 21:44:54 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:22:34.688 21:44:54 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:22:34.688 21:44:54 -- bdev/bdev_raid.sh@523 -- # echo BaseBdev3 00:22:34.688 21:44:54 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:22:34.688 21:44:54 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:22:34.688 21:44:55 -- bdev/bdev_raid.sh@521 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:22:34.688 21:44:55 -- bdev/bdev_raid.sh@521 -- # local base_bdevs 00:22:34.688 21:44:55 -- bdev/bdev_raid.sh@522 -- # local raid_bdev_name=raid_bdev1 00:22:34.688 21:44:55 -- bdev/bdev_raid.sh@523 -- # local strip_size 00:22:34.688 21:44:55 -- bdev/bdev_raid.sh@524 -- # local create_arg 00:22:34.688 21:44:55 -- bdev/bdev_raid.sh@525 -- # local raid_bdev_size 00:22:34.688 21:44:55 -- bdev/bdev_raid.sh@526 -- # local data_offset 00:22:34.688 21:44:55 -- bdev/bdev_raid.sh@528 -- # '[' raid5f '!=' raid1 ']' 00:22:34.688 21:44:55 -- bdev/bdev_raid.sh@529 -- # '[' false = true ']' 00:22:34.688 21:44:55 -- bdev/bdev_raid.sh@533 -- # strip_size=64 00:22:34.688 21:44:55 -- bdev/bdev_raid.sh@534 -- # create_arg+=' -z 64' 00:22:34.689 21:44:55 -- bdev/bdev_raid.sh@539 -- # '[' true = true ']' 00:22:34.689 21:44:55 -- bdev/bdev_raid.sh@540 -- # create_arg+=' -s' 00:22:34.689 21:44:55 -- bdev/bdev_raid.sh@544 -- # raid_pid=83818 00:22:34.689 21:44:55 -- bdev/bdev_raid.sh@545 -- # waitforlisten 83818 /var/tmp/spdk-raid.sock 00:22:34.689 21:44:55 -- bdev/bdev_raid.sh@543 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:22:34.689 21:44:55 -- common/autotest_common.sh@829 -- # '[' -z 83818 ']' 00:22:34.689 21:44:55 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:22:34.689 21:44:55 -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:34.689 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:22:34.689 21:44:55 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:22:34.689 21:44:55 -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:34.689 21:44:55 -- common/autotest_common.sh@10 -- # set +x 00:22:34.689 [2024-12-06 21:44:55.061064] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:22:34.689 [2024-12-06 21:44:55.061232] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.ealI/O size of 3145728 is greater than zero copy threshold (65536). 00:22:34.689 Zero copy mechanism will not be used. 
00:22:34.689 :6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83818 ] 00:22:34.948 [2024-12-06 21:44:55.228105] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:34.948 [2024-12-06 21:44:55.383318] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:35.211 [2024-12-06 21:44:55.530657] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:22:35.777 21:44:55 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:35.777 21:44:55 -- common/autotest_common.sh@862 -- # return 0 00:22:35.777 21:44:55 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:22:35.777 21:44:55 -- bdev/bdev_raid.sh@549 -- # '[' true = true ']' 00:22:35.777 21:44:55 -- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:22:35.777 BaseBdev1_malloc 00:22:35.777 21:44:56 -- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:22:36.036 [2024-12-06 21:44:56.320754] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:22:36.036 [2024-12-06 21:44:56.320870] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:36.036 [2024-12-06 21:44:56.320902] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000006980 00:22:36.036 [2024-12-06 21:44:56.320918] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:36.036 [2024-12-06 21:44:56.323094] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:36.036 [2024-12-06 21:44:56.323137] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:22:36.036 BaseBdev1 00:22:36.036 21:44:56 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:22:36.036 21:44:56 -- bdev/bdev_raid.sh@549 -- # '[' true = true ']' 00:22:36.036 21:44:56 -- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:22:36.296 BaseBdev2_malloc 00:22:36.296 21:44:56 -- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:22:36.296 [2024-12-06 21:44:56.782842] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:22:36.296 [2024-12-06 21:44:56.782922] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:36.296 [2024-12-06 21:44:56.782960] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000007580 00:22:36.296 [2024-12-06 21:44:56.782979] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:36.296 [2024-12-06 21:44:56.785276] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:36.296 [2024-12-06 21:44:56.785350] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:22:36.296 BaseBdev2 00:22:36.556 21:44:56 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:22:36.556 21:44:56 -- bdev/bdev_raid.sh@549 -- # '[' true = true ']' 00:22:36.556 21:44:56 -- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:22:36.556 
BaseBdev3_malloc 00:22:36.816 21:44:57 -- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:22:36.816 [2024-12-06 21:44:57.231750] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:22:36.816 [2024-12-06 21:44:57.231826] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:36.816 [2024-12-06 21:44:57.231853] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000008180 00:22:36.816 [2024-12-06 21:44:57.231869] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:36.816 [2024-12-06 21:44:57.234051] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:36.816 [2024-12-06 21:44:57.234093] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:22:36.816 BaseBdev3 00:22:36.816 21:44:57 -- bdev/bdev_raid.sh@558 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:22:37.074 spare_malloc 00:22:37.074 21:44:57 -- bdev/bdev_raid.sh@559 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:22:37.333 spare_delay 00:22:37.333 21:44:57 -- bdev/bdev_raid.sh@560 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:22:37.333 [2024-12-06 21:44:57.808110] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:22:37.333 [2024-12-06 21:44:57.808232] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:37.333 [2024-12-06 21:44:57.808261] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000009380 00:22:37.333 [2024-12-06 21:44:57.808277] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:37.333 [2024-12-06 21:44:57.810572] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:37.333 [2024-12-06 21:44:57.810614] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:22:37.333 spare 00:22:37.333 21:44:57 -- bdev/bdev_raid.sh@563 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n raid_bdev1 00:22:37.593 [2024-12-06 21:44:58.076235] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:22:37.593 [2024-12-06 21:44:58.078054] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:22:37.593 [2024-12-06 21:44:58.078131] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:22:37.593 [2024-12-06 21:44:58.078387] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x516000009980 00:22:37.593 [2024-12-06 21:44:58.078411] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:22:37.593 [2024-12-06 21:44:58.078585] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000005790 00:22:37.593 [2024-12-06 21:44:58.082969] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x516000009980 00:22:37.593 [2024-12-06 21:44:58.083016] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x516000009980 00:22:37.593 
[2024-12-06 21:44:58.083240] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:37.853 21:44:58 -- bdev/bdev_raid.sh@564 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:22:37.853 21:44:58 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:22:37.853 21:44:58 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:22:37.853 21:44:58 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:22:37.853 21:44:58 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:22:37.853 21:44:58 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:22:37.853 21:44:58 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:22:37.853 21:44:58 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:22:37.853 21:44:58 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:22:37.853 21:44:58 -- bdev/bdev_raid.sh@125 -- # local tmp 00:22:37.853 21:44:58 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:37.853 21:44:58 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:37.853 21:44:58 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:22:37.853 "name": "raid_bdev1", 00:22:37.853 "uuid": "f48f55f6-f127-40b7-925b-75dcf30ce41b", 00:22:37.853 "strip_size_kb": 64, 00:22:37.853 "state": "online", 00:22:37.853 "raid_level": "raid5f", 00:22:37.853 "superblock": true, 00:22:37.853 "num_base_bdevs": 3, 00:22:37.853 "num_base_bdevs_discovered": 3, 00:22:37.853 "num_base_bdevs_operational": 3, 00:22:37.853 "base_bdevs_list": [ 00:22:37.853 { 00:22:37.853 "name": "BaseBdev1", 00:22:37.853 "uuid": "bba9e4d7-637f-5e8d-9d62-963d86a29e2d", 00:22:37.853 "is_configured": true, 00:22:37.853 "data_offset": 2048, 00:22:37.853 "data_size": 63488 00:22:37.853 }, 00:22:37.853 { 00:22:37.853 "name": "BaseBdev2", 00:22:37.853 "uuid": "f6029920-ea9c-5dc1-ad13-1d95835c23a4", 00:22:37.853 "is_configured": true, 00:22:37.853 "data_offset": 2048, 00:22:37.853 "data_size": 63488 00:22:37.853 }, 00:22:37.853 { 00:22:37.853 "name": "BaseBdev3", 00:22:37.853 "uuid": "18813ae8-70c2-5f1d-bb4b-6fe3e4120ab6", 00:22:37.853 "is_configured": true, 00:22:37.853 "data_offset": 2048, 00:22:37.853 "data_size": 63488 00:22:37.853 } 00:22:37.853 ] 00:22:37.853 }' 00:22:37.853 21:44:58 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:22:37.853 21:44:58 -- common/autotest_common.sh@10 -- # set +x 00:22:38.112 21:44:58 -- bdev/bdev_raid.sh@567 -- # jq -r '.[].num_blocks' 00:22:38.112 21:44:58 -- bdev/bdev_raid.sh@567 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:22:38.371 [2024-12-06 21:44:58.827996] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:22:38.371 21:44:58 -- bdev/bdev_raid.sh@567 -- # raid_bdev_size=126976 00:22:38.371 21:44:58 -- bdev/bdev_raid.sh@570 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:38.371 21:44:58 -- bdev/bdev_raid.sh@570 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:22:38.630 21:44:59 -- bdev/bdev_raid.sh@570 -- # data_offset=2048 00:22:38.630 21:44:59 -- bdev/bdev_raid.sh@572 -- # '[' false = true ']' 00:22:38.630 21:44:59 -- bdev/bdev_raid.sh@576 -- # local write_unit_size 00:22:38.630 21:44:59 -- bdev/bdev_raid.sh@579 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid_bdev1 /dev/nbd0 00:22:38.630 21:44:59 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:22:38.630 21:44:59 -- 
bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:22:38.630 21:44:59 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:22:38.630 21:44:59 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:22:38.630 21:44:59 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:22:38.630 21:44:59 -- bdev/nbd_common.sh@12 -- # local i 00:22:38.630 21:44:59 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:22:38.630 21:44:59 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:22:38.630 21:44:59 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:22:38.890 [2024-12-06 21:44:59.268066] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000005930 00:22:38.890 /dev/nbd0 00:22:38.890 21:44:59 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:22:38.890 21:44:59 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:22:38.890 21:44:59 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:22:38.890 21:44:59 -- common/autotest_common.sh@867 -- # local i 00:22:38.890 21:44:59 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:22:38.890 21:44:59 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:22:38.890 21:44:59 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:22:38.890 21:44:59 -- common/autotest_common.sh@871 -- # break 00:22:38.890 21:44:59 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:22:38.890 21:44:59 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:22:38.890 21:44:59 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:22:38.890 1+0 records in 00:22:38.890 1+0 records out 00:22:38.890 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000285082 s, 14.4 MB/s 00:22:38.890 21:44:59 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:38.890 21:44:59 -- common/autotest_common.sh@884 -- # size=4096 00:22:38.891 21:44:59 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:38.891 21:44:59 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:22:38.891 21:44:59 -- common/autotest_common.sh@887 -- # return 0 00:22:38.891 21:44:59 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:22:38.891 21:44:59 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:22:38.891 21:44:59 -- bdev/bdev_raid.sh@580 -- # '[' raid5f = raid5f ']' 00:22:38.891 21:44:59 -- bdev/bdev_raid.sh@581 -- # write_unit_size=256 00:22:38.891 21:44:59 -- bdev/bdev_raid.sh@582 -- # echo 128 00:22:38.891 21:44:59 -- bdev/bdev_raid.sh@586 -- # dd if=/dev/urandom of=/dev/nbd0 bs=131072 count=496 oflag=direct 00:22:39.461 496+0 records in 00:22:39.461 496+0 records out 00:22:39.461 65011712 bytes (65 MB, 62 MiB) copied, 0.335904 s, 194 MB/s 00:22:39.461 21:44:59 -- bdev/bdev_raid.sh@587 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:22:39.461 21:44:59 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:22:39.461 21:44:59 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:22:39.461 21:44:59 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:22:39.461 21:44:59 -- bdev/nbd_common.sh@51 -- # local i 00:22:39.461 21:44:59 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:22:39.461 21:44:59 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:22:39.461 [2024-12-06 21:44:59.889574] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 
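Annotation (editorial sketch, not captured output): the dd geometry above follows from the raid5f layout reported earlier, assuming one parity strip per stripe. With strip_size_kb=64 and 3 base bdevs, a full stripe carries 64 KiB * (3 - 1) = 128 KiB = 131072 bytes of data, i.e. 256 blocks of 512 bytes, which is the write_unit_size the script sets. The exported raid_bdev_size of 126976 blocks is 65,011,712 bytes, exactly 496 full stripes, hence bs=131072 count=496 and the 65 MB figure in the dd summary. The rebuild progress percentages further down scale against the same size (e.g. 104448 of 126976 blocks is about 82 percent). A minimal shell restatement of the arithmetic, with names taken from the trace and blocklen=512 assumed from the block counts:

    strip_size_kb=64; num_base_bdevs=3; blocklen=512
    full_stripe=$(( strip_size_kb * 1024 * (num_base_bdevs - 1) ))  # 131072 bytes
    write_unit_size=$(( full_stripe / blocklen ))                   # 256 blocks
    count=$(( 126976 * blocklen / full_stripe ))                    # 496 dd writes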
00:22:39.461 21:44:59 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:22:39.461 21:44:59 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:22:39.461 21:44:59 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:22:39.461 21:44:59 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:22:39.461 21:44:59 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:22:39.461 21:44:59 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:22:39.461 21:44:59 -- bdev/nbd_common.sh@41 -- # break 00:22:39.461 21:44:59 -- bdev/nbd_common.sh@45 -- # return 0 00:22:39.461 21:44:59 -- bdev/bdev_raid.sh@591 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:22:39.720 [2024-12-06 21:45:00.087162] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:22:39.720 21:45:00 -- bdev/bdev_raid.sh@594 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:22:39.720 21:45:00 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:22:39.720 21:45:00 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:22:39.720 21:45:00 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:22:39.720 21:45:00 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:22:39.720 21:45:00 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:22:39.720 21:45:00 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:22:39.720 21:45:00 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:22:39.720 21:45:00 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:22:39.720 21:45:00 -- bdev/bdev_raid.sh@125 -- # local tmp 00:22:39.720 21:45:00 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:39.720 21:45:00 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:39.979 21:45:00 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:22:39.979 "name": "raid_bdev1", 00:22:39.979 "uuid": "f48f55f6-f127-40b7-925b-75dcf30ce41b", 00:22:39.979 "strip_size_kb": 64, 00:22:39.979 "state": "online", 00:22:39.979 "raid_level": "raid5f", 00:22:39.979 "superblock": true, 00:22:39.979 "num_base_bdevs": 3, 00:22:39.979 "num_base_bdevs_discovered": 2, 00:22:39.979 "num_base_bdevs_operational": 2, 00:22:39.979 "base_bdevs_list": [ 00:22:39.979 { 00:22:39.979 "name": null, 00:22:39.979 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:39.979 "is_configured": false, 00:22:39.979 "data_offset": 2048, 00:22:39.979 "data_size": 63488 00:22:39.979 }, 00:22:39.979 { 00:22:39.979 "name": "BaseBdev2", 00:22:39.979 "uuid": "f6029920-ea9c-5dc1-ad13-1d95835c23a4", 00:22:39.979 "is_configured": true, 00:22:39.979 "data_offset": 2048, 00:22:39.979 "data_size": 63488 00:22:39.979 }, 00:22:39.979 { 00:22:39.979 "name": "BaseBdev3", 00:22:39.979 "uuid": "18813ae8-70c2-5f1d-bb4b-6fe3e4120ab6", 00:22:39.979 "is_configured": true, 00:22:39.979 "data_offset": 2048, 00:22:39.980 "data_size": 63488 00:22:39.980 } 00:22:39.980 ] 00:22:39.980 }' 00:22:39.980 21:45:00 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:22:39.980 21:45:00 -- common/autotest_common.sh@10 -- # set +x 00:22:40.239 21:45:00 -- bdev/bdev_raid.sh@597 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:22:40.499 [2024-12-06 21:45:00.791319] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:22:40.499 [2024-12-06 21:45:00.791382] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare 
is claimed 00:22:40.499 [2024-12-06 21:45:00.803527] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000028830 00:22:40.499 [2024-12-06 21:45:00.809652] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:22:40.499 21:45:00 -- bdev/bdev_raid.sh@598 -- # sleep 1 00:22:41.438 21:45:01 -- bdev/bdev_raid.sh@601 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:41.438 21:45:01 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:22:41.438 21:45:01 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:22:41.438 21:45:01 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:22:41.438 21:45:01 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:22:41.438 21:45:01 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:41.438 21:45:01 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:41.698 21:45:02 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:22:41.698 "name": "raid_bdev1", 00:22:41.698 "uuid": "f48f55f6-f127-40b7-925b-75dcf30ce41b", 00:22:41.698 "strip_size_kb": 64, 00:22:41.698 "state": "online", 00:22:41.698 "raid_level": "raid5f", 00:22:41.698 "superblock": true, 00:22:41.698 "num_base_bdevs": 3, 00:22:41.698 "num_base_bdevs_discovered": 3, 00:22:41.698 "num_base_bdevs_operational": 3, 00:22:41.698 "process": { 00:22:41.698 "type": "rebuild", 00:22:41.698 "target": "spare", 00:22:41.698 "progress": { 00:22:41.698 "blocks": 24576, 00:22:41.698 "percent": 19 00:22:41.698 } 00:22:41.698 }, 00:22:41.698 "base_bdevs_list": [ 00:22:41.698 { 00:22:41.698 "name": "spare", 00:22:41.698 "uuid": "119f811c-1d57-57a0-ac41-eab7216d6ee4", 00:22:41.698 "is_configured": true, 00:22:41.698 "data_offset": 2048, 00:22:41.698 "data_size": 63488 00:22:41.698 }, 00:22:41.698 { 00:22:41.698 "name": "BaseBdev2", 00:22:41.698 "uuid": "f6029920-ea9c-5dc1-ad13-1d95835c23a4", 00:22:41.698 "is_configured": true, 00:22:41.698 "data_offset": 2048, 00:22:41.698 "data_size": 63488 00:22:41.698 }, 00:22:41.698 { 00:22:41.698 "name": "BaseBdev3", 00:22:41.698 "uuid": "18813ae8-70c2-5f1d-bb4b-6fe3e4120ab6", 00:22:41.698 "is_configured": true, 00:22:41.698 "data_offset": 2048, 00:22:41.698 "data_size": 63488 00:22:41.698 } 00:22:41.698 ] 00:22:41.698 }' 00:22:41.698 21:45:02 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:22:41.698 21:45:02 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:41.698 21:45:02 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:22:41.698 21:45:02 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:22:41.698 21:45:02 -- bdev/bdev_raid.sh@604 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:22:41.958 [2024-12-06 21:45:02.303612] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:22:41.958 [2024-12-06 21:45:02.323808] bdev_raid.c:2294:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:22:41.958 [2024-12-06 21:45:02.323907] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:41.958 21:45:02 -- bdev/bdev_raid.sh@607 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:22:41.958 21:45:02 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:22:41.958 21:45:02 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:22:41.958 21:45:02 -- bdev/bdev_raid.sh@119 -- # local 
raid_level=raid5f 00:22:41.958 21:45:02 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:22:41.958 21:45:02 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:22:41.958 21:45:02 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:22:41.958 21:45:02 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:22:41.958 21:45:02 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:22:41.958 21:45:02 -- bdev/bdev_raid.sh@125 -- # local tmp 00:22:41.958 21:45:02 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:41.958 21:45:02 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:42.217 21:45:02 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:22:42.217 "name": "raid_bdev1", 00:22:42.217 "uuid": "f48f55f6-f127-40b7-925b-75dcf30ce41b", 00:22:42.217 "strip_size_kb": 64, 00:22:42.217 "state": "online", 00:22:42.217 "raid_level": "raid5f", 00:22:42.217 "superblock": true, 00:22:42.217 "num_base_bdevs": 3, 00:22:42.217 "num_base_bdevs_discovered": 2, 00:22:42.217 "num_base_bdevs_operational": 2, 00:22:42.217 "base_bdevs_list": [ 00:22:42.217 { 00:22:42.217 "name": null, 00:22:42.217 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:42.217 "is_configured": false, 00:22:42.217 "data_offset": 2048, 00:22:42.217 "data_size": 63488 00:22:42.217 }, 00:22:42.217 { 00:22:42.217 "name": "BaseBdev2", 00:22:42.217 "uuid": "f6029920-ea9c-5dc1-ad13-1d95835c23a4", 00:22:42.217 "is_configured": true, 00:22:42.217 "data_offset": 2048, 00:22:42.217 "data_size": 63488 00:22:42.217 }, 00:22:42.217 { 00:22:42.217 "name": "BaseBdev3", 00:22:42.217 "uuid": "18813ae8-70c2-5f1d-bb4b-6fe3e4120ab6", 00:22:42.217 "is_configured": true, 00:22:42.217 "data_offset": 2048, 00:22:42.217 "data_size": 63488 00:22:42.217 } 00:22:42.217 ] 00:22:42.217 }' 00:22:42.217 21:45:02 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:22:42.217 21:45:02 -- common/autotest_common.sh@10 -- # set +x 00:22:42.476 21:45:02 -- bdev/bdev_raid.sh@610 -- # verify_raid_bdev_process raid_bdev1 none none 00:22:42.476 21:45:02 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:22:42.476 21:45:02 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:22:42.476 21:45:02 -- bdev/bdev_raid.sh@185 -- # local target=none 00:22:42.476 21:45:02 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:22:42.476 21:45:02 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:42.476 21:45:02 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:42.736 21:45:03 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:22:42.736 "name": "raid_bdev1", 00:22:42.736 "uuid": "f48f55f6-f127-40b7-925b-75dcf30ce41b", 00:22:42.736 "strip_size_kb": 64, 00:22:42.736 "state": "online", 00:22:42.736 "raid_level": "raid5f", 00:22:42.736 "superblock": true, 00:22:42.736 "num_base_bdevs": 3, 00:22:42.736 "num_base_bdevs_discovered": 2, 00:22:42.736 "num_base_bdevs_operational": 2, 00:22:42.736 "base_bdevs_list": [ 00:22:42.736 { 00:22:42.736 "name": null, 00:22:42.736 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:42.736 "is_configured": false, 00:22:42.736 "data_offset": 2048, 00:22:42.736 "data_size": 63488 00:22:42.736 }, 00:22:42.736 { 00:22:42.736 "name": "BaseBdev2", 00:22:42.736 "uuid": "f6029920-ea9c-5dc1-ad13-1d95835c23a4", 00:22:42.736 "is_configured": true, 00:22:42.736 "data_offset": 2048, 00:22:42.736 "data_size": 63488 
00:22:42.736 }, 00:22:42.736 { 00:22:42.736 "name": "BaseBdev3", 00:22:42.736 "uuid": "18813ae8-70c2-5f1d-bb4b-6fe3e4120ab6", 00:22:42.736 "is_configured": true, 00:22:42.736 "data_offset": 2048, 00:22:42.736 "data_size": 63488 00:22:42.736 } 00:22:42.736 ] 00:22:42.736 }' 00:22:42.736 21:45:03 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:22:42.736 21:45:03 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:22:42.736 21:45:03 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:22:42.736 21:45:03 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:22:42.736 21:45:03 -- bdev/bdev_raid.sh@613 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:22:42.994 [2024-12-06 21:45:03.443369] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:22:42.994 [2024-12-06 21:45:03.443432] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:22:42.994 [2024-12-06 21:45:03.453797] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000028900 00:22:42.994 [2024-12-06 21:45:03.459738] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:22:42.994 21:45:03 -- bdev/bdev_raid.sh@614 -- # sleep 1 00:22:44.372 21:45:04 -- bdev/bdev_raid.sh@615 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:44.372 21:45:04 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:22:44.372 21:45:04 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:22:44.372 21:45:04 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:22:44.372 21:45:04 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:22:44.373 21:45:04 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:44.373 21:45:04 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:44.373 21:45:04 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:22:44.373 "name": "raid_bdev1", 00:22:44.373 "uuid": "f48f55f6-f127-40b7-925b-75dcf30ce41b", 00:22:44.373 "strip_size_kb": 64, 00:22:44.373 "state": "online", 00:22:44.373 "raid_level": "raid5f", 00:22:44.373 "superblock": true, 00:22:44.373 "num_base_bdevs": 3, 00:22:44.373 "num_base_bdevs_discovered": 3, 00:22:44.373 "num_base_bdevs_operational": 3, 00:22:44.373 "process": { 00:22:44.373 "type": "rebuild", 00:22:44.373 "target": "spare", 00:22:44.373 "progress": { 00:22:44.373 "blocks": 22528, 00:22:44.373 "percent": 17 00:22:44.373 } 00:22:44.373 }, 00:22:44.373 "base_bdevs_list": [ 00:22:44.373 { 00:22:44.373 "name": "spare", 00:22:44.373 "uuid": "119f811c-1d57-57a0-ac41-eab7216d6ee4", 00:22:44.373 "is_configured": true, 00:22:44.373 "data_offset": 2048, 00:22:44.373 "data_size": 63488 00:22:44.373 }, 00:22:44.373 { 00:22:44.373 "name": "BaseBdev2", 00:22:44.373 "uuid": "f6029920-ea9c-5dc1-ad13-1d95835c23a4", 00:22:44.373 "is_configured": true, 00:22:44.373 "data_offset": 2048, 00:22:44.373 "data_size": 63488 00:22:44.373 }, 00:22:44.373 { 00:22:44.373 "name": "BaseBdev3", 00:22:44.373 "uuid": "18813ae8-70c2-5f1d-bb4b-6fe3e4120ab6", 00:22:44.373 "is_configured": true, 00:22:44.373 "data_offset": 2048, 00:22:44.373 "data_size": 63488 00:22:44.373 } 00:22:44.373 ] 00:22:44.373 }' 00:22:44.373 21:45:04 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:22:44.373 21:45:04 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:44.373 21:45:04 -- 
bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:22:44.373 21:45:04 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:22:44.373 21:45:04 -- bdev/bdev_raid.sh@617 -- # '[' true = true ']' 00:22:44.373 21:45:04 -- bdev/bdev_raid.sh@617 -- # '[' = false ']' 00:22:44.373 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 617: [: =: unary operator expected 00:22:44.373 21:45:04 -- bdev/bdev_raid.sh@642 -- # local num_base_bdevs_operational=3 00:22:44.373 21:45:04 -- bdev/bdev_raid.sh@644 -- # '[' raid5f = raid1 ']' 00:22:44.373 21:45:04 -- bdev/bdev_raid.sh@657 -- # local timeout=561 00:22:44.373 21:45:04 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:22:44.373 21:45:04 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:44.373 21:45:04 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:22:44.373 21:45:04 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:22:44.373 21:45:04 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:22:44.373 21:45:04 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:22:44.373 21:45:04 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:44.373 21:45:04 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:44.632 21:45:04 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:22:44.632 "name": "raid_bdev1", 00:22:44.632 "uuid": "f48f55f6-f127-40b7-925b-75dcf30ce41b", 00:22:44.632 "strip_size_kb": 64, 00:22:44.632 "state": "online", 00:22:44.632 "raid_level": "raid5f", 00:22:44.632 "superblock": true, 00:22:44.632 "num_base_bdevs": 3, 00:22:44.632 "num_base_bdevs_discovered": 3, 00:22:44.632 "num_base_bdevs_operational": 3, 00:22:44.632 "process": { 00:22:44.632 "type": "rebuild", 00:22:44.632 "target": "spare", 00:22:44.632 "progress": { 00:22:44.632 "blocks": 28672, 00:22:44.632 "percent": 22 00:22:44.632 } 00:22:44.632 }, 00:22:44.632 "base_bdevs_list": [ 00:22:44.632 { 00:22:44.632 "name": "spare", 00:22:44.632 "uuid": "119f811c-1d57-57a0-ac41-eab7216d6ee4", 00:22:44.632 "is_configured": true, 00:22:44.632 "data_offset": 2048, 00:22:44.632 "data_size": 63488 00:22:44.632 }, 00:22:44.632 { 00:22:44.632 "name": "BaseBdev2", 00:22:44.632 "uuid": "f6029920-ea9c-5dc1-ad13-1d95835c23a4", 00:22:44.632 "is_configured": true, 00:22:44.632 "data_offset": 2048, 00:22:44.632 "data_size": 63488 00:22:44.632 }, 00:22:44.632 { 00:22:44.632 "name": "BaseBdev3", 00:22:44.632 "uuid": "18813ae8-70c2-5f1d-bb4b-6fe3e4120ab6", 00:22:44.632 "is_configured": true, 00:22:44.632 "data_offset": 2048, 00:22:44.632 "data_size": 63488 00:22:44.632 } 00:22:44.632 ] 00:22:44.632 }' 00:22:44.632 21:45:04 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:22:44.632 21:45:04 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:44.632 21:45:04 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:22:44.632 21:45:04 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:22:44.632 21:45:04 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:22:45.567 21:45:05 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:22:45.567 21:45:05 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:45.567 21:45:05 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:22:45.567 21:45:05 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:22:45.567 21:45:05 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:22:45.567 21:45:05 -- 
bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:22:45.567 21:45:05 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:45.567 21:45:05 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:45.826 21:45:06 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:22:45.826 "name": "raid_bdev1", 00:22:45.826 "uuid": "f48f55f6-f127-40b7-925b-75dcf30ce41b", 00:22:45.826 "strip_size_kb": 64, 00:22:45.826 "state": "online", 00:22:45.826 "raid_level": "raid5f", 00:22:45.826 "superblock": true, 00:22:45.826 "num_base_bdevs": 3, 00:22:45.826 "num_base_bdevs_discovered": 3, 00:22:45.826 "num_base_bdevs_operational": 3, 00:22:45.826 "process": { 00:22:45.826 "type": "rebuild", 00:22:45.826 "target": "spare", 00:22:45.826 "progress": { 00:22:45.826 "blocks": 53248, 00:22:45.826 "percent": 41 00:22:45.826 } 00:22:45.826 }, 00:22:45.826 "base_bdevs_list": [ 00:22:45.826 { 00:22:45.826 "name": "spare", 00:22:45.826 "uuid": "119f811c-1d57-57a0-ac41-eab7216d6ee4", 00:22:45.826 "is_configured": true, 00:22:45.826 "data_offset": 2048, 00:22:45.826 "data_size": 63488 00:22:45.826 }, 00:22:45.826 { 00:22:45.826 "name": "BaseBdev2", 00:22:45.826 "uuid": "f6029920-ea9c-5dc1-ad13-1d95835c23a4", 00:22:45.826 "is_configured": true, 00:22:45.826 "data_offset": 2048, 00:22:45.826 "data_size": 63488 00:22:45.826 }, 00:22:45.826 { 00:22:45.826 "name": "BaseBdev3", 00:22:45.826 "uuid": "18813ae8-70c2-5f1d-bb4b-6fe3e4120ab6", 00:22:45.826 "is_configured": true, 00:22:45.826 "data_offset": 2048, 00:22:45.826 "data_size": 63488 00:22:45.826 } 00:22:45.826 ] 00:22:45.826 }' 00:22:45.826 21:45:06 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:22:45.826 21:45:06 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:45.826 21:45:06 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:22:45.826 21:45:06 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:22:45.826 21:45:06 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:22:46.763 21:45:07 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:22:46.763 21:45:07 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:46.763 21:45:07 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:22:46.763 21:45:07 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:22:46.763 21:45:07 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:22:46.763 21:45:07 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:22:46.763 21:45:07 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:46.763 21:45:07 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:47.022 21:45:07 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:22:47.022 "name": "raid_bdev1", 00:22:47.022 "uuid": "f48f55f6-f127-40b7-925b-75dcf30ce41b", 00:22:47.022 "strip_size_kb": 64, 00:22:47.022 "state": "online", 00:22:47.022 "raid_level": "raid5f", 00:22:47.022 "superblock": true, 00:22:47.022 "num_base_bdevs": 3, 00:22:47.022 "num_base_bdevs_discovered": 3, 00:22:47.022 "num_base_bdevs_operational": 3, 00:22:47.022 "process": { 00:22:47.022 "type": "rebuild", 00:22:47.022 "target": "spare", 00:22:47.022 "progress": { 00:22:47.022 "blocks": 79872, 00:22:47.022 "percent": 62 00:22:47.022 } 00:22:47.022 }, 00:22:47.022 "base_bdevs_list": [ 00:22:47.022 { 00:22:47.022 "name": "spare", 00:22:47.022 "uuid": 
"119f811c-1d57-57a0-ac41-eab7216d6ee4", 00:22:47.022 "is_configured": true, 00:22:47.022 "data_offset": 2048, 00:22:47.022 "data_size": 63488 00:22:47.022 }, 00:22:47.022 { 00:22:47.022 "name": "BaseBdev2", 00:22:47.022 "uuid": "f6029920-ea9c-5dc1-ad13-1d95835c23a4", 00:22:47.022 "is_configured": true, 00:22:47.022 "data_offset": 2048, 00:22:47.022 "data_size": 63488 00:22:47.022 }, 00:22:47.022 { 00:22:47.022 "name": "BaseBdev3", 00:22:47.022 "uuid": "18813ae8-70c2-5f1d-bb4b-6fe3e4120ab6", 00:22:47.022 "is_configured": true, 00:22:47.022 "data_offset": 2048, 00:22:47.022 "data_size": 63488 00:22:47.022 } 00:22:47.022 ] 00:22:47.023 }' 00:22:47.023 21:45:07 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:22:47.023 21:45:07 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:47.023 21:45:07 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:22:47.023 21:45:07 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:22:47.023 21:45:07 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:22:48.401 21:45:08 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:22:48.401 21:45:08 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:48.401 21:45:08 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:22:48.401 21:45:08 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:22:48.401 21:45:08 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:22:48.401 21:45:08 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:22:48.401 21:45:08 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:48.401 21:45:08 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:48.401 21:45:08 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:22:48.401 "name": "raid_bdev1", 00:22:48.401 "uuid": "f48f55f6-f127-40b7-925b-75dcf30ce41b", 00:22:48.401 "strip_size_kb": 64, 00:22:48.401 "state": "online", 00:22:48.401 "raid_level": "raid5f", 00:22:48.401 "superblock": true, 00:22:48.401 "num_base_bdevs": 3, 00:22:48.401 "num_base_bdevs_discovered": 3, 00:22:48.401 "num_base_bdevs_operational": 3, 00:22:48.401 "process": { 00:22:48.401 "type": "rebuild", 00:22:48.401 "target": "spare", 00:22:48.401 "progress": { 00:22:48.401 "blocks": 104448, 00:22:48.401 "percent": 82 00:22:48.401 } 00:22:48.401 }, 00:22:48.401 "base_bdevs_list": [ 00:22:48.401 { 00:22:48.401 "name": "spare", 00:22:48.401 "uuid": "119f811c-1d57-57a0-ac41-eab7216d6ee4", 00:22:48.401 "is_configured": true, 00:22:48.401 "data_offset": 2048, 00:22:48.401 "data_size": 63488 00:22:48.401 }, 00:22:48.401 { 00:22:48.401 "name": "BaseBdev2", 00:22:48.401 "uuid": "f6029920-ea9c-5dc1-ad13-1d95835c23a4", 00:22:48.401 "is_configured": true, 00:22:48.401 "data_offset": 2048, 00:22:48.401 "data_size": 63488 00:22:48.401 }, 00:22:48.401 { 00:22:48.401 "name": "BaseBdev3", 00:22:48.401 "uuid": "18813ae8-70c2-5f1d-bb4b-6fe3e4120ab6", 00:22:48.401 "is_configured": true, 00:22:48.401 "data_offset": 2048, 00:22:48.401 "data_size": 63488 00:22:48.401 } 00:22:48.401 ] 00:22:48.401 }' 00:22:48.401 21:45:08 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:22:48.401 21:45:08 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:48.401 21:45:08 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:22:48.401 21:45:08 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:22:48.401 21:45:08 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:22:49.337 
[2024-12-06 21:45:09.708894] bdev_raid.c:2568:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:22:49.337 [2024-12-06 21:45:09.708986] bdev_raid.c:2285:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:22:49.337 [2024-12-06 21:45:09.709112] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:49.337 21:45:09 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:22:49.337 21:45:09 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:49.337 21:45:09 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:22:49.337 21:45:09 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:22:49.337 21:45:09 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:22:49.337 21:45:09 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:22:49.337 21:45:09 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:49.338 21:45:09 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:49.597 21:45:09 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:22:49.597 "name": "raid_bdev1", 00:22:49.597 "uuid": "f48f55f6-f127-40b7-925b-75dcf30ce41b", 00:22:49.597 "strip_size_kb": 64, 00:22:49.597 "state": "online", 00:22:49.597 "raid_level": "raid5f", 00:22:49.597 "superblock": true, 00:22:49.597 "num_base_bdevs": 3, 00:22:49.597 "num_base_bdevs_discovered": 3, 00:22:49.597 "num_base_bdevs_operational": 3, 00:22:49.597 "base_bdevs_list": [ 00:22:49.597 { 00:22:49.597 "name": "spare", 00:22:49.597 "uuid": "119f811c-1d57-57a0-ac41-eab7216d6ee4", 00:22:49.597 "is_configured": true, 00:22:49.597 "data_offset": 2048, 00:22:49.597 "data_size": 63488 00:22:49.597 }, 00:22:49.597 { 00:22:49.597 "name": "BaseBdev2", 00:22:49.597 "uuid": "f6029920-ea9c-5dc1-ad13-1d95835c23a4", 00:22:49.597 "is_configured": true, 00:22:49.597 "data_offset": 2048, 00:22:49.597 "data_size": 63488 00:22:49.597 }, 00:22:49.597 { 00:22:49.597 "name": "BaseBdev3", 00:22:49.597 "uuid": "18813ae8-70c2-5f1d-bb4b-6fe3e4120ab6", 00:22:49.597 "is_configured": true, 00:22:49.597 "data_offset": 2048, 00:22:49.597 "data_size": 63488 00:22:49.597 } 00:22:49.597 ] 00:22:49.597 }' 00:22:49.597 21:45:09 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:22:49.597 21:45:09 -- bdev/bdev_raid.sh@190 -- # [[ none == \r\e\b\u\i\l\d ]] 00:22:49.597 21:45:09 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:22:49.597 21:45:09 -- bdev/bdev_raid.sh@191 -- # [[ none == \s\p\a\r\e ]] 00:22:49.597 21:45:09 -- bdev/bdev_raid.sh@660 -- # break 00:22:49.597 21:45:09 -- bdev/bdev_raid.sh@666 -- # verify_raid_bdev_process raid_bdev1 none none 00:22:49.597 21:45:09 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:22:49.597 21:45:09 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:22:49.597 21:45:09 -- bdev/bdev_raid.sh@185 -- # local target=none 00:22:49.597 21:45:09 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:22:49.597 21:45:09 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:49.597 21:45:09 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:49.856 21:45:10 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:22:49.856 "name": "raid_bdev1", 00:22:49.856 "uuid": "f48f55f6-f127-40b7-925b-75dcf30ce41b", 00:22:49.856 "strip_size_kb": 64, 00:22:49.856 "state": "online", 00:22:49.856 
"raid_level": "raid5f", 00:22:49.856 "superblock": true, 00:22:49.856 "num_base_bdevs": 3, 00:22:49.856 "num_base_bdevs_discovered": 3, 00:22:49.856 "num_base_bdevs_operational": 3, 00:22:49.856 "base_bdevs_list": [ 00:22:49.856 { 00:22:49.856 "name": "spare", 00:22:49.856 "uuid": "119f811c-1d57-57a0-ac41-eab7216d6ee4", 00:22:49.856 "is_configured": true, 00:22:49.856 "data_offset": 2048, 00:22:49.856 "data_size": 63488 00:22:49.856 }, 00:22:49.856 { 00:22:49.856 "name": "BaseBdev2", 00:22:49.856 "uuid": "f6029920-ea9c-5dc1-ad13-1d95835c23a4", 00:22:49.856 "is_configured": true, 00:22:49.856 "data_offset": 2048, 00:22:49.856 "data_size": 63488 00:22:49.856 }, 00:22:49.856 { 00:22:49.856 "name": "BaseBdev3", 00:22:49.856 "uuid": "18813ae8-70c2-5f1d-bb4b-6fe3e4120ab6", 00:22:49.856 "is_configured": true, 00:22:49.856 "data_offset": 2048, 00:22:49.856 "data_size": 63488 00:22:49.856 } 00:22:49.856 ] 00:22:49.856 }' 00:22:49.856 21:45:10 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:22:49.856 21:45:10 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:22:49.856 21:45:10 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:22:49.856 21:45:10 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:22:49.856 21:45:10 -- bdev/bdev_raid.sh@667 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:22:49.856 21:45:10 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:22:49.856 21:45:10 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:22:49.856 21:45:10 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:22:49.856 21:45:10 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:22:49.856 21:45:10 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:22:49.856 21:45:10 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:22:49.856 21:45:10 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:22:49.856 21:45:10 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:22:49.856 21:45:10 -- bdev/bdev_raid.sh@125 -- # local tmp 00:22:49.856 21:45:10 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:49.856 21:45:10 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:50.115 21:45:10 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:22:50.115 "name": "raid_bdev1", 00:22:50.115 "uuid": "f48f55f6-f127-40b7-925b-75dcf30ce41b", 00:22:50.115 "strip_size_kb": 64, 00:22:50.115 "state": "online", 00:22:50.115 "raid_level": "raid5f", 00:22:50.115 "superblock": true, 00:22:50.115 "num_base_bdevs": 3, 00:22:50.115 "num_base_bdevs_discovered": 3, 00:22:50.115 "num_base_bdevs_operational": 3, 00:22:50.115 "base_bdevs_list": [ 00:22:50.115 { 00:22:50.115 "name": "spare", 00:22:50.115 "uuid": "119f811c-1d57-57a0-ac41-eab7216d6ee4", 00:22:50.115 "is_configured": true, 00:22:50.115 "data_offset": 2048, 00:22:50.115 "data_size": 63488 00:22:50.115 }, 00:22:50.115 { 00:22:50.115 "name": "BaseBdev2", 00:22:50.115 "uuid": "f6029920-ea9c-5dc1-ad13-1d95835c23a4", 00:22:50.115 "is_configured": true, 00:22:50.115 "data_offset": 2048, 00:22:50.115 "data_size": 63488 00:22:50.115 }, 00:22:50.115 { 00:22:50.115 "name": "BaseBdev3", 00:22:50.115 "uuid": "18813ae8-70c2-5f1d-bb4b-6fe3e4120ab6", 00:22:50.115 "is_configured": true, 00:22:50.115 "data_offset": 2048, 00:22:50.115 "data_size": 63488 00:22:50.115 } 00:22:50.115 ] 00:22:50.115 }' 00:22:50.115 21:45:10 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:22:50.115 21:45:10 -- 
common/autotest_common.sh@10 -- # set +x 00:22:50.378 21:45:10 -- bdev/bdev_raid.sh@670 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:22:50.697 [2024-12-06 21:45:10.897365] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:22:50.697 [2024-12-06 21:45:10.897400] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:22:50.697 [2024-12-06 21:45:10.897528] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:50.697 [2024-12-06 21:45:10.897614] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:22:50.697 [2024-12-06 21:45:10.897642] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000009980 name raid_bdev1, state offline 00:22:50.697 21:45:10 -- bdev/bdev_raid.sh@671 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:50.697 21:45:10 -- bdev/bdev_raid.sh@671 -- # jq length 00:22:50.988 21:45:11 -- bdev/bdev_raid.sh@671 -- # [[ 0 == 0 ]] 00:22:50.988 21:45:11 -- bdev/bdev_raid.sh@673 -- # '[' false = true ']' 00:22:50.988 21:45:11 -- bdev/bdev_raid.sh@687 -- # nbd_start_disks /var/tmp/spdk-raid.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:22:50.988 21:45:11 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:22:50.988 21:45:11 -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:22:50.988 21:45:11 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:22:50.988 21:45:11 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:22:50.988 21:45:11 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:22:50.988 21:45:11 -- bdev/nbd_common.sh@12 -- # local i 00:22:50.988 21:45:11 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:22:50.988 21:45:11 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:22:50.988 21:45:11 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:22:50.988 /dev/nbd0 00:22:50.988 21:45:11 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:22:50.988 21:45:11 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:22:50.988 21:45:11 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:22:50.988 21:45:11 -- common/autotest_common.sh@867 -- # local i 00:22:50.988 21:45:11 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:22:50.988 21:45:11 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:22:50.988 21:45:11 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:22:50.988 21:45:11 -- common/autotest_common.sh@871 -- # break 00:22:50.988 21:45:11 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:22:50.988 21:45:11 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:22:50.988 21:45:11 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:22:50.988 1+0 records in 00:22:50.988 1+0 records out 00:22:50.988 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000258894 s, 15.8 MB/s 00:22:50.988 21:45:11 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:50.988 21:45:11 -- common/autotest_common.sh@884 -- # size=4096 00:22:50.988 21:45:11 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:50.988 21:45:11 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:22:50.988 21:45:11 -- 
common/autotest_common.sh@887 -- # return 0 00:22:50.988 21:45:11 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:22:50.988 21:45:11 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:22:50.988 21:45:11 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd1 00:22:51.246 /dev/nbd1 00:22:51.246 21:45:11 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:22:51.246 21:45:11 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:22:51.246 21:45:11 -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:22:51.246 21:45:11 -- common/autotest_common.sh@867 -- # local i 00:22:51.246 21:45:11 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:22:51.246 21:45:11 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:22:51.246 21:45:11 -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:22:51.246 21:45:11 -- common/autotest_common.sh@871 -- # break 00:22:51.246 21:45:11 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:22:51.246 21:45:11 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:22:51.246 21:45:11 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:22:51.246 1+0 records in 00:22:51.246 1+0 records out 00:22:51.246 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000280452 s, 14.6 MB/s 00:22:51.246 21:45:11 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:51.246 21:45:11 -- common/autotest_common.sh@884 -- # size=4096 00:22:51.246 21:45:11 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:51.246 21:45:11 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:22:51.246 21:45:11 -- common/autotest_common.sh@887 -- # return 0 00:22:51.246 21:45:11 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:22:51.246 21:45:11 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:22:51.246 21:45:11 -- bdev/bdev_raid.sh@688 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:22:51.504 21:45:11 -- bdev/bdev_raid.sh@689 -- # nbd_stop_disks /var/tmp/spdk-raid.sock '/dev/nbd0 /dev/nbd1' 00:22:51.504 21:45:11 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:22:51.505 21:45:11 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:22:51.505 21:45:11 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:22:51.505 21:45:11 -- bdev/nbd_common.sh@51 -- # local i 00:22:51.505 21:45:11 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:22:51.505 21:45:11 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:22:51.763 21:45:12 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:22:51.763 21:45:12 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:22:51.763 21:45:12 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:22:51.763 21:45:12 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:22:51.763 21:45:12 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:22:51.763 21:45:12 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:22:51.763 21:45:12 -- bdev/nbd_common.sh@41 -- # break 00:22:51.763 21:45:12 -- bdev/nbd_common.sh@45 -- # return 0 00:22:51.763 21:45:12 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:22:51.763 21:45:12 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:22:52.022 21:45:12 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:22:52.022 21:45:12 -- 
bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:22:52.022 21:45:12 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:22:52.022 21:45:12 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:22:52.022 21:45:12 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:22:52.022 21:45:12 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:22:52.022 21:45:12 -- bdev/nbd_common.sh@41 -- # break 00:22:52.022 21:45:12 -- bdev/nbd_common.sh@45 -- # return 0 00:22:52.022 21:45:12 -- bdev/bdev_raid.sh@692 -- # '[' true = true ']' 00:22:52.022 21:45:12 -- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}" 00:22:52.022 21:45:12 -- bdev/bdev_raid.sh@695 -- # '[' -z BaseBdev1 ']' 00:22:52.022 21:45:12 -- bdev/bdev_raid.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev1 00:22:52.282 21:45:12 -- bdev/bdev_raid.sh@699 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:22:52.282 [2024-12-06 21:45:12.768912] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:22:52.282 [2024-12-06 21:45:12.768988] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:52.282 [2024-12-06 21:45:12.769017] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x51600000a880 00:22:52.282 [2024-12-06 21:45:12.769031] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:52.282 [2024-12-06 21:45:12.771272] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:52.282 [2024-12-06 21:45:12.771316] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:22:52.282 [2024-12-06 21:45:12.771405] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev BaseBdev1 00:22:52.282 [2024-12-06 21:45:12.771488] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:22:52.282 BaseBdev1 00:22:52.541 21:45:12 -- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}" 00:22:52.541 21:45:12 -- bdev/bdev_raid.sh@695 -- # '[' -z BaseBdev2 ']' 00:22:52.541 21:45:12 -- bdev/bdev_raid.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev2 00:22:52.541 21:45:12 -- bdev/bdev_raid.sh@699 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:22:52.801 [2024-12-06 21:45:13.140992] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:22:52.801 [2024-12-06 21:45:13.141203] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:52.801 [2024-12-06 21:45:13.141273] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x51600000b180 00:22:52.801 [2024-12-06 21:45:13.141395] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:52.801 [2024-12-06 21:45:13.141891] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:52.801 [2024-12-06 21:45:13.142074] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:22:52.801 [2024-12-06 21:45:13.142276] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev BaseBdev2 00:22:52.801 [2024-12-06 21:45:13.142439] bdev_raid.c:3237:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev2 (3) greater than 
existing raid bdev raid_bdev1 (1) 00:22:52.801 [2024-12-06 21:45:13.142592] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:22:52.801 [2024-12-06 21:45:13.142730] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x51600000ae80 name raid_bdev1, state configuring 00:22:52.801 [2024-12-06 21:45:13.142940] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:22:52.801 BaseBdev2 00:22:52.801 21:45:13 -- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}" 00:22:52.801 21:45:13 -- bdev/bdev_raid.sh@695 -- # '[' -z BaseBdev3 ']' 00:22:52.801 21:45:13 -- bdev/bdev_raid.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev3 00:22:53.060 21:45:13 -- bdev/bdev_raid.sh@699 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:22:53.060 [2024-12-06 21:45:13.489062] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:22:53.060 [2024-12-06 21:45:13.489126] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:53.060 [2024-12-06 21:45:13.489159] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x51600000b780 00:22:53.060 [2024-12-06 21:45:13.489172] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:53.061 [2024-12-06 21:45:13.489602] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:53.061 [2024-12-06 21:45:13.489624] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:22:53.061 [2024-12-06 21:45:13.489709] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev BaseBdev3 00:22:53.061 [2024-12-06 21:45:13.489735] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:22:53.061 BaseBdev3 00:22:53.061 21:45:13 -- bdev/bdev_raid.sh@701 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:22:53.320 21:45:13 -- bdev/bdev_raid.sh@702 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:22:53.580 [2024-12-06 21:45:13.853158] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:22:53.580 [2024-12-06 21:45:13.853218] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:53.580 [2024-12-06 21:45:13.853247] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x51600000ba80 00:22:53.580 [2024-12-06 21:45:13.853260] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:53.580 [2024-12-06 21:45:13.853767] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:53.580 [2024-12-06 21:45:13.853813] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:22:53.580 [2024-12-06 21:45:13.853944] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev spare 00:22:53.580 [2024-12-06 21:45:13.853972] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:22:53.580 spare 00:22:53.580 21:45:13 -- bdev/bdev_raid.sh@704 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:22:53.580 21:45:13 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:22:53.580 21:45:13 -- bdev/bdev_raid.sh@118 -- # local 
expected_state=online 00:22:53.580 21:45:13 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:22:53.580 21:45:13 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:22:53.580 21:45:13 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:22:53.580 21:45:13 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:22:53.580 21:45:13 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:22:53.580 21:45:13 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:22:53.580 21:45:13 -- bdev/bdev_raid.sh@125 -- # local tmp 00:22:53.580 21:45:13 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:53.580 21:45:13 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:53.580 [2024-12-06 21:45:13.954108] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x51600000b480 00:22:53.580 [2024-12-06 21:45:13.954153] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:22:53.580 [2024-12-06 21:45:13.954305] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000046fb0 00:22:53.580 [2024-12-06 21:45:13.958533] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x51600000b480 00:22:53.580 [2024-12-06 21:45:13.958565] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x51600000b480 00:22:53.580 [2024-12-06 21:45:13.958773] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:53.840 21:45:14 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:22:53.840 "name": "raid_bdev1", 00:22:53.840 "uuid": "f48f55f6-f127-40b7-925b-75dcf30ce41b", 00:22:53.840 "strip_size_kb": 64, 00:22:53.840 "state": "online", 00:22:53.840 "raid_level": "raid5f", 00:22:53.840 "superblock": true, 00:22:53.840 "num_base_bdevs": 3, 00:22:53.840 "num_base_bdevs_discovered": 3, 00:22:53.840 "num_base_bdevs_operational": 3, 00:22:53.840 "base_bdevs_list": [ 00:22:53.840 { 00:22:53.840 "name": "spare", 00:22:53.840 "uuid": "119f811c-1d57-57a0-ac41-eab7216d6ee4", 00:22:53.840 "is_configured": true, 00:22:53.840 "data_offset": 2048, 00:22:53.840 "data_size": 63488 00:22:53.840 }, 00:22:53.840 { 00:22:53.840 "name": "BaseBdev2", 00:22:53.840 "uuid": "f6029920-ea9c-5dc1-ad13-1d95835c23a4", 00:22:53.840 "is_configured": true, 00:22:53.840 "data_offset": 2048, 00:22:53.840 "data_size": 63488 00:22:53.840 }, 00:22:53.840 { 00:22:53.840 "name": "BaseBdev3", 00:22:53.840 "uuid": "18813ae8-70c2-5f1d-bb4b-6fe3e4120ab6", 00:22:53.840 "is_configured": true, 00:22:53.840 "data_offset": 2048, 00:22:53.840 "data_size": 63488 00:22:53.840 } 00:22:53.840 ] 00:22:53.840 }' 00:22:53.840 21:45:14 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:22:53.840 21:45:14 -- common/autotest_common.sh@10 -- # set +x 00:22:54.099 21:45:14 -- bdev/bdev_raid.sh@705 -- # verify_raid_bdev_process raid_bdev1 none none 00:22:54.099 21:45:14 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:22:54.099 21:45:14 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:22:54.099 21:45:14 -- bdev/bdev_raid.sh@185 -- # local target=none 00:22:54.099 21:45:14 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:22:54.099 21:45:14 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:54.099 21:45:14 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:54.359 21:45:14 -- 
bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:22:54.359 "name": "raid_bdev1", 00:22:54.359 "uuid": "f48f55f6-f127-40b7-925b-75dcf30ce41b", 00:22:54.359 "strip_size_kb": 64, 00:22:54.359 "state": "online", 00:22:54.359 "raid_level": "raid5f", 00:22:54.359 "superblock": true, 00:22:54.359 "num_base_bdevs": 3, 00:22:54.359 "num_base_bdevs_discovered": 3, 00:22:54.359 "num_base_bdevs_operational": 3, 00:22:54.359 "base_bdevs_list": [ 00:22:54.359 { 00:22:54.359 "name": "spare", 00:22:54.359 "uuid": "119f811c-1d57-57a0-ac41-eab7216d6ee4", 00:22:54.359 "is_configured": true, 00:22:54.359 "data_offset": 2048, 00:22:54.359 "data_size": 63488 00:22:54.359 }, 00:22:54.359 { 00:22:54.359 "name": "BaseBdev2", 00:22:54.359 "uuid": "f6029920-ea9c-5dc1-ad13-1d95835c23a4", 00:22:54.359 "is_configured": true, 00:22:54.359 "data_offset": 2048, 00:22:54.359 "data_size": 63488 00:22:54.359 }, 00:22:54.359 { 00:22:54.359 "name": "BaseBdev3", 00:22:54.359 "uuid": "18813ae8-70c2-5f1d-bb4b-6fe3e4120ab6", 00:22:54.359 "is_configured": true, 00:22:54.359 "data_offset": 2048, 00:22:54.359 "data_size": 63488 00:22:54.359 } 00:22:54.359 ] 00:22:54.359 }' 00:22:54.359 21:45:14 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:22:54.359 21:45:14 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:22:54.359 21:45:14 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:22:54.359 21:45:14 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:22:54.359 21:45:14 -- bdev/bdev_raid.sh@706 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:54.359 21:45:14 -- bdev/bdev_raid.sh@706 -- # jq -r '.[].base_bdevs_list[0].name' 00:22:54.620 21:45:14 -- bdev/bdev_raid.sh@706 -- # [[ spare == \s\p\a\r\e ]] 00:22:54.620 21:45:14 -- bdev/bdev_raid.sh@709 -- # killprocess 83818 00:22:54.620 21:45:14 -- common/autotest_common.sh@936 -- # '[' -z 83818 ']' 00:22:54.620 21:45:14 -- common/autotest_common.sh@940 -- # kill -0 83818 00:22:54.620 21:45:14 -- common/autotest_common.sh@941 -- # uname 00:22:54.620 21:45:14 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:22:54.620 21:45:14 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 83818 00:22:54.620 killing process with pid 83818 00:22:54.620 Received shutdown signal, test time was about 60.000000 seconds 00:22:54.620 00:22:54.620 Latency(us) 00:22:54.620 [2024-12-06T21:45:15.117Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:54.620 [2024-12-06T21:45:15.117Z] =================================================================================================================== 00:22:54.620 [2024-12-06T21:45:15.117Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:22:54.620 21:45:14 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:22:54.620 21:45:14 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:22:54.620 21:45:14 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 83818' 00:22:54.620 21:45:14 -- common/autotest_common.sh@955 -- # kill 83818 00:22:54.620 [2024-12-06 21:45:14.911788] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:22:54.620 21:45:14 -- common/autotest_common.sh@960 -- # wait 83818 00:22:54.620 [2024-12-06 21:45:14.911880] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:54.620 [2024-12-06 21:45:14.911979] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 
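Annotation (editorial sketch, not captured output): two details of the verification above are easy to miss. First, the cmp -i 1048576 between /dev/nbd0 (BaseBdev1) and /dev/nbd1 (spare) skips the first 1,048,576 bytes on both devices, which matches the data_offset of 2048 blocks * 512 bytes reported for every base bdev, so only the data region behind the superblock is compared. Second, each bdev_passthru_delete / bdev_passthru_create pair re-registers a base bdev and triggers the raid examine path; when the superblock found on BaseBdev2 carries seq_number 3 while the assembled raid_bdev1 was built from seq_number 1, the stale assembly is deleted and rebuilt from the newer metadata, as the DEBUG lines record. One way to watch the reassembly from outside, reusing the rpc.py invocation and socket path seen throughout this log:

    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock \
        bdev_raid_get_bdevs all | jq -r '.[] | "\(.name) \(.state)"'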
00:22:54.620 [2024-12-06 21:45:14.912006] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x51600000b480 name raid_bdev1, state offline 00:22:54.878 [2024-12-06 21:45:15.172370] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:22:55.814 21:45:16 -- bdev/bdev_raid.sh@711 -- # return 0 00:22:55.814 00:22:55.814 real 0m21.074s 00:22:55.814 user 0m31.129s 00:22:55.814 sys 0m2.664s 00:22:55.814 ************************************ 00:22:55.814 END TEST raid5f_rebuild_test_sb 00:22:55.814 ************************************ 00:22:55.814 21:45:16 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:22:55.814 21:45:16 -- common/autotest_common.sh@10 -- # set +x 00:22:55.814 21:45:16 -- bdev/bdev_raid.sh@743 -- # for n in {3..4} 00:22:55.814 21:45:16 -- bdev/bdev_raid.sh@744 -- # run_test raid5f_state_function_test raid_state_function_test raid5f 4 false 00:22:55.814 21:45:16 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:22:55.814 21:45:16 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:22:55.814 21:45:16 -- common/autotest_common.sh@10 -- # set +x 00:22:55.814 ************************************ 00:22:55.814 START TEST raid5f_state_function_test 00:22:55.814 ************************************ 00:22:55.814 21:45:16 -- common/autotest_common.sh@1114 -- # raid_state_function_test raid5f 4 false 00:22:55.814 21:45:16 -- bdev/bdev_raid.sh@202 -- # local raid_level=raid5f 00:22:55.814 21:45:16 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=4 00:22:55.814 21:45:16 -- bdev/bdev_raid.sh@204 -- # local superblock=false 00:22:55.814 21:45:16 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:22:55.814 21:45:16 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:22:55.814 21:45:16 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:22:55.814 21:45:16 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev1 00:22:55.814 21:45:16 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:22:55.814 21:45:16 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:22:55.814 21:45:16 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev2 00:22:55.814 21:45:16 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:22:55.814 21:45:16 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:22:55.814 21:45:16 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev3 00:22:55.814 21:45:16 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:22:55.814 21:45:16 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:22:55.814 21:45:16 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev4 00:22:55.814 21:45:16 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:22:55.814 21:45:16 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:22:55.814 21:45:16 -- bdev/bdev_raid.sh@206 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:22:55.814 21:45:16 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:22:55.814 21:45:16 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:22:55.814 21:45:16 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:22:55.814 21:45:16 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:22:55.814 21:45:16 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:22:55.814 21:45:16 -- bdev/bdev_raid.sh@212 -- # '[' raid5f '!=' raid1 ']' 00:22:55.814 21:45:16 -- bdev/bdev_raid.sh@213 -- # strip_size=64 00:22:55.814 21:45:16 -- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64' 00:22:55.814 21:45:16 -- bdev/bdev_raid.sh@219 -- # '[' false = true ']' 00:22:55.814 21:45:16 -- bdev/bdev_raid.sh@222 -- # superblock_create_arg= 00:22:55.814 21:45:16 -- bdev/bdev_raid.sh@226 -- # 
raid_pid=84390 00:22:55.814 Process raid pid: 84390 00:22:55.814 21:45:16 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 84390' 00:22:55.814 21:45:16 -- bdev/bdev_raid.sh@228 -- # waitforlisten 84390 /var/tmp/spdk-raid.sock 00:22:55.814 21:45:16 -- common/autotest_common.sh@829 -- # '[' -z 84390 ']' 00:22:55.814 21:45:16 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:22:55.814 21:45:16 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:22:55.814 21:45:16 -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:55.814 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:22:55.814 21:45:16 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:22:55.814 21:45:16 -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:55.814 21:45:16 -- common/autotest_common.sh@10 -- # set +x 00:22:55.814 [2024-12-06 21:45:16.180926] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:22:55.814 [2024-12-06 21:45:16.181071] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:56.072 [2024-12-06 21:45:16.348094] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:56.072 [2024-12-06 21:45:16.501419] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:56.332 [2024-12-06 21:45:16.647951] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:22:56.899 21:45:17 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:56.899 21:45:17 -- common/autotest_common.sh@862 -- # return 0 00:22:56.899 21:45:17 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:22:56.899 [2024-12-06 21:45:17.322846] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:22:56.899 [2024-12-06 21:45:17.322911] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:22:56.899 [2024-12-06 21:45:17.322925] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:22:56.899 [2024-12-06 21:45:17.322939] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:22:56.899 [2024-12-06 21:45:17.322950] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:22:56.899 [2024-12-06 21:45:17.322962] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:22:56.899 [2024-12-06 21:45:17.322976] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:22:56.899 [2024-12-06 21:45:17.322987] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:22:56.899 21:45:17 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:22:56.899 21:45:17 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:22:56.899 21:45:17 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:22:56.899 21:45:17 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:22:56.899 21:45:17 -- 
bdev/bdev_raid.sh@120 -- # local strip_size=64 00:22:56.899 21:45:17 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:22:56.899 21:45:17 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:22:56.899 21:45:17 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:22:56.899 21:45:17 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:22:56.899 21:45:17 -- bdev/bdev_raid.sh@125 -- # local tmp 00:22:56.899 21:45:17 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:56.899 21:45:17 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:57.157 21:45:17 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:22:57.157 "name": "Existed_Raid", 00:22:57.157 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:57.157 "strip_size_kb": 64, 00:22:57.157 "state": "configuring", 00:22:57.157 "raid_level": "raid5f", 00:22:57.157 "superblock": false, 00:22:57.157 "num_base_bdevs": 4, 00:22:57.157 "num_base_bdevs_discovered": 0, 00:22:57.157 "num_base_bdevs_operational": 4, 00:22:57.157 "base_bdevs_list": [ 00:22:57.157 { 00:22:57.157 "name": "BaseBdev1", 00:22:57.157 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:57.157 "is_configured": false, 00:22:57.157 "data_offset": 0, 00:22:57.157 "data_size": 0 00:22:57.157 }, 00:22:57.157 { 00:22:57.157 "name": "BaseBdev2", 00:22:57.157 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:57.157 "is_configured": false, 00:22:57.157 "data_offset": 0, 00:22:57.157 "data_size": 0 00:22:57.157 }, 00:22:57.157 { 00:22:57.157 "name": "BaseBdev3", 00:22:57.157 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:57.157 "is_configured": false, 00:22:57.157 "data_offset": 0, 00:22:57.157 "data_size": 0 00:22:57.157 }, 00:22:57.157 { 00:22:57.157 "name": "BaseBdev4", 00:22:57.157 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:57.157 "is_configured": false, 00:22:57.157 "data_offset": 0, 00:22:57.157 "data_size": 0 00:22:57.157 } 00:22:57.157 ] 00:22:57.157 }' 00:22:57.157 21:45:17 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:22:57.157 21:45:17 -- common/autotest_common.sh@10 -- # set +x 00:22:57.415 21:45:17 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:22:57.673 [2024-12-06 21:45:17.991035] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:22:57.673 [2024-12-06 21:45:17.991078] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000006380 name Existed_Raid, state configuring 00:22:57.673 21:45:18 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:22:57.931 [2024-12-06 21:45:18.211112] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:22:57.931 [2024-12-06 21:45:18.211160] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:22:57.931 [2024-12-06 21:45:18.211172] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:22:57.931 [2024-12-06 21:45:18.211185] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:22:57.931 [2024-12-06 21:45:18.211193] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:22:57.931 [2024-12-06 21:45:18.211204] bdev_raid_rpc.c: 
302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:22:57.931 [2024-12-06 21:45:18.211211] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:22:57.931 [2024-12-06 21:45:18.211222] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:22:57.931 21:45:18 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:22:57.931 [2024-12-06 21:45:18.419855] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:22:57.931 BaseBdev1 00:22:58.190 21:45:18 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:22:58.190 21:45:18 -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:22:58.190 21:45:18 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:22:58.190 21:45:18 -- common/autotest_common.sh@899 -- # local i 00:22:58.190 21:45:18 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:22:58.190 21:45:18 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:22:58.190 21:45:18 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:22:58.190 21:45:18 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:22:58.483 [ 00:22:58.483 { 00:22:58.483 "name": "BaseBdev1", 00:22:58.483 "aliases": [ 00:22:58.483 "7185aec3-b36a-4ec8-a128-b800d96123c5" 00:22:58.483 ], 00:22:58.483 "product_name": "Malloc disk", 00:22:58.483 "block_size": 512, 00:22:58.483 "num_blocks": 65536, 00:22:58.483 "uuid": "7185aec3-b36a-4ec8-a128-b800d96123c5", 00:22:58.483 "assigned_rate_limits": { 00:22:58.483 "rw_ios_per_sec": 0, 00:22:58.483 "rw_mbytes_per_sec": 0, 00:22:58.483 "r_mbytes_per_sec": 0, 00:22:58.483 "w_mbytes_per_sec": 0 00:22:58.483 }, 00:22:58.483 "claimed": true, 00:22:58.483 "claim_type": "exclusive_write", 00:22:58.483 "zoned": false, 00:22:58.483 "supported_io_types": { 00:22:58.483 "read": true, 00:22:58.483 "write": true, 00:22:58.483 "unmap": true, 00:22:58.483 "write_zeroes": true, 00:22:58.483 "flush": true, 00:22:58.483 "reset": true, 00:22:58.483 "compare": false, 00:22:58.483 "compare_and_write": false, 00:22:58.483 "abort": true, 00:22:58.483 "nvme_admin": false, 00:22:58.483 "nvme_io": false 00:22:58.483 }, 00:22:58.483 "memory_domains": [ 00:22:58.483 { 00:22:58.483 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:58.483 "dma_device_type": 2 00:22:58.483 } 00:22:58.483 ], 00:22:58.483 "driver_specific": {} 00:22:58.483 } 00:22:58.483 ] 00:22:58.483 21:45:18 -- common/autotest_common.sh@905 -- # return 0 00:22:58.483 21:45:18 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:22:58.483 21:45:18 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:22:58.483 21:45:18 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:22:58.483 21:45:18 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:22:58.483 21:45:18 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:22:58.483 21:45:18 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:22:58.483 21:45:18 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:22:58.483 21:45:18 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:22:58.483 21:45:18 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:22:58.483 21:45:18 -- bdev/bdev_raid.sh@125 -- # local 
tmp 00:22:58.483 21:45:18 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:58.483 21:45:18 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:58.741 21:45:19 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:22:58.741 "name": "Existed_Raid", 00:22:58.741 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:58.741 "strip_size_kb": 64, 00:22:58.741 "state": "configuring", 00:22:58.741 "raid_level": "raid5f", 00:22:58.741 "superblock": false, 00:22:58.741 "num_base_bdevs": 4, 00:22:58.741 "num_base_bdevs_discovered": 1, 00:22:58.741 "num_base_bdevs_operational": 4, 00:22:58.741 "base_bdevs_list": [ 00:22:58.741 { 00:22:58.741 "name": "BaseBdev1", 00:22:58.741 "uuid": "7185aec3-b36a-4ec8-a128-b800d96123c5", 00:22:58.741 "is_configured": true, 00:22:58.741 "data_offset": 0, 00:22:58.741 "data_size": 65536 00:22:58.741 }, 00:22:58.741 { 00:22:58.741 "name": "BaseBdev2", 00:22:58.741 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:58.741 "is_configured": false, 00:22:58.741 "data_offset": 0, 00:22:58.741 "data_size": 0 00:22:58.741 }, 00:22:58.741 { 00:22:58.741 "name": "BaseBdev3", 00:22:58.741 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:58.741 "is_configured": false, 00:22:58.741 "data_offset": 0, 00:22:58.741 "data_size": 0 00:22:58.741 }, 00:22:58.741 { 00:22:58.741 "name": "BaseBdev4", 00:22:58.741 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:58.741 "is_configured": false, 00:22:58.741 "data_offset": 0, 00:22:58.741 "data_size": 0 00:22:58.741 } 00:22:58.741 ] 00:22:58.741 }' 00:22:58.741 21:45:19 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:22:58.741 21:45:19 -- common/autotest_common.sh@10 -- # set +x 00:22:59.000 21:45:19 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:22:59.259 [2024-12-06 21:45:19.532208] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:22:59.259 [2024-12-06 21:45:19.532278] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000006680 name Existed_Raid, state configuring 00:22:59.259 21:45:19 -- bdev/bdev_raid.sh@244 -- # '[' false = true ']' 00:22:59.259 21:45:19 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:22:59.259 [2024-12-06 21:45:19.720355] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:22:59.259 [2024-12-06 21:45:19.722111] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:22:59.259 [2024-12-06 21:45:19.722152] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:22:59.259 [2024-12-06 21:45:19.722164] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:22:59.259 [2024-12-06 21:45:19.722177] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:22:59.259 [2024-12-06 21:45:19.722185] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:22:59.259 [2024-12-06 21:45:19.722198] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:22:59.259 21:45:19 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:22:59.259 21:45:19 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:22:59.259 
21:45:19 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:22:59.259 21:45:19 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:22:59.259 21:45:19 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:22:59.259 21:45:19 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:22:59.259 21:45:19 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:22:59.259 21:45:19 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:22:59.259 21:45:19 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:22:59.259 21:45:19 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:22:59.259 21:45:19 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:22:59.259 21:45:19 -- bdev/bdev_raid.sh@125 -- # local tmp 00:22:59.259 21:45:19 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:59.259 21:45:19 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:59.518 21:45:19 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:22:59.518 "name": "Existed_Raid", 00:22:59.518 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:59.518 "strip_size_kb": 64, 00:22:59.518 "state": "configuring", 00:22:59.518 "raid_level": "raid5f", 00:22:59.518 "superblock": false, 00:22:59.518 "num_base_bdevs": 4, 00:22:59.518 "num_base_bdevs_discovered": 1, 00:22:59.518 "num_base_bdevs_operational": 4, 00:22:59.518 "base_bdevs_list": [ 00:22:59.518 { 00:22:59.518 "name": "BaseBdev1", 00:22:59.518 "uuid": "7185aec3-b36a-4ec8-a128-b800d96123c5", 00:22:59.518 "is_configured": true, 00:22:59.518 "data_offset": 0, 00:22:59.518 "data_size": 65536 00:22:59.518 }, 00:22:59.518 { 00:22:59.518 "name": "BaseBdev2", 00:22:59.518 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:59.518 "is_configured": false, 00:22:59.518 "data_offset": 0, 00:22:59.518 "data_size": 0 00:22:59.518 }, 00:22:59.518 { 00:22:59.518 "name": "BaseBdev3", 00:22:59.518 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:59.518 "is_configured": false, 00:22:59.518 "data_offset": 0, 00:22:59.518 "data_size": 0 00:22:59.518 }, 00:22:59.518 { 00:22:59.518 "name": "BaseBdev4", 00:22:59.518 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:59.518 "is_configured": false, 00:22:59.518 "data_offset": 0, 00:22:59.518 "data_size": 0 00:22:59.518 } 00:22:59.518 ] 00:22:59.518 }' 00:22:59.518 21:45:19 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:22:59.518 21:45:19 -- common/autotest_common.sh@10 -- # set +x 00:22:59.775 21:45:20 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:23:00.033 [2024-12-06 21:45:20.377585] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:23:00.033 BaseBdev2 00:23:00.033 21:45:20 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:23:00.033 21:45:20 -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:23:00.033 21:45:20 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:23:00.033 21:45:20 -- common/autotest_common.sh@899 -- # local i 00:23:00.033 21:45:20 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:23:00.033 21:45:20 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:23:00.033 21:45:20 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:23:00.291 21:45:20 -- common/autotest_common.sh@904 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:23:00.291 [ 00:23:00.291 { 00:23:00.291 "name": "BaseBdev2", 00:23:00.291 "aliases": [ 00:23:00.291 "32363f1d-65b7-4f5f-9b83-783b2bc4a0df" 00:23:00.291 ], 00:23:00.291 "product_name": "Malloc disk", 00:23:00.291 "block_size": 512, 00:23:00.291 "num_blocks": 65536, 00:23:00.291 "uuid": "32363f1d-65b7-4f5f-9b83-783b2bc4a0df", 00:23:00.292 "assigned_rate_limits": { 00:23:00.292 "rw_ios_per_sec": 0, 00:23:00.292 "rw_mbytes_per_sec": 0, 00:23:00.292 "r_mbytes_per_sec": 0, 00:23:00.292 "w_mbytes_per_sec": 0 00:23:00.292 }, 00:23:00.292 "claimed": true, 00:23:00.292 "claim_type": "exclusive_write", 00:23:00.292 "zoned": false, 00:23:00.292 "supported_io_types": { 00:23:00.292 "read": true, 00:23:00.292 "write": true, 00:23:00.292 "unmap": true, 00:23:00.292 "write_zeroes": true, 00:23:00.292 "flush": true, 00:23:00.292 "reset": true, 00:23:00.292 "compare": false, 00:23:00.292 "compare_and_write": false, 00:23:00.292 "abort": true, 00:23:00.292 "nvme_admin": false, 00:23:00.292 "nvme_io": false 00:23:00.292 }, 00:23:00.292 "memory_domains": [ 00:23:00.292 { 00:23:00.292 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:00.292 "dma_device_type": 2 00:23:00.292 } 00:23:00.292 ], 00:23:00.292 "driver_specific": {} 00:23:00.292 } 00:23:00.292 ] 00:23:00.292 21:45:20 -- common/autotest_common.sh@905 -- # return 0 00:23:00.292 21:45:20 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:23:00.292 21:45:20 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:23:00.292 21:45:20 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:23:00.292 21:45:20 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:23:00.292 21:45:20 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:23:00.292 21:45:20 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:23:00.292 21:45:20 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:23:00.292 21:45:20 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:23:00.292 21:45:20 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:23:00.292 21:45:20 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:23:00.292 21:45:20 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:23:00.292 21:45:20 -- bdev/bdev_raid.sh@125 -- # local tmp 00:23:00.292 21:45:20 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:00.292 21:45:20 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:00.551 21:45:20 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:23:00.551 "name": "Existed_Raid", 00:23:00.551 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:00.551 "strip_size_kb": 64, 00:23:00.551 "state": "configuring", 00:23:00.551 "raid_level": "raid5f", 00:23:00.551 "superblock": false, 00:23:00.551 "num_base_bdevs": 4, 00:23:00.551 "num_base_bdevs_discovered": 2, 00:23:00.551 "num_base_bdevs_operational": 4, 00:23:00.551 "base_bdevs_list": [ 00:23:00.551 { 00:23:00.551 "name": "BaseBdev1", 00:23:00.551 "uuid": "7185aec3-b36a-4ec8-a128-b800d96123c5", 00:23:00.551 "is_configured": true, 00:23:00.551 "data_offset": 0, 00:23:00.551 "data_size": 65536 00:23:00.551 }, 00:23:00.551 { 00:23:00.551 "name": "BaseBdev2", 00:23:00.551 "uuid": "32363f1d-65b7-4f5f-9b83-783b2bc4a0df", 00:23:00.551 "is_configured": true, 00:23:00.551 "data_offset": 0, 00:23:00.551 "data_size": 65536 00:23:00.551 }, 
00:23:00.551 { 00:23:00.551 "name": "BaseBdev3", 00:23:00.551 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:00.551 "is_configured": false, 00:23:00.551 "data_offset": 0, 00:23:00.551 "data_size": 0 00:23:00.551 }, 00:23:00.551 { 00:23:00.551 "name": "BaseBdev4", 00:23:00.551 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:00.551 "is_configured": false, 00:23:00.551 "data_offset": 0, 00:23:00.551 "data_size": 0 00:23:00.551 } 00:23:00.551 ] 00:23:00.551 }' 00:23:00.551 21:45:20 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:23:00.551 21:45:20 -- common/autotest_common.sh@10 -- # set +x 00:23:00.810 21:45:21 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:23:01.069 [2024-12-06 21:45:21.502588] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:23:01.069 BaseBdev3 00:23:01.069 21:45:21 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3 00:23:01.069 21:45:21 -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:23:01.069 21:45:21 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:23:01.069 21:45:21 -- common/autotest_common.sh@899 -- # local i 00:23:01.069 21:45:21 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:23:01.069 21:45:21 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:23:01.069 21:45:21 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:23:01.328 21:45:21 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:23:01.586 [ 00:23:01.586 { 00:23:01.586 "name": "BaseBdev3", 00:23:01.586 "aliases": [ 00:23:01.586 "ad8d4cd4-9bfa-42c4-870c-0b1ceb7e4017" 00:23:01.586 ], 00:23:01.586 "product_name": "Malloc disk", 00:23:01.586 "block_size": 512, 00:23:01.586 "num_blocks": 65536, 00:23:01.586 "uuid": "ad8d4cd4-9bfa-42c4-870c-0b1ceb7e4017", 00:23:01.586 "assigned_rate_limits": { 00:23:01.586 "rw_ios_per_sec": 0, 00:23:01.586 "rw_mbytes_per_sec": 0, 00:23:01.586 "r_mbytes_per_sec": 0, 00:23:01.586 "w_mbytes_per_sec": 0 00:23:01.586 }, 00:23:01.586 "claimed": true, 00:23:01.586 "claim_type": "exclusive_write", 00:23:01.586 "zoned": false, 00:23:01.586 "supported_io_types": { 00:23:01.586 "read": true, 00:23:01.586 "write": true, 00:23:01.586 "unmap": true, 00:23:01.586 "write_zeroes": true, 00:23:01.586 "flush": true, 00:23:01.586 "reset": true, 00:23:01.586 "compare": false, 00:23:01.586 "compare_and_write": false, 00:23:01.586 "abort": true, 00:23:01.586 "nvme_admin": false, 00:23:01.586 "nvme_io": false 00:23:01.586 }, 00:23:01.586 "memory_domains": [ 00:23:01.586 { 00:23:01.586 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:01.586 "dma_device_type": 2 00:23:01.586 } 00:23:01.586 ], 00:23:01.586 "driver_specific": {} 00:23:01.587 } 00:23:01.587 ] 00:23:01.587 21:45:21 -- common/autotest_common.sh@905 -- # return 0 00:23:01.587 21:45:21 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:23:01.587 21:45:21 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:23:01.587 21:45:21 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:23:01.587 21:45:21 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:23:01.587 21:45:21 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:23:01.587 21:45:21 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:23:01.587 21:45:21 -- 
bdev/bdev_raid.sh@120 -- # local strip_size=64 00:23:01.587 21:45:21 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:23:01.587 21:45:21 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:23:01.587 21:45:21 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:23:01.587 21:45:21 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:23:01.587 21:45:21 -- bdev/bdev_raid.sh@125 -- # local tmp 00:23:01.587 21:45:21 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:01.587 21:45:21 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:01.846 21:45:22 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:23:01.846 "name": "Existed_Raid", 00:23:01.846 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:01.846 "strip_size_kb": 64, 00:23:01.846 "state": "configuring", 00:23:01.846 "raid_level": "raid5f", 00:23:01.846 "superblock": false, 00:23:01.846 "num_base_bdevs": 4, 00:23:01.846 "num_base_bdevs_discovered": 3, 00:23:01.846 "num_base_bdevs_operational": 4, 00:23:01.846 "base_bdevs_list": [ 00:23:01.846 { 00:23:01.846 "name": "BaseBdev1", 00:23:01.846 "uuid": "7185aec3-b36a-4ec8-a128-b800d96123c5", 00:23:01.846 "is_configured": true, 00:23:01.846 "data_offset": 0, 00:23:01.846 "data_size": 65536 00:23:01.846 }, 00:23:01.846 { 00:23:01.846 "name": "BaseBdev2", 00:23:01.846 "uuid": "32363f1d-65b7-4f5f-9b83-783b2bc4a0df", 00:23:01.846 "is_configured": true, 00:23:01.846 "data_offset": 0, 00:23:01.846 "data_size": 65536 00:23:01.846 }, 00:23:01.846 { 00:23:01.846 "name": "BaseBdev3", 00:23:01.846 "uuid": "ad8d4cd4-9bfa-42c4-870c-0b1ceb7e4017", 00:23:01.846 "is_configured": true, 00:23:01.846 "data_offset": 0, 00:23:01.846 "data_size": 65536 00:23:01.846 }, 00:23:01.846 { 00:23:01.846 "name": "BaseBdev4", 00:23:01.846 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:01.846 "is_configured": false, 00:23:01.846 "data_offset": 0, 00:23:01.846 "data_size": 0 00:23:01.846 } 00:23:01.846 ] 00:23:01.846 }' 00:23:01.846 21:45:22 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:23:01.846 21:45:22 -- common/autotest_common.sh@10 -- # set +x 00:23:02.105 21:45:22 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:23:02.365 [2024-12-06 21:45:22.666782] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:23:02.365 [2024-12-06 21:45:22.666844] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x516000006f80 00:23:02.365 [2024-12-06 21:45:22.666864] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:23:02.365 [2024-12-06 21:45:22.666973] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000005790 00:23:02.365 [2024-12-06 21:45:22.672696] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x516000006f80 00:23:02.365 [2024-12-06 21:45:22.672727] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x516000006f80 00:23:02.365 [2024-12-06 21:45:22.673011] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:02.365 BaseBdev4 00:23:02.365 21:45:22 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev4 00:23:02.365 21:45:22 -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev4 00:23:02.365 21:45:22 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:23:02.365 21:45:22 -- 
common/autotest_common.sh@899 -- # local i 00:23:02.365 21:45:22 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:23:02.365 21:45:22 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:23:02.365 21:45:22 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:23:02.624 21:45:22 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:23:02.624 [ 00:23:02.624 { 00:23:02.624 "name": "BaseBdev4", 00:23:02.624 "aliases": [ 00:23:02.624 "869780e6-cc34-425e-8a84-344df0a4aeb7" 00:23:02.624 ], 00:23:02.624 "product_name": "Malloc disk", 00:23:02.624 "block_size": 512, 00:23:02.624 "num_blocks": 65536, 00:23:02.624 "uuid": "869780e6-cc34-425e-8a84-344df0a4aeb7", 00:23:02.624 "assigned_rate_limits": { 00:23:02.624 "rw_ios_per_sec": 0, 00:23:02.624 "rw_mbytes_per_sec": 0, 00:23:02.624 "r_mbytes_per_sec": 0, 00:23:02.624 "w_mbytes_per_sec": 0 00:23:02.624 }, 00:23:02.624 "claimed": true, 00:23:02.624 "claim_type": "exclusive_write", 00:23:02.624 "zoned": false, 00:23:02.624 "supported_io_types": { 00:23:02.624 "read": true, 00:23:02.624 "write": true, 00:23:02.624 "unmap": true, 00:23:02.624 "write_zeroes": true, 00:23:02.624 "flush": true, 00:23:02.624 "reset": true, 00:23:02.624 "compare": false, 00:23:02.624 "compare_and_write": false, 00:23:02.624 "abort": true, 00:23:02.624 "nvme_admin": false, 00:23:02.624 "nvme_io": false 00:23:02.624 }, 00:23:02.624 "memory_domains": [ 00:23:02.624 { 00:23:02.624 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:02.624 "dma_device_type": 2 00:23:02.624 } 00:23:02.624 ], 00:23:02.624 "driver_specific": {} 00:23:02.624 } 00:23:02.624 ] 00:23:02.624 21:45:23 -- common/autotest_common.sh@905 -- # return 0 00:23:02.624 21:45:23 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:23:02.624 21:45:23 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:23:02.624 21:45:23 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:23:02.624 21:45:23 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:23:02.624 21:45:23 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:23:02.624 21:45:23 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:23:02.624 21:45:23 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:23:02.624 21:45:23 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:23:02.624 21:45:23 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:23:02.624 21:45:23 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:23:02.624 21:45:23 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:23:02.624 21:45:23 -- bdev/bdev_raid.sh@125 -- # local tmp 00:23:02.624 21:45:23 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:02.624 21:45:23 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:02.883 21:45:23 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:23:02.883 "name": "Existed_Raid", 00:23:02.883 "uuid": "c29fb69f-46df-41f4-b979-3719d91a22a0", 00:23:02.883 "strip_size_kb": 64, 00:23:02.883 "state": "online", 00:23:02.883 "raid_level": "raid5f", 00:23:02.883 "superblock": false, 00:23:02.883 "num_base_bdevs": 4, 00:23:02.883 "num_base_bdevs_discovered": 4, 00:23:02.883 "num_base_bdevs_operational": 4, 00:23:02.883 "base_bdevs_list": [ 00:23:02.883 { 00:23:02.884 "name": "BaseBdev1", 00:23:02.884 "uuid": 
"7185aec3-b36a-4ec8-a128-b800d96123c5", 00:23:02.884 "is_configured": true, 00:23:02.884 "data_offset": 0, 00:23:02.884 "data_size": 65536 00:23:02.884 }, 00:23:02.884 { 00:23:02.884 "name": "BaseBdev2", 00:23:02.884 "uuid": "32363f1d-65b7-4f5f-9b83-783b2bc4a0df", 00:23:02.884 "is_configured": true, 00:23:02.884 "data_offset": 0, 00:23:02.884 "data_size": 65536 00:23:02.884 }, 00:23:02.884 { 00:23:02.884 "name": "BaseBdev3", 00:23:02.884 "uuid": "ad8d4cd4-9bfa-42c4-870c-0b1ceb7e4017", 00:23:02.884 "is_configured": true, 00:23:02.884 "data_offset": 0, 00:23:02.884 "data_size": 65536 00:23:02.884 }, 00:23:02.884 { 00:23:02.884 "name": "BaseBdev4", 00:23:02.884 "uuid": "869780e6-cc34-425e-8a84-344df0a4aeb7", 00:23:02.884 "is_configured": true, 00:23:02.884 "data_offset": 0, 00:23:02.884 "data_size": 65536 00:23:02.884 } 00:23:02.884 ] 00:23:02.884 }' 00:23:02.884 21:45:23 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:23:02.884 21:45:23 -- common/autotest_common.sh@10 -- # set +x 00:23:03.142 21:45:23 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:23:03.401 [2024-12-06 21:45:23.779145] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:23:03.401 21:45:23 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:23:03.401 21:45:23 -- bdev/bdev_raid.sh@264 -- # has_redundancy raid5f 00:23:03.401 21:45:23 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:23:03.401 21:45:23 -- bdev/bdev_raid.sh@196 -- # return 0 00:23:03.401 21:45:23 -- bdev/bdev_raid.sh@267 -- # expected_state=online 00:23:03.401 21:45:23 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:23:03.401 21:45:23 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:23:03.401 21:45:23 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:23:03.401 21:45:23 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:23:03.401 21:45:23 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:23:03.401 21:45:23 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:23:03.401 21:45:23 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:23:03.401 21:45:23 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:23:03.401 21:45:23 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:23:03.401 21:45:23 -- bdev/bdev_raid.sh@125 -- # local tmp 00:23:03.401 21:45:23 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:03.401 21:45:23 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:03.660 21:45:24 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:23:03.660 "name": "Existed_Raid", 00:23:03.660 "uuid": "c29fb69f-46df-41f4-b979-3719d91a22a0", 00:23:03.660 "strip_size_kb": 64, 00:23:03.660 "state": "online", 00:23:03.660 "raid_level": "raid5f", 00:23:03.660 "superblock": false, 00:23:03.660 "num_base_bdevs": 4, 00:23:03.660 "num_base_bdevs_discovered": 3, 00:23:03.660 "num_base_bdevs_operational": 3, 00:23:03.660 "base_bdevs_list": [ 00:23:03.660 { 00:23:03.660 "name": null, 00:23:03.660 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:03.660 "is_configured": false, 00:23:03.660 "data_offset": 0, 00:23:03.661 "data_size": 65536 00:23:03.661 }, 00:23:03.661 { 00:23:03.661 "name": "BaseBdev2", 00:23:03.661 "uuid": "32363f1d-65b7-4f5f-9b83-783b2bc4a0df", 00:23:03.661 "is_configured": true, 00:23:03.661 "data_offset": 0, 00:23:03.661 "data_size": 65536 00:23:03.661 }, 
00:23:03.661 { 00:23:03.661 "name": "BaseBdev3", 00:23:03.661 "uuid": "ad8d4cd4-9bfa-42c4-870c-0b1ceb7e4017", 00:23:03.661 "is_configured": true, 00:23:03.661 "data_offset": 0, 00:23:03.661 "data_size": 65536 00:23:03.661 }, 00:23:03.661 { 00:23:03.661 "name": "BaseBdev4", 00:23:03.661 "uuid": "869780e6-cc34-425e-8a84-344df0a4aeb7", 00:23:03.661 "is_configured": true, 00:23:03.661 "data_offset": 0, 00:23:03.661 "data_size": 65536 00:23:03.661 } 00:23:03.661 ] 00:23:03.661 }' 00:23:03.661 21:45:24 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:23:03.661 21:45:24 -- common/autotest_common.sh@10 -- # set +x 00:23:03.920 21:45:24 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:23:03.920 21:45:24 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:23:03.920 21:45:24 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:03.920 21:45:24 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:23:04.178 21:45:24 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:23:04.178 21:45:24 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:23:04.178 21:45:24 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:23:04.438 [2024-12-06 21:45:24.721002] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:23:04.438 [2024-12-06 21:45:24.721054] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:23:04.438 [2024-12-06 21:45:24.721110] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:23:04.438 21:45:24 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:23:04.438 21:45:24 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:23:04.438 21:45:24 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:23:04.438 21:45:24 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:04.708 21:45:24 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:23:04.708 21:45:24 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:23:04.708 21:45:24 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:23:04.708 [2024-12-06 21:45:25.144042] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:23:04.970 21:45:25 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:23:04.970 21:45:25 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:23:04.970 21:45:25 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:04.970 21:45:25 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:23:04.970 21:45:25 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:23:04.970 21:45:25 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:23:04.970 21:45:25 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:23:05.229 [2024-12-06 21:45:25.562529] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:23:05.229 [2024-12-06 21:45:25.562606] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000006f80 name Existed_Raid, state offline 00:23:05.229 21:45:25 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:23:05.229 21:45:25 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:23:05.229 21:45:25 -- bdev/bdev_raid.sh@281 
-- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:05.229 21:45:25 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:23:05.487 21:45:25 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:23:05.487 21:45:25 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:23:05.487 21:45:25 -- bdev/bdev_raid.sh@287 -- # killprocess 84390 00:23:05.487 21:45:25 -- common/autotest_common.sh@936 -- # '[' -z 84390 ']' 00:23:05.487 21:45:25 -- common/autotest_common.sh@940 -- # kill -0 84390 00:23:05.487 21:45:25 -- common/autotest_common.sh@941 -- # uname 00:23:05.487 21:45:25 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:23:05.487 21:45:25 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 84390 00:23:05.487 21:45:25 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:23:05.487 killing process with pid 84390 00:23:05.487 21:45:25 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:23:05.487 21:45:25 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 84390' 00:23:05.487 21:45:25 -- common/autotest_common.sh@955 -- # kill 84390 00:23:05.487 [2024-12-06 21:45:25.866490] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:23:05.487 21:45:25 -- common/autotest_common.sh@960 -- # wait 84390 00:23:05.487 [2024-12-06 21:45:25.866614] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:23:06.423 21:45:26 -- bdev/bdev_raid.sh@289 -- # return 0 00:23:06.423 00:23:06.423 real 0m10.676s 00:23:06.423 user 0m17.846s 00:23:06.423 sys 0m1.609s 00:23:06.423 21:45:26 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:23:06.423 ************************************ 00:23:06.423 END TEST raid5f_state_function_test 00:23:06.423 21:45:26 -- common/autotest_common.sh@10 -- # set +x 00:23:06.423 ************************************ 00:23:06.423 21:45:26 -- bdev/bdev_raid.sh@745 -- # run_test raid5f_state_function_test_sb raid_state_function_test raid5f 4 true 00:23:06.423 21:45:26 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:23:06.423 21:45:26 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:23:06.423 21:45:26 -- common/autotest_common.sh@10 -- # set +x 00:23:06.423 ************************************ 00:23:06.423 START TEST raid5f_state_function_test_sb 00:23:06.423 ************************************ 00:23:06.423 21:45:26 -- common/autotest_common.sh@1114 -- # raid_state_function_test raid5f 4 true 00:23:06.423 21:45:26 -- bdev/bdev_raid.sh@202 -- # local raid_level=raid5f 00:23:06.423 21:45:26 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=4 00:23:06.423 21:45:26 -- bdev/bdev_raid.sh@204 -- # local superblock=true 00:23:06.423 21:45:26 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:23:06.423 21:45:26 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:23:06.423 21:45:26 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:23:06.423 21:45:26 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev1 00:23:06.423 21:45:26 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:23:06.423 21:45:26 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:23:06.423 21:45:26 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev2 00:23:06.423 21:45:26 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:23:06.423 21:45:26 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:23:06.423 21:45:26 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev3 00:23:06.423 21:45:26 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:23:06.423 21:45:26 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:23:06.423 
21:45:26 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev4 00:23:06.423 21:45:26 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:23:06.423 21:45:26 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:23:06.423 21:45:26 -- bdev/bdev_raid.sh@206 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:23:06.423 21:45:26 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:23:06.423 21:45:26 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:23:06.423 21:45:26 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:23:06.423 21:45:26 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:23:06.423 21:45:26 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:23:06.423 21:45:26 -- bdev/bdev_raid.sh@212 -- # '[' raid5f '!=' raid1 ']' 00:23:06.423 21:45:26 -- bdev/bdev_raid.sh@213 -- # strip_size=64 00:23:06.423 21:45:26 -- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64' 00:23:06.423 21:45:26 -- bdev/bdev_raid.sh@219 -- # '[' true = true ']' 00:23:06.423 21:45:26 -- bdev/bdev_raid.sh@220 -- # superblock_create_arg=-s 00:23:06.423 21:45:26 -- bdev/bdev_raid.sh@226 -- # raid_pid=84763 00:23:06.423 Process raid pid: 84763 00:23:06.423 21:45:26 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 84763' 00:23:06.423 21:45:26 -- bdev/bdev_raid.sh@228 -- # waitforlisten 84763 /var/tmp/spdk-raid.sock 00:23:06.423 21:45:26 -- common/autotest_common.sh@829 -- # '[' -z 84763 ']' 00:23:06.423 21:45:26 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:23:06.423 21:45:26 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:23:06.423 21:45:26 -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:06.423 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:23:06.423 21:45:26 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:23:06.423 21:45:26 -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:06.423 21:45:26 -- common/autotest_common.sh@10 -- # set +x 00:23:06.423 [2024-12-06 21:45:26.910862] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
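The test prologue traced above boils down to: launch a bare bdev_svc app with raid debug logging, wait for its RPC socket, then drive it over that socket. A condensed sketch with the paths and flags taken from the log; note the suite deliberately issues bdev_raid_create before any base bdev exists (leaving the raid in the "configuring" state), whereas this sketch creates illustrative malloc base bdevs first so the create completes in one shot:

    /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc \
        -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid &
    raid_pid=$!
    # block until /var/tmp/spdk-raid.sock accepts RPCs (the suite's waitforlisten)
    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    for i in 1 2 3 4; do
        $rpc bdev_malloc_create 32 512 -b "BaseBdev$i"   # 32 MiB, 512 B blocks
    done
    # -z 64 = 64 KiB strip size; -s = write a superblock (the _sb test variant)
    $rpc bdev_raid_create -z 64 -s -r raid5f \
        -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid

The startup banner and EAL parameter dump that follow are emitted by bdev_svc itself as it initializes DPDK.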
00:23:06.423 [2024-12-06 21:45:26.911026] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:06.683 [2024-12-06 21:45:27.079366] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:06.942 [2024-12-06 21:45:27.233359] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:06.942 [2024-12-06 21:45:27.382601] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:23:07.511 21:45:27 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:07.511 21:45:27 -- common/autotest_common.sh@862 -- # return 0 00:23:07.511 21:45:27 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:23:07.511 [2024-12-06 21:45:27.942502] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:23:07.511 [2024-12-06 21:45:27.942570] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:23:07.511 [2024-12-06 21:45:27.942583] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:23:07.511 [2024-12-06 21:45:27.942597] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:23:07.511 [2024-12-06 21:45:27.942605] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:23:07.511 [2024-12-06 21:45:27.942616] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:23:07.511 [2024-12-06 21:45:27.942624] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:23:07.511 [2024-12-06 21:45:27.942651] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:23:07.511 21:45:27 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:23:07.511 21:45:27 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:23:07.511 21:45:27 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:23:07.511 21:45:27 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:23:07.511 21:45:27 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:23:07.511 21:45:27 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:23:07.511 21:45:27 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:23:07.511 21:45:27 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:23:07.511 21:45:27 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:23:07.511 21:45:27 -- bdev/bdev_raid.sh@125 -- # local tmp 00:23:07.511 21:45:27 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:07.511 21:45:27 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:07.769 21:45:28 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:23:07.769 "name": "Existed_Raid", 00:23:07.769 "uuid": "9c40f2b0-d256-40df-a1b5-3132885eb28e", 00:23:07.769 "strip_size_kb": 64, 00:23:07.769 "state": "configuring", 00:23:07.769 "raid_level": "raid5f", 00:23:07.769 "superblock": true, 00:23:07.769 "num_base_bdevs": 4, 00:23:07.769 "num_base_bdevs_discovered": 0, 00:23:07.769 "num_base_bdevs_operational": 4, 00:23:07.769 "base_bdevs_list": [ 00:23:07.769 { 
00:23:07.769 "name": "BaseBdev1", 00:23:07.770 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:07.770 "is_configured": false, 00:23:07.770 "data_offset": 0, 00:23:07.770 "data_size": 0 00:23:07.770 }, 00:23:07.770 { 00:23:07.770 "name": "BaseBdev2", 00:23:07.770 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:07.770 "is_configured": false, 00:23:07.770 "data_offset": 0, 00:23:07.770 "data_size": 0 00:23:07.770 }, 00:23:07.770 { 00:23:07.770 "name": "BaseBdev3", 00:23:07.770 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:07.770 "is_configured": false, 00:23:07.770 "data_offset": 0, 00:23:07.770 "data_size": 0 00:23:07.770 }, 00:23:07.770 { 00:23:07.770 "name": "BaseBdev4", 00:23:07.770 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:07.770 "is_configured": false, 00:23:07.770 "data_offset": 0, 00:23:07.770 "data_size": 0 00:23:07.770 } 00:23:07.770 ] 00:23:07.770 }' 00:23:07.770 21:45:28 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:23:07.770 21:45:28 -- common/autotest_common.sh@10 -- # set +x 00:23:08.028 21:45:28 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:23:08.288 [2024-12-06 21:45:28.714536] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:23:08.288 [2024-12-06 21:45:28.714595] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000006380 name Existed_Raid, state configuring 00:23:08.288 21:45:28 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:23:08.559 [2024-12-06 21:45:28.898666] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:23:08.559 [2024-12-06 21:45:28.898749] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:23:08.559 [2024-12-06 21:45:28.898763] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:23:08.559 [2024-12-06 21:45:28.898776] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:23:08.559 [2024-12-06 21:45:28.898784] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:23:08.559 [2024-12-06 21:45:28.898796] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:23:08.559 [2024-12-06 21:45:28.898803] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:23:08.560 [2024-12-06 21:45:28.898815] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:23:08.560 21:45:28 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:23:08.827 [2024-12-06 21:45:29.107922] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:23:08.827 BaseBdev1 00:23:08.827 21:45:29 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:23:08.827 21:45:29 -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:23:08.827 21:45:29 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:23:08.827 21:45:29 -- common/autotest_common.sh@899 -- # local i 00:23:08.827 21:45:29 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:23:08.827 21:45:29 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:23:08.827 21:45:29 -- common/autotest_common.sh@902 
-- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:23:08.827 21:45:29 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:23:09.085 [ 00:23:09.085 { 00:23:09.085 "name": "BaseBdev1", 00:23:09.085 "aliases": [ 00:23:09.085 "f9d9d693-93d0-416f-bf39-ef546ae59825" 00:23:09.085 ], 00:23:09.085 "product_name": "Malloc disk", 00:23:09.085 "block_size": 512, 00:23:09.085 "num_blocks": 65536, 00:23:09.085 "uuid": "f9d9d693-93d0-416f-bf39-ef546ae59825", 00:23:09.085 "assigned_rate_limits": { 00:23:09.085 "rw_ios_per_sec": 0, 00:23:09.085 "rw_mbytes_per_sec": 0, 00:23:09.085 "r_mbytes_per_sec": 0, 00:23:09.085 "w_mbytes_per_sec": 0 00:23:09.085 }, 00:23:09.085 "claimed": true, 00:23:09.085 "claim_type": "exclusive_write", 00:23:09.085 "zoned": false, 00:23:09.085 "supported_io_types": { 00:23:09.085 "read": true, 00:23:09.085 "write": true, 00:23:09.085 "unmap": true, 00:23:09.085 "write_zeroes": true, 00:23:09.085 "flush": true, 00:23:09.085 "reset": true, 00:23:09.085 "compare": false, 00:23:09.085 "compare_and_write": false, 00:23:09.085 "abort": true, 00:23:09.085 "nvme_admin": false, 00:23:09.085 "nvme_io": false 00:23:09.085 }, 00:23:09.085 "memory_domains": [ 00:23:09.085 { 00:23:09.085 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:09.085 "dma_device_type": 2 00:23:09.085 } 00:23:09.085 ], 00:23:09.085 "driver_specific": {} 00:23:09.085 } 00:23:09.085 ] 00:23:09.085 21:45:29 -- common/autotest_common.sh@905 -- # return 0 00:23:09.085 21:45:29 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:23:09.085 21:45:29 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:23:09.085 21:45:29 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:23:09.085 21:45:29 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:23:09.085 21:45:29 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:23:09.085 21:45:29 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:23:09.085 21:45:29 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:23:09.085 21:45:29 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:23:09.085 21:45:29 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:23:09.085 21:45:29 -- bdev/bdev_raid.sh@125 -- # local tmp 00:23:09.085 21:45:29 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:09.085 21:45:29 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:09.342 21:45:29 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:23:09.342 "name": "Existed_Raid", 00:23:09.342 "uuid": "235e9347-f87b-4582-b151-d2d5bd80888b", 00:23:09.342 "strip_size_kb": 64, 00:23:09.342 "state": "configuring", 00:23:09.342 "raid_level": "raid5f", 00:23:09.342 "superblock": true, 00:23:09.342 "num_base_bdevs": 4, 00:23:09.342 "num_base_bdevs_discovered": 1, 00:23:09.342 "num_base_bdevs_operational": 4, 00:23:09.342 "base_bdevs_list": [ 00:23:09.342 { 00:23:09.342 "name": "BaseBdev1", 00:23:09.342 "uuid": "f9d9d693-93d0-416f-bf39-ef546ae59825", 00:23:09.342 "is_configured": true, 00:23:09.342 "data_offset": 2048, 00:23:09.342 "data_size": 63488 00:23:09.342 }, 00:23:09.342 { 00:23:09.342 "name": "BaseBdev2", 00:23:09.342 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:09.342 "is_configured": false, 00:23:09.342 "data_offset": 0, 00:23:09.342 "data_size": 0 
00:23:09.342 }, 00:23:09.342 { 00:23:09.342 "name": "BaseBdev3", 00:23:09.342 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:09.342 "is_configured": false, 00:23:09.342 "data_offset": 0, 00:23:09.342 "data_size": 0 00:23:09.342 }, 00:23:09.342 { 00:23:09.342 "name": "BaseBdev4", 00:23:09.342 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:09.342 "is_configured": false, 00:23:09.342 "data_offset": 0, 00:23:09.342 "data_size": 0 00:23:09.342 } 00:23:09.342 ] 00:23:09.342 }' 00:23:09.342 21:45:29 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:23:09.342 21:45:29 -- common/autotest_common.sh@10 -- # set +x 00:23:09.599 21:45:29 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:23:09.857 [2024-12-06 21:45:30.208240] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:23:09.857 [2024-12-06 21:45:30.208327] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000006680 name Existed_Raid, state configuring 00:23:09.857 21:45:30 -- bdev/bdev_raid.sh@244 -- # '[' true = true ']' 00:23:09.857 21:45:30 -- bdev/bdev_raid.sh@246 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:23:10.115 21:45:30 -- bdev/bdev_raid.sh@247 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:23:10.375 BaseBdev1 00:23:10.375 21:45:30 -- bdev/bdev_raid.sh@248 -- # waitforbdev BaseBdev1 00:23:10.375 21:45:30 -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:23:10.375 21:45:30 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:23:10.375 21:45:30 -- common/autotest_common.sh@899 -- # local i 00:23:10.375 21:45:30 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:23:10.375 21:45:30 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:23:10.375 21:45:30 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:23:10.636 21:45:30 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:23:10.636 [ 00:23:10.636 { 00:23:10.636 "name": "BaseBdev1", 00:23:10.636 "aliases": [ 00:23:10.636 "1be1463e-6f4e-41d2-94ee-385a7f40279d" 00:23:10.636 ], 00:23:10.636 "product_name": "Malloc disk", 00:23:10.636 "block_size": 512, 00:23:10.636 "num_blocks": 65536, 00:23:10.636 "uuid": "1be1463e-6f4e-41d2-94ee-385a7f40279d", 00:23:10.636 "assigned_rate_limits": { 00:23:10.636 "rw_ios_per_sec": 0, 00:23:10.636 "rw_mbytes_per_sec": 0, 00:23:10.636 "r_mbytes_per_sec": 0, 00:23:10.636 "w_mbytes_per_sec": 0 00:23:10.636 }, 00:23:10.636 "claimed": false, 00:23:10.636 "zoned": false, 00:23:10.636 "supported_io_types": { 00:23:10.636 "read": true, 00:23:10.636 "write": true, 00:23:10.636 "unmap": true, 00:23:10.636 "write_zeroes": true, 00:23:10.636 "flush": true, 00:23:10.636 "reset": true, 00:23:10.636 "compare": false, 00:23:10.636 "compare_and_write": false, 00:23:10.636 "abort": true, 00:23:10.636 "nvme_admin": false, 00:23:10.636 "nvme_io": false 00:23:10.636 }, 00:23:10.636 "memory_domains": [ 00:23:10.636 { 00:23:10.636 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:10.636 "dma_device_type": 2 00:23:10.636 } 00:23:10.636 ], 00:23:10.636 "driver_specific": {} 00:23:10.636 } 00:23:10.636 ] 00:23:10.636 21:45:31 -- common/autotest_common.sh@905 -- # return 0 00:23:10.636 21:45:31 -- 
00:23:10.636 21:45:31 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid
00:23:10.895 [2024-12-06 21:45:31.259892] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:23:10.895 [2024-12-06 21:45:31.261704] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:23:10.895 [2024-12-06 21:45:31.261784] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:23:10.895 [2024-12-06 21:45:31.261798] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3
00:23:10.895 [2024-12-06 21:45:31.261812] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now
00:23:10.895 [2024-12-06 21:45:31.261820] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4
00:23:10.895 [2024-12-06 21:45:31.261834] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now
00:23:10.895 21:45:31 -- bdev/bdev_raid.sh@254 -- # (( i = 1 ))
00:23:10.895 21:45:31 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs ))
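
Note the order of operations here: bdev_raid_create is issued while only BaseBdev1 exists, so the create succeeds but the array cannot assemble yet and parks in the "configuring" state until the remaining members appear. A hedged sketch of the same sequence, with names and flags taken from the trace above (-z 64 is the strip size in KiB, -s requests an on-disk superblock):

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
sock=/var/tmp/spdk-raid.sock
"$rpc" -s "$sock" bdev_raid_create -z 64 -s -r raid5f \
    -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid
# with three of the four base bdevs missing, the state stays "configuring":
"$rpc" -s "$sock" bdev_raid_get_bdevs all \
    | jq -r '.[] | select(.name == "Existed_Raid") | .state'
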
00:23:11.155 "is_configured": false, 00:23:11.155 "data_offset": 0, 00:23:11.155 "data_size": 0 00:23:11.155 } 00:23:11.155 ] 00:23:11.155 }' 00:23:11.155 21:45:31 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:23:11.155 21:45:31 -- common/autotest_common.sh@10 -- # set +x 00:23:11.414 21:45:31 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:23:11.672 [2024-12-06 21:45:32.008487] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:23:11.672 BaseBdev2 00:23:11.672 21:45:32 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:23:11.672 21:45:32 -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:23:11.672 21:45:32 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:23:11.672 21:45:32 -- common/autotest_common.sh@899 -- # local i 00:23:11.672 21:45:32 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:23:11.672 21:45:32 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:23:11.672 21:45:32 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:23:11.932 21:45:32 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:23:11.932 [ 00:23:11.932 { 00:23:11.932 "name": "BaseBdev2", 00:23:11.932 "aliases": [ 00:23:11.932 "1c81c634-e102-4844-9969-e57cf0ce6e37" 00:23:11.932 ], 00:23:11.932 "product_name": "Malloc disk", 00:23:11.932 "block_size": 512, 00:23:11.932 "num_blocks": 65536, 00:23:11.932 "uuid": "1c81c634-e102-4844-9969-e57cf0ce6e37", 00:23:11.932 "assigned_rate_limits": { 00:23:11.932 "rw_ios_per_sec": 0, 00:23:11.932 "rw_mbytes_per_sec": 0, 00:23:11.932 "r_mbytes_per_sec": 0, 00:23:11.932 "w_mbytes_per_sec": 0 00:23:11.932 }, 00:23:11.932 "claimed": true, 00:23:11.932 "claim_type": "exclusive_write", 00:23:11.932 "zoned": false, 00:23:11.932 "supported_io_types": { 00:23:11.932 "read": true, 00:23:11.932 "write": true, 00:23:11.932 "unmap": true, 00:23:11.932 "write_zeroes": true, 00:23:11.932 "flush": true, 00:23:11.932 "reset": true, 00:23:11.932 "compare": false, 00:23:11.932 "compare_and_write": false, 00:23:11.932 "abort": true, 00:23:11.932 "nvme_admin": false, 00:23:11.932 "nvme_io": false 00:23:11.932 }, 00:23:11.932 "memory_domains": [ 00:23:11.932 { 00:23:11.932 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:11.932 "dma_device_type": 2 00:23:11.932 } 00:23:11.932 ], 00:23:11.932 "driver_specific": {} 00:23:11.932 } 00:23:11.932 ] 00:23:11.932 21:45:32 -- common/autotest_common.sh@905 -- # return 0 00:23:11.932 21:45:32 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:23:11.932 21:45:32 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:23:11.932 21:45:32 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:23:11.932 21:45:32 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:23:11.932 21:45:32 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:23:11.932 21:45:32 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:23:11.932 21:45:32 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:23:11.932 21:45:32 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:23:11.932 21:45:32 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:23:11.932 21:45:32 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:23:11.932 21:45:32 -- bdev/bdev_raid.sh@124 -- # local 
00:23:11.932 21:45:32 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:23:11.932 21:45:32 -- bdev/bdev_raid.sh@125 -- # local tmp
00:23:11.932 21:45:32 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:23:11.932 21:45:32 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:23:12.191 21:45:32 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:23:12.191 "name": "Existed_Raid",
00:23:12.191 "uuid": "7d848472-e3e8-491f-9865-a1da032eff65",
00:23:12.191 "strip_size_kb": 64,
00:23:12.191 "state": "configuring",
00:23:12.191 "raid_level": "raid5f",
00:23:12.191 "superblock": true,
00:23:12.191 "num_base_bdevs": 4,
00:23:12.191 "num_base_bdevs_discovered": 2,
00:23:12.191 "num_base_bdevs_operational": 4,
00:23:12.191 "base_bdevs_list": [
00:23:12.191 {
00:23:12.191 "name": "BaseBdev1",
00:23:12.191 "uuid": "1be1463e-6f4e-41d2-94ee-385a7f40279d",
00:23:12.191 "is_configured": true,
00:23:12.191 "data_offset": 2048,
00:23:12.191 "data_size": 63488
00:23:12.191 },
00:23:12.191 {
00:23:12.191 "name": "BaseBdev2",
00:23:12.191 "uuid": "1c81c634-e102-4844-9969-e57cf0ce6e37",
00:23:12.191 "is_configured": true,
00:23:12.191 "data_offset": 2048,
00:23:12.191 "data_size": 63488
00:23:12.191 },
00:23:12.191 {
00:23:12.191 "name": "BaseBdev3",
00:23:12.191 "uuid": "00000000-0000-0000-0000-000000000000",
00:23:12.191 "is_configured": false,
00:23:12.191 "data_offset": 0,
00:23:12.191 "data_size": 0
00:23:12.191 },
00:23:12.192 {
00:23:12.192 "name": "BaseBdev4",
00:23:12.192 "uuid": "00000000-0000-0000-0000-000000000000",
00:23:12.192 "is_configured": false,
00:23:12.192 "data_offset": 0,
00:23:12.192 "data_size": 0
00:23:12.192 }
00:23:12.192 ]
00:23:12.192 }'
00:23:12.192 21:45:32 -- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:23:12.192 21:45:32 -- common/autotest_common.sh@10 -- # set +x
00:23:12.451 21:45:32 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3
00:23:12.710 [2024-12-06 21:45:33.073140] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed
00:23:12.710 BaseBdev3
00:23:12.710 21:45:33 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3
00:23:12.710 21:45:33 -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3
00:23:12.710 21:45:33 -- common/autotest_common.sh@898 -- # local bdev_timeout=
00:23:12.710 21:45:33 -- common/autotest_common.sh@899 -- # local i
00:23:12.710 21:45:33 -- common/autotest_common.sh@900 -- # [[ -z '' ]]
00:23:12.710 21:45:33 -- common/autotest_common.sh@900 -- # bdev_timeout=2000
00:23:12.970 21:45:33 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine
00:23:13.229 21:45:33 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000
00:23:13.229 [
00:23:13.229 {
00:23:13.229 "name": "BaseBdev3",
00:23:13.229 "aliases": [
00:23:13.229 "9e113271-c7b8-464e-8984-8c8820b7a5d6"
00:23:13.229 ],
00:23:13.229 "product_name": "Malloc disk",
00:23:13.229 "block_size": 512,
00:23:13.229 "num_blocks": 65536,
00:23:13.229 "uuid": "9e113271-c7b8-464e-8984-8c8820b7a5d6",
00:23:13.229 "assigned_rate_limits": {
00:23:13.229 "rw_ios_per_sec": 0,
00:23:13.229 "rw_mbytes_per_sec": 0,
00:23:13.229 "r_mbytes_per_sec": 0,
00:23:13.229 "w_mbytes_per_sec": 0
00:23:13.229 },
00:23:13.229 "claimed": true,
00:23:13.229 "claim_type": "exclusive_write",
00:23:13.229 "zoned": false, 00:23:13.229 "supported_io_types": { 00:23:13.229 "read": true, 00:23:13.229 "write": true, 00:23:13.229 "unmap": true, 00:23:13.229 "write_zeroes": true, 00:23:13.229 "flush": true, 00:23:13.229 "reset": true, 00:23:13.229 "compare": false, 00:23:13.229 "compare_and_write": false, 00:23:13.229 "abort": true, 00:23:13.229 "nvme_admin": false, 00:23:13.229 "nvme_io": false 00:23:13.229 }, 00:23:13.229 "memory_domains": [ 00:23:13.229 { 00:23:13.229 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:13.229 "dma_device_type": 2 00:23:13.229 } 00:23:13.229 ], 00:23:13.229 "driver_specific": {} 00:23:13.229 } 00:23:13.229 ] 00:23:13.229 21:45:33 -- common/autotest_common.sh@905 -- # return 0 00:23:13.229 21:45:33 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:23:13.229 21:45:33 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:23:13.229 21:45:33 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:23:13.229 21:45:33 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:23:13.229 21:45:33 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:23:13.229 21:45:33 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:23:13.229 21:45:33 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:23:13.229 21:45:33 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:23:13.229 21:45:33 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:23:13.229 21:45:33 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:23:13.229 21:45:33 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:23:13.229 21:45:33 -- bdev/bdev_raid.sh@125 -- # local tmp 00:23:13.229 21:45:33 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:13.229 21:45:33 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:13.229 21:45:33 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:23:13.229 "name": "Existed_Raid", 00:23:13.229 "uuid": "7d848472-e3e8-491f-9865-a1da032eff65", 00:23:13.229 "strip_size_kb": 64, 00:23:13.229 "state": "configuring", 00:23:13.229 "raid_level": "raid5f", 00:23:13.229 "superblock": true, 00:23:13.229 "num_base_bdevs": 4, 00:23:13.229 "num_base_bdevs_discovered": 3, 00:23:13.229 "num_base_bdevs_operational": 4, 00:23:13.229 "base_bdevs_list": [ 00:23:13.229 { 00:23:13.229 "name": "BaseBdev1", 00:23:13.229 "uuid": "1be1463e-6f4e-41d2-94ee-385a7f40279d", 00:23:13.229 "is_configured": true, 00:23:13.229 "data_offset": 2048, 00:23:13.229 "data_size": 63488 00:23:13.229 }, 00:23:13.229 { 00:23:13.229 "name": "BaseBdev2", 00:23:13.229 "uuid": "1c81c634-e102-4844-9969-e57cf0ce6e37", 00:23:13.229 "is_configured": true, 00:23:13.229 "data_offset": 2048, 00:23:13.229 "data_size": 63488 00:23:13.229 }, 00:23:13.229 { 00:23:13.229 "name": "BaseBdev3", 00:23:13.229 "uuid": "9e113271-c7b8-464e-8984-8c8820b7a5d6", 00:23:13.229 "is_configured": true, 00:23:13.229 "data_offset": 2048, 00:23:13.229 "data_size": 63488 00:23:13.229 }, 00:23:13.229 { 00:23:13.229 "name": "BaseBdev4", 00:23:13.229 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:13.229 "is_configured": false, 00:23:13.229 "data_offset": 0, 00:23:13.229 "data_size": 0 00:23:13.229 } 00:23:13.229 ] 00:23:13.229 }' 00:23:13.229 21:45:33 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:23:13.229 21:45:33 -- common/autotest_common.sh@10 -- # set +x 00:23:13.488 21:45:33 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
00:23:13.488 21:45:33 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4
00:23:13.748 [2024-12-06 21:45:34.208616] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed
00:23:13.748 [2024-12-06 21:45:34.208904] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x516000007580
00:23:13.748 [2024-12-06 21:45:34.208921] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512
00:23:13.748 [2024-12-06 21:45:34.209052] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000005860
00:23:13.748 BaseBdev4
00:23:13.748 [2024-12-06 21:45:34.215703] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x516000007580
00:23:13.748 [2024-12-06 21:45:34.215736] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x516000007580
00:23:13.748 [2024-12-06 21:45:34.215949] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:23:13.748 21:45:34 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev4
00:23:13.748 21:45:34 -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev4
00:23:13.748 21:45:34 -- common/autotest_common.sh@898 -- # local bdev_timeout=
00:23:13.748 21:45:34 -- common/autotest_common.sh@899 -- # local i
00:23:13.748 21:45:34 -- common/autotest_common.sh@900 -- # [[ -z '' ]]
00:23:13.748 21:45:34 -- common/autotest_common.sh@900 -- # bdev_timeout=2000
00:23:14.006 21:45:34 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine
00:23:14.266 21:45:34 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000
00:23:14.266 [
00:23:14.266 {
00:23:14.266 "name": "BaseBdev4",
00:23:14.266 "aliases": [
00:23:14.266 "ccd3aaa4-1f5d-4a98-9c6e-a505fcfdc803"
00:23:14.266 ],
00:23:14.266 "product_name": "Malloc disk",
00:23:14.266 "block_size": 512,
00:23:14.266 "num_blocks": 65536,
00:23:14.266 "uuid": "ccd3aaa4-1f5d-4a98-9c6e-a505fcfdc803",
00:23:14.266 "assigned_rate_limits": {
00:23:14.266 "rw_ios_per_sec": 0,
00:23:14.266 "rw_mbytes_per_sec": 0,
00:23:14.266 "r_mbytes_per_sec": 0,
00:23:14.266 "w_mbytes_per_sec": 0
00:23:14.266 },
00:23:14.266 "claimed": true,
00:23:14.266 "claim_type": "exclusive_write",
00:23:14.266 "zoned": false,
00:23:14.266 "supported_io_types": {
00:23:14.266 "read": true,
00:23:14.266 "write": true,
00:23:14.266 "unmap": true,
00:23:14.266 "write_zeroes": true,
00:23:14.266 "flush": true,
00:23:14.266 "reset": true,
00:23:14.266 "compare": false,
00:23:14.266 "compare_and_write": false,
00:23:14.266 "abort": true,
00:23:14.266 "nvme_admin": false,
00:23:14.266 "nvme_io": false
00:23:14.266 },
00:23:14.266 "memory_domains": [
00:23:14.266 {
00:23:14.266 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:23:14.266 "dma_device_type": 2
00:23:14.266 }
00:23:14.266 ],
00:23:14.266 "driver_specific": {}
00:23:14.266 }
00:23:14.266 ]
00:23:14.266 21:45:34 -- common/autotest_common.sh@905 -- # return 0
00:23:14.266 21:45:34 -- bdev/bdev_raid.sh@254 -- # (( i++ ))
00:23:14.266 21:45:34 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs ))
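
The "blockcnt 190464, blocklen 512" reported when the array comes online is consistent with the member geometry shown earlier: each 65536-block malloc bdev gives up 2048 blocks to the data_offset (the superblock area), leaving 63488 data blocks, and raid5f keeps one member's worth of parity. A quick check of that arithmetic:

# (blocks per member - superblock offset) * (members - parity) = exported size
echo $(( (65536 - 2048) * (4 - 1) ))   # prints 190464, matching the log
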
00:23:14.266 21:45:34 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4
00:23:14.266 21:45:34 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid
00:23:14.266 21:45:34 -- bdev/bdev_raid.sh@118 -- # local expected_state=online
00:23:14.266 21:45:34 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f
00:23:14.266 21:45:34 -- bdev/bdev_raid.sh@120 -- # local strip_size=64
00:23:14.266 21:45:34 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4
00:23:14.266 21:45:34 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:23:14.266 21:45:34 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:23:14.266 21:45:34 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:23:14.266 21:45:34 -- bdev/bdev_raid.sh@125 -- # local tmp
00:23:14.266 21:45:34 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:23:14.266 21:45:34 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:23:14.526 21:45:34 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:23:14.526 "name": "Existed_Raid",
00:23:14.526 "uuid": "7d848472-e3e8-491f-9865-a1da032eff65",
00:23:14.526 "strip_size_kb": 64,
00:23:14.526 "state": "online",
00:23:14.526 "raid_level": "raid5f",
00:23:14.526 "superblock": true,
00:23:14.526 "num_base_bdevs": 4,
00:23:14.526 "num_base_bdevs_discovered": 4,
00:23:14.526 "num_base_bdevs_operational": 4,
00:23:14.526 "base_bdevs_list": [
00:23:14.526 {
00:23:14.526 "name": "BaseBdev1",
00:23:14.526 "uuid": "1be1463e-6f4e-41d2-94ee-385a7f40279d",
00:23:14.526 "is_configured": true,
00:23:14.526 "data_offset": 2048,
00:23:14.526 "data_size": 63488
00:23:14.526 },
00:23:14.526 {
00:23:14.526 "name": "BaseBdev2",
00:23:14.526 "uuid": "1c81c634-e102-4844-9969-e57cf0ce6e37",
00:23:14.526 "is_configured": true,
00:23:14.526 "data_offset": 2048,
00:23:14.526 "data_size": 63488
00:23:14.526 },
00:23:14.526 {
00:23:14.526 "name": "BaseBdev3",
00:23:14.526 "uuid": "9e113271-c7b8-464e-8984-8c8820b7a5d6",
00:23:14.526 "is_configured": true,
00:23:14.526 "data_offset": 2048,
00:23:14.526 "data_size": 63488
00:23:14.526 },
00:23:14.526 {
00:23:14.526 "name": "BaseBdev4",
00:23:14.526 "uuid": "ccd3aaa4-1f5d-4a98-9c6e-a505fcfdc803",
00:23:14.526 "is_configured": true,
00:23:14.526 "data_offset": 2048,
00:23:14.526 "data_size": 63488
00:23:14.526 }
00:23:14.526 ]
00:23:14.526 }'
00:23:14.526 21:45:34 -- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:23:14.526 21:45:34 -- common/autotest_common.sh@10 -- # set +x
00:23:14.785 21:45:35 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1
00:23:15.043 [2024-12-06 21:45:35.378295] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1
00:23:15.043 21:45:35 -- bdev/bdev_raid.sh@263 -- # local expected_state
00:23:15.043 21:45:35 -- bdev/bdev_raid.sh@264 -- # has_redundancy raid5f
00:23:15.043 21:45:35 -- bdev/bdev_raid.sh@195 -- # case $1 in
00:23:15.043 21:45:35 -- bdev/bdev_raid.sh@196 -- # return 0
00:23:15.043 21:45:35 -- bdev/bdev_raid.sh@267 -- # expected_state=online
00:23:15.043 21:45:35 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3
00:23:15.043 21:45:35 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid
00:23:15.043 21:45:35 -- bdev/bdev_raid.sh@118 -- # local expected_state=online
00:23:15.043 21:45:35 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f
00:23:15.043 21:45:35 -- bdev/bdev_raid.sh@120 -- # local strip_size=64
00:23:15.043 21:45:35 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3
00:23:15.043 21:45:35 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:23:15.043 21:45:35 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:23:15.043 21:45:35 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:23:15.043 21:45:35 -- bdev/bdev_raid.sh@125 -- # local tmp
00:23:15.043 21:45:35 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:23:15.043 21:45:35 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:23:15.304 21:45:35 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:23:15.304 "name": "Existed_Raid",
00:23:15.304 "uuid": "7d848472-e3e8-491f-9865-a1da032eff65",
00:23:15.304 "strip_size_kb": 64,
00:23:15.304 "state": "online",
00:23:15.304 "raid_level": "raid5f",
00:23:15.304 "superblock": true,
00:23:15.304 "num_base_bdevs": 4,
00:23:15.304 "num_base_bdevs_discovered": 3,
00:23:15.304 "num_base_bdevs_operational": 3,
00:23:15.304 "base_bdevs_list": [
00:23:15.304 {
00:23:15.304 "name": null,
00:23:15.304 "uuid": "00000000-0000-0000-0000-000000000000",
00:23:15.304 "is_configured": false,
00:23:15.304 "data_offset": 2048,
00:23:15.304 "data_size": 63488
00:23:15.304 },
00:23:15.304 {
00:23:15.304 "name": "BaseBdev2",
00:23:15.304 "uuid": "1c81c634-e102-4844-9969-e57cf0ce6e37",
00:23:15.304 "is_configured": true,
00:23:15.304 "data_offset": 2048,
00:23:15.304 "data_size": 63488
00:23:15.304 },
00:23:15.304 {
00:23:15.304 "name": "BaseBdev3",
00:23:15.304 "uuid": "9e113271-c7b8-464e-8984-8c8820b7a5d6",
00:23:15.304 "is_configured": true,
00:23:15.304 "data_offset": 2048,
00:23:15.304 "data_size": 63488
00:23:15.304 },
00:23:15.304 {
00:23:15.304 "name": "BaseBdev4",
00:23:15.304 "uuid": "ccd3aaa4-1f5d-4a98-9c6e-a505fcfdc803",
00:23:15.304 "is_configured": true,
00:23:15.304 "data_offset": 2048,
00:23:15.304 "data_size": 63488
00:23:15.304 }
00:23:15.304 ]
00:23:15.304 }'
00:23:15.304 21:45:35 -- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:23:15.304 21:45:35 -- common/autotest_common.sh@10 -- # set +x
00:23:15.579 21:45:36 -- bdev/bdev_raid.sh@273 -- # (( i = 1 ))
00:23:15.579 21:45:36 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs ))
00:23:15.579 21:45:36 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]'
00:23:15.579 21:45:36 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:23:15.874 21:45:36 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid
00:23:15.874 21:45:36 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']'
00:23:15.874 21:45:36 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2
00:23:16.135 [2024-12-06 21:45:36.436178] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2
00:23:16.135 [2024-12-06 21:45:36.436246] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:23:16.135 [2024-12-06 21:45:36.436308] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:23:16.135 21:45:36 -- bdev/bdev_raid.sh@273 -- # (( i++ ))
00:23:16.135 21:45:36 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs ))
00:23:16.135 21:45:36 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]'
00:23:16.135 21:45:36 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:23:16.393 21:45:36 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid
00:23:16.393 21:45:36 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']'
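
has_redundancy returned 0 for raid5f, so the first deletion (BaseBdev1) was expected to leave the array online and degraded: num_base_bdevs_discovered dropped to 3 and the vacated slot's name went null, exactly as the JSON above shows. The second deletion (BaseBdev2) exceeds the single-parity budget, and bdev_raid.c:1734 logs the online-to-offline transition. A sketch of the first, still-redundant case (jq expression assumed, not the script's literal one):

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
sock=/var/tmp/spdk-raid.sock
"$rpc" -s "$sock" bdev_malloc_delete BaseBdev1
"$rpc" -s "$sock" bdev_raid_get_bdevs all | jq -r \
    '.[] | select(.name == "Existed_Raid") | "\(.state) \(.num_base_bdevs_discovered)"'
# expected output: "online 3"
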
00:23:16.393 21:45:36 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3
00:23:16.652 [2024-12-06 21:45:36.943287] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3
00:23:16.652 21:45:37 -- bdev/bdev_raid.sh@273 -- # (( i++ ))
00:23:16.652 21:45:37 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs ))
00:23:16.652 21:45:37 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]'
00:23:16.652 21:45:37 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:23:16.911 21:45:37 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid
00:23:16.911 21:45:37 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']'
00:23:16.911 21:45:37 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4
00:23:17.171 [2024-12-06 21:45:37.432965] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4
00:23:17.171 [2024-12-06 21:45:37.433026] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000007580 name Existed_Raid, state offline
00:23:17.171 21:45:37 -- bdev/bdev_raid.sh@273 -- # (( i++ ))
00:23:17.171 21:45:37 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs ))
00:23:17.171 21:45:37 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:23:17.171 21:45:37 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)'
00:23:17.431 21:45:37 -- bdev/bdev_raid.sh@281 -- # raid_bdev=
00:23:17.431 21:45:37 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']'
00:23:17.431 21:45:37 -- bdev/bdev_raid.sh@287 -- # killprocess 84763
00:23:17.431 21:45:37 -- common/autotest_common.sh@936 -- # '[' -z 84763 ']'
00:23:17.431 21:45:37 -- common/autotest_common.sh@940 -- # kill -0 84763
00:23:17.431 21:45:37 -- common/autotest_common.sh@941 -- # uname
00:23:17.431 21:45:37 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:23:17.431 21:45:37 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 84763
00:23:17.431 21:45:37 -- common/autotest_common.sh@942 -- # process_name=reactor_0
00:23:17.431 killing process with pid 84763
00:23:17.431 21:45:37 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:23:17.431 21:45:37 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 84763'
00:23:17.431 21:45:37 -- common/autotest_common.sh@955 -- # kill 84763
00:23:17.431 [2024-12-06 21:45:37.726075] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:23:17.431 21:45:37 -- common/autotest_common.sh@960 -- # wait 84763
00:23:17.431 [2024-12-06 21:45:37.726180] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:23:18.368 21:45:38 -- bdev/bdev_raid.sh@289 -- # return 0
00:23:18.368
00:23:18.368 real 0m11.814s
00:23:18.368 user 0m19.912s
00:23:18.368 sys 0m1.710s
00:23:18.368 21:45:38 -- common/autotest_common.sh@1115 -- # xtrace_disable
00:23:18.368 21:45:38 -- common/autotest_common.sh@10 -- # set +x
00:23:18.369 ************************************
00:23:18.369 END TEST raid5f_state_function_test_sb
00:23:18.369 ************************************
00:23:18.369 21:45:38 -- bdev/bdev_raid.sh@746 -- # run_test raid5f_superblock_test raid_superblock_test raid5f 4
00:23:18.369 21:45:38 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']'
00:23:18.369 21:45:38 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:23:18.369 21:45:38 -- common/autotest_common.sh@10 -- # set +x
00:23:18.369 ************************************
00:23:18.369 START TEST raid5f_superblock_test
00:23:18.369 ************************************
00:23:18.369 21:45:38 -- common/autotest_common.sh@1114 -- # raid_superblock_test raid5f 4
00:23:18.369 21:45:38 -- bdev/bdev_raid.sh@338 -- # local raid_level=raid5f
00:23:18.369 21:45:38 -- bdev/bdev_raid.sh@339 -- # local num_base_bdevs=4
00:23:18.369 21:45:38 -- bdev/bdev_raid.sh@340 -- # base_bdevs_malloc=()
00:23:18.369 21:45:38 -- bdev/bdev_raid.sh@340 -- # local base_bdevs_malloc
00:23:18.369 21:45:38 -- bdev/bdev_raid.sh@341 -- # base_bdevs_pt=()
00:23:18.369 21:45:38 -- bdev/bdev_raid.sh@341 -- # local base_bdevs_pt
00:23:18.369 21:45:38 -- bdev/bdev_raid.sh@342 -- # base_bdevs_pt_uuid=()
00:23:18.369 21:45:38 -- bdev/bdev_raid.sh@342 -- # local base_bdevs_pt_uuid
00:23:18.369 21:45:38 -- bdev/bdev_raid.sh@343 -- # local raid_bdev_name=raid_bdev1
00:23:18.369 21:45:38 -- bdev/bdev_raid.sh@344 -- # local strip_size
00:23:18.369 21:45:38 -- bdev/bdev_raid.sh@345 -- # local strip_size_create_arg
00:23:18.369 21:45:38 -- bdev/bdev_raid.sh@346 -- # local raid_bdev_uuid
00:23:18.369 21:45:38 -- bdev/bdev_raid.sh@347 -- # local raid_bdev
00:23:18.369 21:45:38 -- bdev/bdev_raid.sh@349 -- # '[' raid5f '!=' raid1 ']'
00:23:18.369 21:45:38 -- bdev/bdev_raid.sh@350 -- # strip_size=64
00:23:18.369 21:45:38 -- bdev/bdev_raid.sh@351 -- # strip_size_create_arg='-z 64'
00:23:18.369 21:45:38 -- bdev/bdev_raid.sh@357 -- # raid_pid=85160
00:23:18.369 21:45:38 -- bdev/bdev_raid.sh@358 -- # waitforlisten 85160 /var/tmp/spdk-raid.sock
00:23:18.369 21:45:38 -- bdev/bdev_raid.sh@356 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid
00:23:18.369 21:45:38 -- common/autotest_common.sh@829 -- # '[' -z 85160 ']'
00:23:18.369 21:45:38 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock
00:23:18.369 21:45:38 -- common/autotest_common.sh@834 -- # local max_retries=100
00:23:18.369 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...
00:23:18.369 21:45:38 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...'
00:23:18.369 21:45:38 -- common/autotest_common.sh@838 -- # xtrace_disable
00:23:18.369 21:45:38 -- common/autotest_common.sh@10 -- # set +x
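
For the superblock test the harness starts a fresh bdev service (pid 85160) and then blocks in waitforlisten until the private RPC socket answers. An approximate manual equivalent; the polling loop is an assumption (waitforlisten's real body lives in autotest_common.sh), and rpc_get_methods is used here only as a cheap RPC to probe with:

/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid &
raid_pid=$!
until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock \
      rpc_get_methods >/dev/null 2>&1; do
  sleep 0.1   # retry until the UNIX domain socket accepts RPCs
done
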
00:23:18.369 [2024-12-06 21:45:38.778827] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:23:18.369 [2024-12-06 21:45:38.778978] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85160 ]
00:23:18.628 [2024-12-06 21:45:38.949738] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:23:18.628 [2024-12-06 21:45:39.107015] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:23:18.887 [2024-12-06 21:45:39.254761] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:23:19.455 21:45:39 -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:23:19.455 21:45:39 -- common/autotest_common.sh@862 -- # return 0
00:23:19.455 21:45:39 -- bdev/bdev_raid.sh@361 -- # (( i = 1 ))
00:23:19.455 21:45:39 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs ))
00:23:19.455 21:45:39 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc1
00:23:19.455 21:45:39 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt1
00:23:19.455 21:45:39 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001
00:23:19.455 21:45:39 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc)
00:23:19.455 21:45:39 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt)
00:23:19.455 21:45:39 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid)
00:23:19.455 21:45:39 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1
00:23:19.714 malloc1
00:23:19.714 21:45:39 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
00:23:19.714 [2024-12-06 21:45:40.186020] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1
00:23:19.714 [2024-12-06 21:45:40.186108] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:23:19.714 [2024-12-06 21:45:40.186145] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000006980
00:23:19.714 [2024-12-06 21:45:40.186158] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed
00:23:19.714 [2024-12-06 21:45:40.188437] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:23:19.714 [2024-12-06 21:45:40.188516] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1
00:23:19.714 pt1
00:23:19.714 21:45:40 -- bdev/bdev_raid.sh@361 -- # (( i++ ))
00:23:19.714 21:45:40 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs ))
00:23:19.714 21:45:40 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc2
00:23:19.714 21:45:40 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt2
00:23:19.714 21:45:40 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002
00:23:19.714 21:45:40 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc)
00:23:19.714 21:45:40 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt)
00:23:19.714 21:45:40 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid)
00:23:19.714 21:45:40 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2
00:23:19.973 malloc2
00:23:19.973 21:45:40 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
00:23:20.232 [2024-12-06 21:45:40.596812] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2
00:23:20.232 [2024-12-06 21:45:40.596885] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:23:20.232 [2024-12-06 21:45:40.596914] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000007580
00:23:20.232 [2024-12-06 21:45:40.596927] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed
00:23:20.232 [2024-12-06 21:45:40.599100] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:23:20.232 [2024-12-06 21:45:40.599137] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2
00:23:20.232 pt2
00:23:20.232 21:45:40 -- bdev/bdev_raid.sh@361 -- # (( i++ ))
00:23:20.232 21:45:40 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs ))
00:23:20.232 21:45:40 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc3
00:23:20.232 21:45:40 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt3
00:23:20.232 21:45:40 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003
00:23:20.232 21:45:40 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc)
00:23:20.232 21:45:40 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt)
00:23:20.232 21:45:40 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid)
00:23:20.232 21:45:40 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3
00:23:20.491 malloc3
00:23:20.491 21:45:40 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003
00:23:20.491 [2024-12-06 21:45:40.969217] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3
00:23:20.491 [2024-12-06 21:45:40.969287] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:23:20.491 [2024-12-06 21:45:40.969315] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000008180
00:23:20.491 [2024-12-06 21:45:40.969327] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed
00:23:20.491 [2024-12-06 21:45:40.971446] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:23:20.491 [2024-12-06 21:45:40.971510] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3
00:23:20.491 pt3
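
Each member is built the same way: a malloc bdev wrapped by a passthru bdev that pins a fixed, predictable UUID (000...1 through 000...4), so the raid superblock written later refers to stable identities. The repeated pattern, condensed here for one member:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
sock=/var/tmp/spdk-raid.sock
"$rpc" -s "$sock" bdev_malloc_create 32 512 -b malloc1
"$rpc" -s "$sock" bdev_passthru_create -b malloc1 -p pt1 \
    -u 00000000-0000-0000-0000-000000000001
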
00:23:20.491 21:45:40 -- bdev/bdev_raid.sh@361 -- # (( i++ ))
00:23:20.491 21:45:40 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs ))
00:23:20.491 21:45:40 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc4
00:23:20.491 21:45:40 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt4
00:23:20.491 21:45:40 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004
00:23:20.491 21:45:40 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc)
00:23:20.491 21:45:40 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt)
00:23:20.491 21:45:40 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid)
00:23:20.491 21:45:40 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc4
00:23:20.750 malloc4
00:23:20.750 21:45:41 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004
00:23:21.009 [2024-12-06 21:45:41.397534] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4
00:23:21.009 [2024-12-06 21:45:41.397606] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:23:21.009 [2024-12-06 21:45:41.397638] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000008d80
00:23:21.009 [2024-12-06 21:45:41.397650] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed
00:23:21.009 [2024-12-06 21:45:41.399708] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:23:21.009 [2024-12-06 21:45:41.399748] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4
00:23:21.009 pt4
00:23:21.009 21:45:41 -- bdev/bdev_raid.sh@361 -- # (( i++ ))
00:23:21.009 21:45:41 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs ))
00:23:21.009 21:45:41 -- bdev/bdev_raid.sh@375 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'pt1 pt2 pt3 pt4' -n raid_bdev1 -s
00:23:21.268 [2024-12-06 21:45:41.585675] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed
00:23:21.268 [2024-12-06 21:45:41.587624] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:23:21.268 [2024-12-06 21:45:41.587761] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed
00:23:21.268 [2024-12-06 21:45:41.587891] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed
00:23:21.268 [2024-12-06 21:45:41.588132] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x516000009380
00:23:21.268 [2024-12-06 21:45:41.588164] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512
00:23:21.268 [2024-12-06 21:45:41.588344] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000005790
00:23:21.268 [2024-12-06 21:45:41.594572] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x516000009380
00:23:21.268 [2024-12-06 21:45:41.594606] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x516000009380
00:23:21.268 [2024-12-06 21:45:41.594851] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:23:21.268 21:45:41 -- bdev/bdev_raid.sh@376 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4
00:23:21.268 21:45:41 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1
00:23:21.268 21:45:41 -- bdev/bdev_raid.sh@118 -- # local expected_state=online
00:23:21.268 21:45:41 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f
00:23:21.268 21:45:41 -- bdev/bdev_raid.sh@120 -- # local strip_size=64
00:23:21.268 21:45:41 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4
00:23:21.268 21:45:41 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:23:21.268 21:45:41 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:23:21.268 21:45:41 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:23:21.268 21:45:41 -- bdev/bdev_raid.sh@125 -- # local tmp
00:23:21.268 21:45:41 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:23:21.268 21:45:41 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:23:21.526 21:45:41 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:23:21.526 "name": "raid_bdev1",
"85d3bb64-8543-4953-a224-a85d69b9a101", 00:23:21.526 "strip_size_kb": 64, 00:23:21.526 "state": "online", 00:23:21.526 "raid_level": "raid5f", 00:23:21.526 "superblock": true, 00:23:21.526 "num_base_bdevs": 4, 00:23:21.526 "num_base_bdevs_discovered": 4, 00:23:21.526 "num_base_bdevs_operational": 4, 00:23:21.526 "base_bdevs_list": [ 00:23:21.526 { 00:23:21.526 "name": "pt1", 00:23:21.526 "uuid": "ae203b5a-ef01-51ac-bb66-dee14f91ddd3", 00:23:21.526 "is_configured": true, 00:23:21.526 "data_offset": 2048, 00:23:21.526 "data_size": 63488 00:23:21.526 }, 00:23:21.526 { 00:23:21.526 "name": "pt2", 00:23:21.526 "uuid": "d37824c1-a718-563c-a775-e4db364a95f4", 00:23:21.526 "is_configured": true, 00:23:21.526 "data_offset": 2048, 00:23:21.526 "data_size": 63488 00:23:21.526 }, 00:23:21.526 { 00:23:21.526 "name": "pt3", 00:23:21.526 "uuid": "03bb057a-98be-5800-94ad-602fe30f51f4", 00:23:21.526 "is_configured": true, 00:23:21.526 "data_offset": 2048, 00:23:21.526 "data_size": 63488 00:23:21.527 }, 00:23:21.527 { 00:23:21.527 "name": "pt4", 00:23:21.527 "uuid": "7656e6f6-3ef0-5c34-a197-3bc27a3862c0", 00:23:21.527 "is_configured": true, 00:23:21.527 "data_offset": 2048, 00:23:21.527 "data_size": 63488 00:23:21.527 } 00:23:21.527 ] 00:23:21.527 }' 00:23:21.527 21:45:41 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:23:21.527 21:45:41 -- common/autotest_common.sh@10 -- # set +x 00:23:21.784 21:45:42 -- bdev/bdev_raid.sh@379 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:23:21.784 21:45:42 -- bdev/bdev_raid.sh@379 -- # jq -r '.[] | .uuid' 00:23:22.042 [2024-12-06 21:45:42.321042] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:23:22.042 21:45:42 -- bdev/bdev_raid.sh@379 -- # raid_bdev_uuid=85d3bb64-8543-4953-a224-a85d69b9a101 00:23:22.042 21:45:42 -- bdev/bdev_raid.sh@380 -- # '[' -z 85d3bb64-8543-4953-a224-a85d69b9a101 ']' 00:23:22.042 21:45:42 -- bdev/bdev_raid.sh@385 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:23:22.042 [2024-12-06 21:45:42.500898] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:23:22.042 [2024-12-06 21:45:42.500931] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:23:22.042 [2024-12-06 21:45:42.501007] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:23:22.042 [2024-12-06 21:45:42.501093] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:23:22.042 [2024-12-06 21:45:42.501107] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000009380 name raid_bdev1, state offline 00:23:22.042 21:45:42 -- bdev/bdev_raid.sh@386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:22.042 21:45:42 -- bdev/bdev_raid.sh@386 -- # jq -r '.[]' 00:23:22.300 21:45:42 -- bdev/bdev_raid.sh@386 -- # raid_bdev= 00:23:22.300 21:45:42 -- bdev/bdev_raid.sh@387 -- # '[' -n '' ']' 00:23:22.300 21:45:42 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:23:22.300 21:45:42 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:23:22.558 21:45:43 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:23:22.558 21:45:43 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
00:23:22.300 21:45:42 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}"
00:23:22.300 21:45:42 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1
00:23:22.558 21:45:43 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}"
00:23:22.558 21:45:43 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2
00:23:22.817 21:45:43 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}"
00:23:22.817 21:45:43 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3
00:23:23.079 21:45:43 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}"
00:23:23.079 21:45:43 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt4
00:23:23.079 21:45:43 -- bdev/bdev_raid.sh@395 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs
00:23:23.079 21:45:43 -- bdev/bdev_raid.sh@395 -- # jq -r '[.[] | select(.product_name == "passthru")] | any'
00:23:23.337 21:45:43 -- bdev/bdev_raid.sh@395 -- # '[' false == true ']'
00:23:23.337 21:45:43 -- bdev/bdev_raid.sh@401 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1
00:23:23.337 21:45:43 -- common/autotest_common.sh@650 -- # local es=0
00:23:23.337 21:45:43 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1
00:23:23.337 21:45:43 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:23:23.337 21:45:43 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:23:23.337 21:45:43 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:23:23.337 21:45:43 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:23:23.337 21:45:43 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:23:23.337 21:45:43 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:23:23.337 21:45:43 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:23:23.337 21:45:43 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]]
00:23:23.337 21:45:43 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1
00:23:23.596 [2024-12-06 21:45:43.965205] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed
00:23:23.596 [2024-12-06 21:45:43.967316] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed
00:23:23.596 [2024-12-06 21:45:43.967375] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed
00:23:23.596 [2024-12-06 21:45:43.967414] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed
00:23:23.596 [2024-12-06 21:45:43.967470] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc1
00:23:23.596 [2024-12-06 21:45:43.967555] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc2
00:23:23.596 [2024-12-06 21:45:43.967586] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc3
00:23:23.596 [2024-12-06 21:45:43.967624] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc4
00:23:23.596 [2024-12-06 21:45:43.967643] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:23:23.596 [2024-12-06 21:45:43.967654] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000009980 name raid_bdev1, state configuring
00:23:23.596 request:
00:23:23.596 {
00:23:23.596 "name": "raid_bdev1",
00:23:23.596 "raid_level": "raid5f",
00:23:23.596 "base_bdevs": [
00:23:23.596 "malloc1",
00:23:23.596 "malloc2",
00:23:23.596 "malloc3",
00:23:23.596 "malloc4"
00:23:23.596 ],
00:23:23.596 "superblock": false,
00:23:23.596 "strip_size_kb": 64,
00:23:23.596 "method": "bdev_raid_create",
00:23:23.596 "req_id": 1
00:23:23.596 }
00:23:23.596 Got JSON-RPC error response
00:23:23.596 response:
00:23:23.596 {
00:23:23.596 "code": -17,
00:23:23.596 "message": "Failed to create RAID bdev raid_bdev1: File exists"
00:23:23.596 }
00:23:23.596 21:45:43 -- common/autotest_common.sh@653 -- # es=1
00:23:23.596 21:45:43 -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:23:23.596 21:45:43 -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:23:23.596 21:45:43 -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:23:23.596 21:45:43 -- bdev/bdev_raid.sh@403 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:23:23.596 21:45:43 -- bdev/bdev_raid.sh@403 -- # jq -r '.[]'
00:23:23.854 21:45:44 -- bdev/bdev_raid.sh@403 -- # raid_bdev=
00:23:23.854 21:45:44 -- bdev/bdev_raid.sh@404 -- # '[' -n '' ']'
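
This block is a negative test: the NOT wrapper at bdev_raid.sh@401 expects a non-zero exit status. Every malloc bdev still carries the old array's superblock (the passthru bdevs wrote straight through to the malloc devices beneath them), so the create is rejected with -17 "File exists", the half-built raid_bdev1 is cleaned up, and the follow-up bdev_raid_get_bdevs confirms nothing was assembled. A sketch of the expectation:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
sock=/var/tmp/spdk-raid.sock
if "$rpc" -s "$sock" bdev_raid_create -z 64 -r raid5f \
     -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1; then
  echo 'unexpected success: stale superblocks should block the create' >&2
  exit 1
fi
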
00:23:24.113 21:45:44 -- bdev/bdev_raid.sh@409 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
00:23:24.113 [2024-12-06 21:45:44.393245] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1
00:23:24.113 [2024-12-06 21:45:44.393309] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:23:24.113 [2024-12-06 21:45:44.393339] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000009f80
00:23:24.113 [2024-12-06 21:45:44.393350] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed
00:23:24.113 [2024-12-06 21:45:44.395577] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:23:24.113 [2024-12-06 21:45:44.395615] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1
00:23:24.113 [2024-12-06 21:45:44.395716] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1
00:23:24.113 [2024-12-06 21:45:44.395771] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed
00:23:24.113 pt1
00:23:24.113 21:45:44 -- bdev/bdev_raid.sh@412 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 4
00:23:24.113 21:45:44 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1
00:23:24.113 21:45:44 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring
00:23:24.113 21:45:44 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f
00:23:24.113 21:45:44 -- bdev/bdev_raid.sh@120 -- # local strip_size=64
00:23:24.113 21:45:44 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4
00:23:24.113 21:45:44 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:23:24.113 21:45:44 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:23:24.113 21:45:44 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:23:24.113 21:45:44 -- bdev/bdev_raid.sh@125 -- # local tmp
00:23:24.113 21:45:44 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:23:24.113 21:45:44 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:23:24.371 21:45:44 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:23:24.371 "name": "raid_bdev1",
00:23:24.371 "uuid": "85d3bb64-8543-4953-a224-a85d69b9a101",
00:23:24.371 "strip_size_kb": 64,
00:23:24.371 "state": "configuring",
00:23:24.371 "raid_level": "raid5f",
00:23:24.371 "superblock": true,
00:23:24.371 "num_base_bdevs": 4,
00:23:24.371 "num_base_bdevs_discovered": 1,
00:23:24.371 "num_base_bdevs_operational": 4,
00:23:24.371 "base_bdevs_list": [
00:23:24.371 {
00:23:24.371 "name": "pt1",
00:23:24.371 "uuid": "ae203b5a-ef01-51ac-bb66-dee14f91ddd3",
00:23:24.371 "is_configured": true,
00:23:24.371 "data_offset": 2048,
00:23:24.371 "data_size": 63488
00:23:24.371 },
00:23:24.371 {
00:23:24.371 "name": null,
00:23:24.371 "uuid": "d37824c1-a718-563c-a775-e4db364a95f4",
00:23:24.371 "is_configured": false,
00:23:24.371 "data_offset": 2048,
00:23:24.371 "data_size": 63488
00:23:24.371 },
00:23:24.371 {
00:23:24.371 "name": null,
00:23:24.371 "uuid": "03bb057a-98be-5800-94ad-602fe30f51f4",
00:23:24.371 "is_configured": false,
00:23:24.371 "data_offset": 2048,
00:23:24.371 "data_size": 63488
00:23:24.371 },
00:23:24.371 {
00:23:24.371 "name": null,
00:23:24.371 "uuid": "7656e6f6-3ef0-5c34-a197-3bc27a3862c0",
00:23:24.371 "is_configured": false,
00:23:24.371 "data_offset": 2048,
00:23:24.371 "data_size": 63488
00:23:24.371 }
00:23:24.371 ]
00:23:24.371 }'
00:23:24.371 21:45:44 -- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:23:24.371 21:45:44 -- common/autotest_common.sh@10 -- # set +x
00:23:24.629 21:45:44 -- bdev/bdev_raid.sh@414 -- # '[' 4 -gt 2 ']'
00:23:24.629 21:45:44 -- bdev/bdev_raid.sh@416 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
00:23:24.886 [2024-12-06 21:45:45.133395] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2
00:23:24.886 [2024-12-06 21:45:45.133499] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:23:24.886 [2024-12-06 21:45:45.133535] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x51600000a880
00:23:24.886 [2024-12-06 21:45:45.133549] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed
00:23:24.886 [2024-12-06 21:45:45.134090] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:23:24.886 [2024-12-06 21:45:45.134120] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2
00:23:24.886 [2024-12-06 21:45:45.134211] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2
00:23:24.886 [2024-12-06 21:45:45.134236] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:23:24.886 pt2
00:23:25.144 21:45:45 -- bdev/bdev_raid.sh@417 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2
00:23:25.144 [2024-12-06 21:45:45.373470] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: pt2
00:23:25.144 21:45:45 -- bdev/bdev_raid.sh@418 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 4
00:23:25.144 21:45:45 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1
00:23:25.144 21:45:45 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring
00:23:25.144 21:45:45 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f
00:23:25.145 21:45:45 -- bdev/bdev_raid.sh@120 -- # local strip_size=64
00:23:25.145 21:45:45 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4
00:23:25.145 21:45:45 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:23:25.145 21:45:45 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:23:25.145 21:45:45 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:23:25.145 21:45:45 -- bdev/bdev_raid.sh@125 -- # local tmp
00:23:25.145 21:45:45 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:23:25.145 21:45:45 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:23:25.403 21:45:45 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:23:25.403 "name": "raid_bdev1",
00:23:25.403 "uuid": "85d3bb64-8543-4953-a224-a85d69b9a101",
00:23:25.403 "strip_size_kb": 64,
00:23:25.403 "state": "configuring",
00:23:25.403 "raid_level": "raid5f",
00:23:25.403 "superblock": true,
00:23:25.403 "num_base_bdevs": 4,
00:23:25.403 "num_base_bdevs_discovered": 1,
00:23:25.403 "num_base_bdevs_operational": 4,
00:23:25.403 "base_bdevs_list": [
00:23:25.403 {
00:23:25.403 "name": "pt1",
00:23:25.403 "uuid": "ae203b5a-ef01-51ac-bb66-dee14f91ddd3",
00:23:25.403 "is_configured": true,
00:23:25.403 "data_offset": 2048,
00:23:25.403 "data_size": 63488
00:23:25.403 },
00:23:25.403 {
00:23:25.403 "name": null,
00:23:25.403 "uuid": "d37824c1-a718-563c-a775-e4db364a95f4",
00:23:25.403 "is_configured": false,
00:23:25.403 "data_offset": 2048,
00:23:25.403 "data_size": 63488
00:23:25.403 },
00:23:25.403 {
00:23:25.403 "name": null,
00:23:25.403 "uuid": "03bb057a-98be-5800-94ad-602fe30f51f4",
00:23:25.403 "is_configured": false,
00:23:25.403 "data_offset": 2048,
00:23:25.403 "data_size": 63488
00:23:25.403 },
00:23:25.403 {
00:23:25.403 "name": null,
00:23:25.403 "uuid": "7656e6f6-3ef0-5c34-a197-3bc27a3862c0",
00:23:25.403 "is_configured": false,
00:23:25.403 "data_offset": 2048,
00:23:25.403 "data_size": 63488
00:23:25.403 }
00:23:25.403 ]
00:23:25.403 }'
00:23:25.403 21:45:45 -- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:23:25.403 21:45:45 -- common/autotest_common.sh@10 -- # set +x
00:23:25.662 21:45:45 -- bdev/bdev_raid.sh@422 -- # (( i = 1 ))
00:23:25.662 21:45:45 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs ))
00:23:25.662 21:45:45 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
00:23:25.662 [2024-12-06 21:45:46.049575] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2
00:23:25.662 [2024-12-06 21:45:46.049640] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:23:25.662 [2024-12-06 21:45:46.049665] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x51600000ab80
00:23:25.662 [2024-12-06 21:45:46.049679] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed
00:23:25.662 [2024-12-06 21:45:46.050086] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:23:25.662 [2024-12-06 21:45:46.050111] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2
00:23:25.662 [2024-12-06 21:45:46.050193] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2
00:23:25.662 [2024-12-06 21:45:46.050224] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:23:25.662 pt2
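
Re-creating each passthru bdev triggers bdev examine: raid_bdev_examine_load_sb_cb finds the stored superblock and re-claims the member, so the array reassembles by itself, one member per iteration, with no further bdev_raid_create call. A sketch of the final step and the check it enables (jq filter assumed):

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
sock=/var/tmp/spdk-raid.sock
"$rpc" -s "$sock" bdev_passthru_create -b malloc4 -p pt4 \
    -u 00000000-0000-0000-0000-000000000004
# once the fourth member is claimed, raid_bdev1 should report "online":
"$rpc" -s "$sock" bdev_raid_get_bdevs all \
    | jq -r '.[] | select(.name == "raid_bdev1") | .state'
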
bdev/bdev_raid.sh@422 -- # (( i++ )) 00:23:25.662 21:45:46 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:23:25.662 21:45:46 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:23:25.920 [2024-12-06 21:45:46.297642] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:23:25.920 [2024-12-06 21:45:46.297698] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:25.920 [2024-12-06 21:45:46.297721] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x51600000ae80 00:23:25.920 [2024-12-06 21:45:46.297735] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:25.920 [2024-12-06 21:45:46.298103] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:25.920 [2024-12-06 21:45:46.298129] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:23:25.920 [2024-12-06 21:45:46.298205] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3 00:23:25.920 [2024-12-06 21:45:46.298241] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:23:25.920 pt3 00:23:25.920 21:45:46 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:23:25.920 21:45:46 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:23:25.920 21:45:46 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:23:26.179 [2024-12-06 21:45:46.485703] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:23:26.179 [2024-12-06 21:45:46.485760] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:26.179 [2024-12-06 21:45:46.485793] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x51600000b180 00:23:26.179 [2024-12-06 21:45:46.485811] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:26.179 [2024-12-06 21:45:46.486204] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:26.179 [2024-12-06 21:45:46.486236] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:23:26.179 [2024-12-06 21:45:46.486315] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt4 00:23:26.179 [2024-12-06 21:45:46.486346] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:23:26.179 [2024-12-06 21:45:46.486537] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x51600000a580 00:23:26.179 [2024-12-06 21:45:46.486566] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:23:26.179 [2024-12-06 21:45:46.486646] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000005860 00:23:26.179 [2024-12-06 21:45:46.491859] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x51600000a580 00:23:26.179 [2024-12-06 21:45:46.491881] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x51600000a580 00:23:26.179 [2024-12-06 21:45:46.492062] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:26.179 pt4 00:23:26.179 21:45:46 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:23:26.179 21:45:46 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs 
)) 00:23:26.179 21:45:46 -- bdev/bdev_raid.sh@427 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:23:26.179 21:45:46 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:23:26.179 21:45:46 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:23:26.179 21:45:46 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:23:26.179 21:45:46 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:23:26.179 21:45:46 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:23:26.179 21:45:46 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:23:26.179 21:45:46 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:23:26.179 21:45:46 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:23:26.179 21:45:46 -- bdev/bdev_raid.sh@125 -- # local tmp 00:23:26.179 21:45:46 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:26.179 21:45:46 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:26.439 21:45:46 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:23:26.439 "name": "raid_bdev1", 00:23:26.439 "uuid": "85d3bb64-8543-4953-a224-a85d69b9a101", 00:23:26.439 "strip_size_kb": 64, 00:23:26.439 "state": "online", 00:23:26.439 "raid_level": "raid5f", 00:23:26.439 "superblock": true, 00:23:26.439 "num_base_bdevs": 4, 00:23:26.439 "num_base_bdevs_discovered": 4, 00:23:26.439 "num_base_bdevs_operational": 4, 00:23:26.439 "base_bdevs_list": [ 00:23:26.439 { 00:23:26.439 "name": "pt1", 00:23:26.439 "uuid": "ae203b5a-ef01-51ac-bb66-dee14f91ddd3", 00:23:26.439 "is_configured": true, 00:23:26.439 "data_offset": 2048, 00:23:26.439 "data_size": 63488 00:23:26.439 }, 00:23:26.439 { 00:23:26.439 "name": "pt2", 00:23:26.439 "uuid": "d37824c1-a718-563c-a775-e4db364a95f4", 00:23:26.439 "is_configured": true, 00:23:26.439 "data_offset": 2048, 00:23:26.439 "data_size": 63488 00:23:26.439 }, 00:23:26.439 { 00:23:26.439 "name": "pt3", 00:23:26.439 "uuid": "03bb057a-98be-5800-94ad-602fe30f51f4", 00:23:26.439 "is_configured": true, 00:23:26.439 "data_offset": 2048, 00:23:26.440 "data_size": 63488 00:23:26.440 }, 00:23:26.440 { 00:23:26.440 "name": "pt4", 00:23:26.440 "uuid": "7656e6f6-3ef0-5c34-a197-3bc27a3862c0", 00:23:26.440 "is_configured": true, 00:23:26.440 "data_offset": 2048, 00:23:26.440 "data_size": 63488 00:23:26.440 } 00:23:26.440 ] 00:23:26.440 }' 00:23:26.440 21:45:46 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:23:26.440 21:45:46 -- common/autotest_common.sh@10 -- # set +x 00:23:26.700 21:45:46 -- bdev/bdev_raid.sh@430 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:23:26.700 21:45:46 -- bdev/bdev_raid.sh@430 -- # jq -r '.[] | .uuid' 00:23:26.700 [2024-12-06 21:45:47.122318] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:23:26.700 21:45:47 -- bdev/bdev_raid.sh@430 -- # '[' 85d3bb64-8543-4953-a224-a85d69b9a101 '!=' 85d3bb64-8543-4953-a224-a85d69b9a101 ']' 00:23:26.700 21:45:47 -- bdev/bdev_raid.sh@434 -- # has_redundancy raid5f 00:23:26.700 21:45:47 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:23:26.700 21:45:47 -- bdev/bdev_raid.sh@196 -- # return 0 00:23:26.700 21:45:47 -- bdev/bdev_raid.sh@436 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:23:26.959 [2024-12-06 21:45:47.298273] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:23:26.959 21:45:47 -- bdev/bdev_raid.sh@439 -- # 
verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:23:26.959 21:45:47 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:23:26.959 21:45:47 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:23:26.959 21:45:47 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:23:26.959 21:45:47 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:23:26.959 21:45:47 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:23:26.959 21:45:47 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:23:26.959 21:45:47 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:23:26.959 21:45:47 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:23:26.959 21:45:47 -- bdev/bdev_raid.sh@125 -- # local tmp 00:23:26.959 21:45:47 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:26.959 21:45:47 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:27.218 21:45:47 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:23:27.218 "name": "raid_bdev1", 00:23:27.218 "uuid": "85d3bb64-8543-4953-a224-a85d69b9a101", 00:23:27.218 "strip_size_kb": 64, 00:23:27.218 "state": "online", 00:23:27.218 "raid_level": "raid5f", 00:23:27.218 "superblock": true, 00:23:27.218 "num_base_bdevs": 4, 00:23:27.218 "num_base_bdevs_discovered": 3, 00:23:27.218 "num_base_bdevs_operational": 3, 00:23:27.218 "base_bdevs_list": [ 00:23:27.218 { 00:23:27.218 "name": null, 00:23:27.218 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:27.218 "is_configured": false, 00:23:27.218 "data_offset": 2048, 00:23:27.218 "data_size": 63488 00:23:27.218 }, 00:23:27.218 { 00:23:27.218 "name": "pt2", 00:23:27.218 "uuid": "d37824c1-a718-563c-a775-e4db364a95f4", 00:23:27.218 "is_configured": true, 00:23:27.218 "data_offset": 2048, 00:23:27.218 "data_size": 63488 00:23:27.218 }, 00:23:27.218 { 00:23:27.218 "name": "pt3", 00:23:27.218 "uuid": "03bb057a-98be-5800-94ad-602fe30f51f4", 00:23:27.218 "is_configured": true, 00:23:27.218 "data_offset": 2048, 00:23:27.218 "data_size": 63488 00:23:27.218 }, 00:23:27.219 { 00:23:27.219 "name": "pt4", 00:23:27.219 "uuid": "7656e6f6-3ef0-5c34-a197-3bc27a3862c0", 00:23:27.219 "is_configured": true, 00:23:27.219 "data_offset": 2048, 00:23:27.219 "data_size": 63488 00:23:27.219 } 00:23:27.219 ] 00:23:27.219 }' 00:23:27.219 21:45:47 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:23:27.219 21:45:47 -- common/autotest_common.sh@10 -- # set +x 00:23:27.477 21:45:47 -- bdev/bdev_raid.sh@442 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:23:27.736 [2024-12-06 21:45:47.986398] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:23:27.736 [2024-12-06 21:45:47.986432] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:23:27.736 [2024-12-06 21:45:47.986519] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:23:27.736 [2024-12-06 21:45:47.986614] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:23:27.736 [2024-12-06 21:45:47.986628] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x51600000a580 name raid_bdev1, state offline 00:23:27.736 21:45:48 -- bdev/bdev_raid.sh@443 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:27.736 21:45:48 -- bdev/bdev_raid.sh@443 -- # jq -r '.[]' 00:23:27.995 
21:45:48 -- bdev/bdev_raid.sh@443 -- # raid_bdev= 00:23:27.995 21:45:48 -- bdev/bdev_raid.sh@444 -- # '[' -n '' ']' 00:23:27.995 21:45:48 -- bdev/bdev_raid.sh@449 -- # (( i = 1 )) 00:23:27.995 21:45:48 -- bdev/bdev_raid.sh@449 -- # (( i < num_base_bdevs )) 00:23:27.995 21:45:48 -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:23:27.995 21:45:48 -- bdev/bdev_raid.sh@449 -- # (( i++ )) 00:23:27.995 21:45:48 -- bdev/bdev_raid.sh@449 -- # (( i < num_base_bdevs )) 00:23:27.995 21:45:48 -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:23:28.255 21:45:48 -- bdev/bdev_raid.sh@449 -- # (( i++ )) 00:23:28.255 21:45:48 -- bdev/bdev_raid.sh@449 -- # (( i < num_base_bdevs )) 00:23:28.255 21:45:48 -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt4 00:23:28.514 21:45:48 -- bdev/bdev_raid.sh@449 -- # (( i++ )) 00:23:28.514 21:45:48 -- bdev/bdev_raid.sh@449 -- # (( i < num_base_bdevs )) 00:23:28.514 21:45:48 -- bdev/bdev_raid.sh@454 -- # (( i = 1 )) 00:23:28.514 21:45:48 -- bdev/bdev_raid.sh@454 -- # (( i < num_base_bdevs - 1 )) 00:23:28.514 21:45:48 -- bdev/bdev_raid.sh@455 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:23:28.773 [2024-12-06 21:45:49.090576] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:23:28.773 [2024-12-06 21:45:49.090655] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:28.773 [2024-12-06 21:45:49.090685] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x51600000b480 00:23:28.773 [2024-12-06 21:45:49.090696] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:28.773 [2024-12-06 21:45:49.093032] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:28.773 [2024-12-06 21:45:49.093189] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:23:28.773 [2024-12-06 21:45:49.093427] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:23:28.773 [2024-12-06 21:45:49.093501] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:23:28.773 pt2 00:23:28.773 21:45:49 -- bdev/bdev_raid.sh@458 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:23:28.773 21:45:49 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:23:28.773 21:45:49 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:23:28.773 21:45:49 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:23:28.773 21:45:49 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:23:28.773 21:45:49 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:23:28.773 21:45:49 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:23:28.773 21:45:49 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:23:28.773 21:45:49 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:23:28.773 21:45:49 -- bdev/bdev_raid.sh@125 -- # local tmp 00:23:28.773 21:45:49 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:28.773 21:45:49 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:29.033 21:45:49 -- bdev/bdev_raid.sh@127 
-- # raid_bdev_info='{ 00:23:29.033 "name": "raid_bdev1", 00:23:29.033 "uuid": "85d3bb64-8543-4953-a224-a85d69b9a101", 00:23:29.033 "strip_size_kb": 64, 00:23:29.033 "state": "configuring", 00:23:29.033 "raid_level": "raid5f", 00:23:29.033 "superblock": true, 00:23:29.033 "num_base_bdevs": 4, 00:23:29.033 "num_base_bdevs_discovered": 1, 00:23:29.033 "num_base_bdevs_operational": 3, 00:23:29.033 "base_bdevs_list": [ 00:23:29.033 { 00:23:29.033 "name": null, 00:23:29.033 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:29.033 "is_configured": false, 00:23:29.033 "data_offset": 2048, 00:23:29.033 "data_size": 63488 00:23:29.033 }, 00:23:29.033 { 00:23:29.033 "name": "pt2", 00:23:29.033 "uuid": "d37824c1-a718-563c-a775-e4db364a95f4", 00:23:29.033 "is_configured": true, 00:23:29.033 "data_offset": 2048, 00:23:29.033 "data_size": 63488 00:23:29.033 }, 00:23:29.033 { 00:23:29.033 "name": null, 00:23:29.033 "uuid": "03bb057a-98be-5800-94ad-602fe30f51f4", 00:23:29.033 "is_configured": false, 00:23:29.033 "data_offset": 2048, 00:23:29.033 "data_size": 63488 00:23:29.033 }, 00:23:29.033 { 00:23:29.033 "name": null, 00:23:29.033 "uuid": "7656e6f6-3ef0-5c34-a197-3bc27a3862c0", 00:23:29.033 "is_configured": false, 00:23:29.033 "data_offset": 2048, 00:23:29.033 "data_size": 63488 00:23:29.033 } 00:23:29.033 ] 00:23:29.033 }' 00:23:29.033 21:45:49 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:23:29.033 21:45:49 -- common/autotest_common.sh@10 -- # set +x 00:23:29.292 21:45:49 -- bdev/bdev_raid.sh@454 -- # (( i++ )) 00:23:29.292 21:45:49 -- bdev/bdev_raid.sh@454 -- # (( i < num_base_bdevs - 1 )) 00:23:29.292 21:45:49 -- bdev/bdev_raid.sh@455 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:23:29.551 [2024-12-06 21:45:49.894776] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:23:29.551 [2024-12-06 21:45:49.894837] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:29.551 [2024-12-06 21:45:49.894866] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x51600000bd80 00:23:29.551 [2024-12-06 21:45:49.894878] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:29.551 [2024-12-06 21:45:49.895353] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:29.551 [2024-12-06 21:45:49.895406] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:23:29.551 [2024-12-06 21:45:49.895533] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3 00:23:29.551 [2024-12-06 21:45:49.895567] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:23:29.551 pt3 00:23:29.551 21:45:49 -- bdev/bdev_raid.sh@458 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:23:29.551 21:45:49 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:23:29.551 21:45:49 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:23:29.551 21:45:49 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:23:29.551 21:45:49 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:23:29.551 21:45:49 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:23:29.551 21:45:49 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:23:29.551 21:45:49 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:23:29.551 21:45:49 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 
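A note on the verify_raid_bdev_state calls that recur through this trace: the helper captures the JSON for raid_bdev1 by piping bdev_raid_get_bdevs all through jq, then compares the expected fields; the comparisons themselves never appear in the log because the function calls xtrace_disable at @129 (which runs set +x) right after the capture. A minimal sketch of the equivalent check, reconstructed from the trace; the helper name and the chained-test structure are illustrative, not the suite's exact code:

check_raid_state() {
    # args mirror verify_raid_bdev_state: name, state, level, strip size (KiB), operational count
    local name=$1 expected_state=$2 level=$3 strip_kb=$4 operational=$5
    local info
    info=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock \
        bdev_raid_get_bdevs all | jq -r ".[] | select(.name == \"$name\")")
    [[ $(jq -r .state <<<"$info") == "$expected_state" ]] &&
        [[ $(jq -r .raid_level <<<"$info") == "$level" ]] &&
        [[ $(jq -r .strip_size_kb <<<"$info") == "$strip_kb" ]] &&
        [[ $(jq -r .num_base_bdevs_operational <<<"$info") == "$operational" ]]
}
check_raid_state raid_bdev1 configuring raid5f 64 3   # the @458 call in flight here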
00:23:29.551 21:45:49 -- bdev/bdev_raid.sh@125 -- # local tmp 00:23:29.551 21:45:49 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:29.551 21:45:49 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:29.810 21:45:50 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:23:29.810 "name": "raid_bdev1", 00:23:29.810 "uuid": "85d3bb64-8543-4953-a224-a85d69b9a101", 00:23:29.810 "strip_size_kb": 64, 00:23:29.810 "state": "configuring", 00:23:29.810 "raid_level": "raid5f", 00:23:29.810 "superblock": true, 00:23:29.810 "num_base_bdevs": 4, 00:23:29.810 "num_base_bdevs_discovered": 2, 00:23:29.810 "num_base_bdevs_operational": 3, 00:23:29.810 "base_bdevs_list": [ 00:23:29.810 { 00:23:29.810 "name": null, 00:23:29.810 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:29.810 "is_configured": false, 00:23:29.810 "data_offset": 2048, 00:23:29.810 "data_size": 63488 00:23:29.810 }, 00:23:29.810 { 00:23:29.810 "name": "pt2", 00:23:29.810 "uuid": "d37824c1-a718-563c-a775-e4db364a95f4", 00:23:29.810 "is_configured": true, 00:23:29.810 "data_offset": 2048, 00:23:29.810 "data_size": 63488 00:23:29.810 }, 00:23:29.810 { 00:23:29.810 "name": "pt3", 00:23:29.810 "uuid": "03bb057a-98be-5800-94ad-602fe30f51f4", 00:23:29.810 "is_configured": true, 00:23:29.810 "data_offset": 2048, 00:23:29.810 "data_size": 63488 00:23:29.810 }, 00:23:29.810 { 00:23:29.810 "name": null, 00:23:29.810 "uuid": "7656e6f6-3ef0-5c34-a197-3bc27a3862c0", 00:23:29.810 "is_configured": false, 00:23:29.810 "data_offset": 2048, 00:23:29.810 "data_size": 63488 00:23:29.810 } 00:23:29.810 ] 00:23:29.810 }' 00:23:29.810 21:45:50 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:23:29.810 21:45:50 -- common/autotest_common.sh@10 -- # set +x 00:23:30.070 21:45:50 -- bdev/bdev_raid.sh@454 -- # (( i++ )) 00:23:30.070 21:45:50 -- bdev/bdev_raid.sh@454 -- # (( i < num_base_bdevs - 1 )) 00:23:30.070 21:45:50 -- bdev/bdev_raid.sh@462 -- # i=3 00:23:30.070 21:45:50 -- bdev/bdev_raid.sh@463 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:23:30.329 [2024-12-06 21:45:50.639146] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:23:30.329 [2024-12-06 21:45:50.639382] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:30.329 [2024-12-06 21:45:50.639431] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x51600000c080 00:23:30.329 [2024-12-06 21:45:50.639474] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:30.329 [2024-12-06 21:45:50.640006] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:30.329 [2024-12-06 21:45:50.640035] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:23:30.329 [2024-12-06 21:45:50.640161] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt4 00:23:30.329 [2024-12-06 21:45:50.640235] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:23:30.329 [2024-12-06 21:45:50.640383] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x51600000ba80 00:23:30.329 [2024-12-06 21:45:50.640398] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:23:30.329 [2024-12-06 21:45:50.640527] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x50d000005930 00:23:30.329 [2024-12-06 21:45:50.645847] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x51600000ba80 00:23:30.329 [2024-12-06 21:45:50.645996] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x51600000ba80 00:23:30.329 [2024-12-06 21:45:50.646292] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:30.329 pt4 00:23:30.329 21:45:50 -- bdev/bdev_raid.sh@466 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:23:30.329 21:45:50 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:23:30.329 21:45:50 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:23:30.329 21:45:50 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:23:30.329 21:45:50 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:23:30.329 21:45:50 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:23:30.329 21:45:50 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:23:30.329 21:45:50 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:23:30.329 21:45:50 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:23:30.329 21:45:50 -- bdev/bdev_raid.sh@125 -- # local tmp 00:23:30.329 21:45:50 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:30.329 21:45:50 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:30.589 21:45:50 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:23:30.589 "name": "raid_bdev1", 00:23:30.589 "uuid": "85d3bb64-8543-4953-a224-a85d69b9a101", 00:23:30.589 "strip_size_kb": 64, 00:23:30.589 "state": "online", 00:23:30.589 "raid_level": "raid5f", 00:23:30.589 "superblock": true, 00:23:30.589 "num_base_bdevs": 4, 00:23:30.589 "num_base_bdevs_discovered": 3, 00:23:30.589 "num_base_bdevs_operational": 3, 00:23:30.589 "base_bdevs_list": [ 00:23:30.589 { 00:23:30.589 "name": null, 00:23:30.589 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:30.589 "is_configured": false, 00:23:30.589 "data_offset": 2048, 00:23:30.589 "data_size": 63488 00:23:30.589 }, 00:23:30.589 { 00:23:30.589 "name": "pt2", 00:23:30.589 "uuid": "d37824c1-a718-563c-a775-e4db364a95f4", 00:23:30.589 "is_configured": true, 00:23:30.589 "data_offset": 2048, 00:23:30.589 "data_size": 63488 00:23:30.589 }, 00:23:30.589 { 00:23:30.589 "name": "pt3", 00:23:30.589 "uuid": "03bb057a-98be-5800-94ad-602fe30f51f4", 00:23:30.589 "is_configured": true, 00:23:30.589 "data_offset": 2048, 00:23:30.589 "data_size": 63488 00:23:30.589 }, 00:23:30.589 { 00:23:30.589 "name": "pt4", 00:23:30.589 "uuid": "7656e6f6-3ef0-5c34-a197-3bc27a3862c0", 00:23:30.589 "is_configured": true, 00:23:30.589 "data_offset": 2048, 00:23:30.589 "data_size": 63488 00:23:30.589 } 00:23:30.589 ] 00:23:30.589 }' 00:23:30.589 21:45:50 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:23:30.589 21:45:50 -- common/autotest_common.sh@10 -- # set +x 00:23:30.848 21:45:51 -- bdev/bdev_raid.sh@468 -- # '[' 4 -gt 2 ']' 00:23:30.848 21:45:51 -- bdev/bdev_raid.sh@470 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:23:30.848 [2024-12-06 21:45:51.332147] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:23:30.848 [2024-12-06 21:45:51.332201] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:23:30.848 [2024-12-06 21:45:51.332280] bdev_raid.c: 
449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:23:30.848 [2024-12-06 21:45:51.332354] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:23:30.848 [2024-12-06 21:45:51.332374] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x51600000ba80 name raid_bdev1, state offline 00:23:31.108 21:45:51 -- bdev/bdev_raid.sh@471 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:31.108 21:45:51 -- bdev/bdev_raid.sh@471 -- # jq -r '.[]' 00:23:31.108 21:45:51 -- bdev/bdev_raid.sh@471 -- # raid_bdev= 00:23:31.108 21:45:51 -- bdev/bdev_raid.sh@472 -- # '[' -n '' ']' 00:23:31.108 21:45:51 -- bdev/bdev_raid.sh@478 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:23:31.368 [2024-12-06 21:45:51.708238] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:23:31.368 [2024-12-06 21:45:51.708327] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:31.368 [2024-12-06 21:45:51.708356] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x51600000c380 00:23:31.368 [2024-12-06 21:45:51.708371] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:31.368 [2024-12-06 21:45:51.710733] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:31.368 [2024-12-06 21:45:51.710809] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:23:31.368 [2024-12-06 21:45:51.710897] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1 00:23:31.368 [2024-12-06 21:45:51.710961] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:23:31.368 pt1 00:23:31.368 21:45:51 -- bdev/bdev_raid.sh@481 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 4 00:23:31.368 21:45:51 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:23:31.368 21:45:51 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:23:31.368 21:45:51 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:23:31.368 21:45:51 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:23:31.368 21:45:51 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:23:31.368 21:45:51 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:23:31.368 21:45:51 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:23:31.368 21:45:51 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:23:31.368 21:45:51 -- bdev/bdev_raid.sh@125 -- # local tmp 00:23:31.368 21:45:51 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:31.368 21:45:51 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:31.629 21:45:51 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:23:31.629 "name": "raid_bdev1", 00:23:31.629 "uuid": "85d3bb64-8543-4953-a224-a85d69b9a101", 00:23:31.629 "strip_size_kb": 64, 00:23:31.629 "state": "configuring", 00:23:31.629 "raid_level": "raid5f", 00:23:31.629 "superblock": true, 00:23:31.629 "num_base_bdevs": 4, 00:23:31.629 "num_base_bdevs_discovered": 1, 00:23:31.629 "num_base_bdevs_operational": 4, 00:23:31.629 "base_bdevs_list": [ 00:23:31.629 { 00:23:31.629 "name": "pt1", 00:23:31.629 "uuid": "ae203b5a-ef01-51ac-bb66-dee14f91ddd3", 00:23:31.629 "is_configured": true, 
00:23:31.629 "data_offset": 2048, 00:23:31.629 "data_size": 63488 00:23:31.629 }, 00:23:31.629 { 00:23:31.629 "name": null, 00:23:31.629 "uuid": "d37824c1-a718-563c-a775-e4db364a95f4", 00:23:31.629 "is_configured": false, 00:23:31.629 "data_offset": 2048, 00:23:31.629 "data_size": 63488 00:23:31.629 }, 00:23:31.629 { 00:23:31.629 "name": null, 00:23:31.629 "uuid": "03bb057a-98be-5800-94ad-602fe30f51f4", 00:23:31.629 "is_configured": false, 00:23:31.629 "data_offset": 2048, 00:23:31.629 "data_size": 63488 00:23:31.629 }, 00:23:31.629 { 00:23:31.629 "name": null, 00:23:31.629 "uuid": "7656e6f6-3ef0-5c34-a197-3bc27a3862c0", 00:23:31.629 "is_configured": false, 00:23:31.629 "data_offset": 2048, 00:23:31.629 "data_size": 63488 00:23:31.629 } 00:23:31.629 ] 00:23:31.629 }' 00:23:31.629 21:45:51 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:23:31.629 21:45:51 -- common/autotest_common.sh@10 -- # set +x 00:23:31.889 21:45:52 -- bdev/bdev_raid.sh@484 -- # (( i = 1 )) 00:23:31.889 21:45:52 -- bdev/bdev_raid.sh@484 -- # (( i < num_base_bdevs )) 00:23:31.889 21:45:52 -- bdev/bdev_raid.sh@485 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:23:32.148 21:45:52 -- bdev/bdev_raid.sh@484 -- # (( i++ )) 00:23:32.148 21:45:52 -- bdev/bdev_raid.sh@484 -- # (( i < num_base_bdevs )) 00:23:32.148 21:45:52 -- bdev/bdev_raid.sh@485 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:23:32.148 21:45:52 -- bdev/bdev_raid.sh@484 -- # (( i++ )) 00:23:32.148 21:45:52 -- bdev/bdev_raid.sh@484 -- # (( i < num_base_bdevs )) 00:23:32.148 21:45:52 -- bdev/bdev_raid.sh@485 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt4 00:23:32.407 21:45:52 -- bdev/bdev_raid.sh@484 -- # (( i++ )) 00:23:32.407 21:45:52 -- bdev/bdev_raid.sh@484 -- # (( i < num_base_bdevs )) 00:23:32.407 21:45:52 -- bdev/bdev_raid.sh@489 -- # i=3 00:23:32.407 21:45:52 -- bdev/bdev_raid.sh@490 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:23:32.667 [2024-12-06 21:45:53.004577] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:23:32.667 [2024-12-06 21:45:53.004641] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:32.667 [2024-12-06 21:45:53.004666] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x51600000cc80 00:23:32.667 [2024-12-06 21:45:53.004678] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:32.667 [2024-12-06 21:45:53.005071] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:32.667 [2024-12-06 21:45:53.005103] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:23:32.667 [2024-12-06 21:45:53.005202] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt4 00:23:32.667 [2024-12-06 21:45:53.005257] bdev_raid.c:3237:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt4 (4) greater than existing raid bdev raid_bdev1 (2) 00:23:32.667 [2024-12-06 21:45:53.005268] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:23:32.667 [2024-12-06 21:45:53.005293] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x51600000c980 name raid_bdev1, state configuring 00:23:32.667 [2024-12-06 21:45:53.005358] 
bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:23:32.667 pt4 00:23:32.667 21:45:53 -- bdev/bdev_raid.sh@494 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:23:32.667 21:45:53 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:23:32.667 21:45:53 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:23:32.667 21:45:53 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:23:32.667 21:45:53 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:23:32.667 21:45:53 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:23:32.667 21:45:53 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:23:32.667 21:45:53 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:23:32.667 21:45:53 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:23:32.667 21:45:53 -- bdev/bdev_raid.sh@125 -- # local tmp 00:23:32.667 21:45:53 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:32.667 21:45:53 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:32.926 21:45:53 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:23:32.926 "name": "raid_bdev1", 00:23:32.926 "uuid": "85d3bb64-8543-4953-a224-a85d69b9a101", 00:23:32.926 "strip_size_kb": 64, 00:23:32.926 "state": "configuring", 00:23:32.926 "raid_level": "raid5f", 00:23:32.926 "superblock": true, 00:23:32.926 "num_base_bdevs": 4, 00:23:32.926 "num_base_bdevs_discovered": 1, 00:23:32.926 "num_base_bdevs_operational": 3, 00:23:32.926 "base_bdevs_list": [ 00:23:32.926 { 00:23:32.926 "name": null, 00:23:32.926 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:32.926 "is_configured": false, 00:23:32.926 "data_offset": 2048, 00:23:32.926 "data_size": 63488 00:23:32.926 }, 00:23:32.926 { 00:23:32.926 "name": null, 00:23:32.926 "uuid": "d37824c1-a718-563c-a775-e4db364a95f4", 00:23:32.926 "is_configured": false, 00:23:32.926 "data_offset": 2048, 00:23:32.926 "data_size": 63488 00:23:32.926 }, 00:23:32.926 { 00:23:32.926 "name": null, 00:23:32.926 "uuid": "03bb057a-98be-5800-94ad-602fe30f51f4", 00:23:32.926 "is_configured": false, 00:23:32.926 "data_offset": 2048, 00:23:32.926 "data_size": 63488 00:23:32.926 }, 00:23:32.926 { 00:23:32.926 "name": "pt4", 00:23:32.926 "uuid": "7656e6f6-3ef0-5c34-a197-3bc27a3862c0", 00:23:32.926 "is_configured": true, 00:23:32.926 "data_offset": 2048, 00:23:32.926 "data_size": 63488 00:23:32.926 } 00:23:32.926 ] 00:23:32.926 }' 00:23:32.926 21:45:53 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:23:32.926 21:45:53 -- common/autotest_common.sh@10 -- # set +x 00:23:33.185 21:45:53 -- bdev/bdev_raid.sh@497 -- # (( i = 1 )) 00:23:33.185 21:45:53 -- bdev/bdev_raid.sh@497 -- # (( i < num_base_bdevs - 1 )) 00:23:33.185 21:45:53 -- bdev/bdev_raid.sh@498 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:23:33.445 [2024-12-06 21:45:53.700824] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:23:33.445 [2024-12-06 21:45:53.701112] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:33.445 [2024-12-06 21:45:53.701158] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x51600000d280 00:23:33.445 [2024-12-06 21:45:53.701172] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:33.445 [2024-12-06 21:45:53.701739] vbdev_passthru.c: 
704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:33.445 [2024-12-06 21:45:53.701772] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:23:33.445 [2024-12-06 21:45:53.701949] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:23:33.445 [2024-12-06 21:45:53.701983] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:23:33.445 pt2 00:23:33.445 21:45:53 -- bdev/bdev_raid.sh@497 -- # (( i++ )) 00:23:33.445 21:45:53 -- bdev/bdev_raid.sh@497 -- # (( i < num_base_bdevs - 1 )) 00:23:33.445 21:45:53 -- bdev/bdev_raid.sh@498 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:23:33.445 [2024-12-06 21:45:53.904880] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:23:33.445 [2024-12-06 21:45:53.904940] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:33.445 [2024-12-06 21:45:53.904978] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x51600000d580 00:23:33.445 [2024-12-06 21:45:53.904990] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:33.445 [2024-12-06 21:45:53.905438] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:33.445 [2024-12-06 21:45:53.905488] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:23:33.445 [2024-12-06 21:45:53.905604] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3 00:23:33.445 [2024-12-06 21:45:53.905631] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:23:33.445 [2024-12-06 21:45:53.905801] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x51600000cf80 00:23:33.445 [2024-12-06 21:45:53.905815] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:23:33.445 [2024-12-06 21:45:53.905938] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000005a00 00:23:33.445 [2024-12-06 21:45:53.911161] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x51600000cf80 00:23:33.445 pt3 00:23:33.445 [2024-12-06 21:45:53.911312] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x51600000cf80 00:23:33.445 [2024-12-06 21:45:53.911626] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:33.445 21:45:53 -- bdev/bdev_raid.sh@497 -- # (( i++ )) 00:23:33.445 21:45:53 -- bdev/bdev_raid.sh@497 -- # (( i < num_base_bdevs - 1 )) 00:23:33.445 21:45:53 -- bdev/bdev_raid.sh@502 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:23:33.445 21:45:53 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:23:33.445 21:45:53 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:23:33.445 21:45:53 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:23:33.445 21:45:53 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:23:33.445 21:45:53 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:23:33.445 21:45:53 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:23:33.445 21:45:53 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:23:33.445 21:45:53 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:23:33.445 21:45:53 -- bdev/bdev_raid.sh@125 -- # local tmp 00:23:33.445 21:45:53 -- bdev/bdev_raid.sh@127 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:33.445 21:45:53 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:33.704 21:45:54 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:23:33.704 "name": "raid_bdev1", 00:23:33.704 "uuid": "85d3bb64-8543-4953-a224-a85d69b9a101", 00:23:33.704 "strip_size_kb": 64, 00:23:33.704 "state": "online", 00:23:33.704 "raid_level": "raid5f", 00:23:33.704 "superblock": true, 00:23:33.704 "num_base_bdevs": 4, 00:23:33.704 "num_base_bdevs_discovered": 3, 00:23:33.704 "num_base_bdevs_operational": 3, 00:23:33.704 "base_bdevs_list": [ 00:23:33.704 { 00:23:33.704 "name": null, 00:23:33.704 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:33.704 "is_configured": false, 00:23:33.704 "data_offset": 2048, 00:23:33.704 "data_size": 63488 00:23:33.704 }, 00:23:33.704 { 00:23:33.704 "name": "pt2", 00:23:33.704 "uuid": "d37824c1-a718-563c-a775-e4db364a95f4", 00:23:33.704 "is_configured": true, 00:23:33.704 "data_offset": 2048, 00:23:33.704 "data_size": 63488 00:23:33.704 }, 00:23:33.704 { 00:23:33.704 "name": "pt3", 00:23:33.704 "uuid": "03bb057a-98be-5800-94ad-602fe30f51f4", 00:23:33.704 "is_configured": true, 00:23:33.704 "data_offset": 2048, 00:23:33.704 "data_size": 63488 00:23:33.704 }, 00:23:33.704 { 00:23:33.704 "name": "pt4", 00:23:33.704 "uuid": "7656e6f6-3ef0-5c34-a197-3bc27a3862c0", 00:23:33.704 "is_configured": true, 00:23:33.704 "data_offset": 2048, 00:23:33.704 "data_size": 63488 00:23:33.704 } 00:23:33.704 ] 00:23:33.704 }' 00:23:33.704 21:45:54 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:23:33.704 21:45:54 -- common/autotest_common.sh@10 -- # set +x 00:23:33.963 21:45:54 -- bdev/bdev_raid.sh@506 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:23:33.963 21:45:54 -- bdev/bdev_raid.sh@506 -- # jq -r '.[] | .uuid' 00:23:34.222 [2024-12-06 21:45:54.601704] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:23:34.222 21:45:54 -- bdev/bdev_raid.sh@506 -- # '[' 85d3bb64-8543-4953-a224-a85d69b9a101 '!=' 85d3bb64-8543-4953-a224-a85d69b9a101 ']' 00:23:34.223 21:45:54 -- bdev/bdev_raid.sh@511 -- # killprocess 85160 00:23:34.223 21:45:54 -- common/autotest_common.sh@936 -- # '[' -z 85160 ']' 00:23:34.223 21:45:54 -- common/autotest_common.sh@940 -- # kill -0 85160 00:23:34.223 21:45:54 -- common/autotest_common.sh@941 -- # uname 00:23:34.223 21:45:54 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:23:34.223 21:45:54 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 85160 00:23:34.223 killing process with pid 85160 00:23:34.223 21:45:54 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:23:34.223 21:45:54 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:23:34.223 21:45:54 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 85160' 00:23:34.223 21:45:54 -- common/autotest_common.sh@955 -- # kill 85160 00:23:34.223 [2024-12-06 21:45:54.651631] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:23:34.223 [2024-12-06 21:45:54.651703] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:23:34.223 21:45:54 -- common/autotest_common.sh@960 -- # wait 85160 00:23:34.223 [2024-12-06 21:45:54.651800] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:23:34.223 [2024-12-06 21:45:54.651847] bdev_raid.c: 
351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x51600000cf80 name raid_bdev1, state offline 00:23:34.482 [2024-12-06 21:45:54.910404] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:23:35.435 21:45:55 -- bdev/bdev_raid.sh@513 -- # return 0 00:23:35.435 00:23:35.435 real 0m17.117s 00:23:35.435 user 0m29.697s 00:23:35.435 sys 0m2.566s 00:23:35.435 21:45:55 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:23:35.435 ************************************ 00:23:35.435 END TEST raid5f_superblock_test 00:23:35.435 ************************************ 00:23:35.435 21:45:55 -- common/autotest_common.sh@10 -- # set +x 00:23:35.435 21:45:55 -- bdev/bdev_raid.sh@747 -- # '[' true = true ']' 00:23:35.435 21:45:55 -- bdev/bdev_raid.sh@748 -- # run_test raid5f_rebuild_test raid_rebuild_test raid5f 4 false false 00:23:35.435 21:45:55 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:23:35.435 21:45:55 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:23:35.435 21:45:55 -- common/autotest_common.sh@10 -- # set +x 00:23:35.435 ************************************ 00:23:35.435 START TEST raid5f_rebuild_test 00:23:35.435 ************************************ 00:23:35.435 21:45:55 -- common/autotest_common.sh@1114 -- # raid_rebuild_test raid5f 4 false false 00:23:35.435 21:45:55 -- bdev/bdev_raid.sh@517 -- # local raid_level=raid5f 00:23:35.435 21:45:55 -- bdev/bdev_raid.sh@518 -- # local num_base_bdevs=4 00:23:35.435 21:45:55 -- bdev/bdev_raid.sh@519 -- # local superblock=false 00:23:35.435 21:45:55 -- bdev/bdev_raid.sh@520 -- # local background_io=false 00:23:35.435 21:45:55 -- bdev/bdev_raid.sh@521 -- # (( i = 1 )) 00:23:35.435 21:45:55 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:23:35.435 21:45:55 -- bdev/bdev_raid.sh@523 -- # echo BaseBdev1 00:23:35.435 21:45:55 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:23:35.435 21:45:55 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:23:35.435 21:45:55 -- bdev/bdev_raid.sh@523 -- # echo BaseBdev2 00:23:35.435 21:45:55 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:23:35.435 21:45:55 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:23:35.435 21:45:55 -- bdev/bdev_raid.sh@523 -- # echo BaseBdev3 00:23:35.435 21:45:55 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:23:35.435 21:45:55 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:23:35.435 21:45:55 -- bdev/bdev_raid.sh@523 -- # echo BaseBdev4 00:23:35.435 21:45:55 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:23:35.435 21:45:55 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:23:35.435 21:45:55 -- bdev/bdev_raid.sh@521 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:23:35.435 21:45:55 -- bdev/bdev_raid.sh@521 -- # local base_bdevs 00:23:35.435 21:45:55 -- bdev/bdev_raid.sh@522 -- # local raid_bdev_name=raid_bdev1 00:23:35.435 21:45:55 -- bdev/bdev_raid.sh@523 -- # local strip_size 00:23:35.435 21:45:55 -- bdev/bdev_raid.sh@524 -- # local create_arg 00:23:35.435 21:45:55 -- bdev/bdev_raid.sh@525 -- # local raid_bdev_size 00:23:35.435 21:45:55 -- bdev/bdev_raid.sh@526 -- # local data_offset 00:23:35.435 21:45:55 -- bdev/bdev_raid.sh@528 -- # '[' raid5f '!=' raid1 ']' 00:23:35.435 21:45:55 -- bdev/bdev_raid.sh@529 -- # '[' false = true ']' 00:23:35.435 21:45:55 -- bdev/bdev_raid.sh@533 -- # strip_size=64 00:23:35.435 21:45:55 -- bdev/bdev_raid.sh@534 -- # create_arg+=' -z 64' 00:23:35.435 21:45:55 -- bdev/bdev_raid.sh@539 -- # '[' false = true ']' 00:23:35.435 21:45:55 -- bdev/bdev_raid.sh@544 -- # raid_pid=85753 
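Worth flagging the change of harness at this point: raid5f_superblock_test drove a bare SPDK app target, while raid_rebuild_test hosts raid_bdev1 inside the bdevperf example so it can push I/O at the array (randrw, 3 MiB requests, queue depth 2, 60 s) while a rebuild is in progress. A sketch of the launch-and-wait pattern, with the flags copied verbatim from the @543 trace line; backgrounding via & and $! is an assumption here, and -z is what keeps bdevperf idle until the raid bdev has been assembled over RPC:

/home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
    -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 \
    -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid &
raid_pid=$!   # 85753 in this run
waitforlisten "$raid_pid" /var/tmp/spdk-raid.sock   # poll until the RPC socket accepts connections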
00:23:35.435 21:45:55 -- bdev/bdev_raid.sh@545 -- # waitforlisten 85753 /var/tmp/spdk-raid.sock 00:23:35.435 21:45:55 -- bdev/bdev_raid.sh@543 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:23:35.435 21:45:55 -- common/autotest_common.sh@829 -- # '[' -z 85753 ']' 00:23:35.435 21:45:55 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:23:35.435 21:45:55 -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:35.435 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:23:35.435 21:45:55 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:23:35.435 21:45:55 -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:35.435 21:45:55 -- common/autotest_common.sh@10 -- # set +x 00:23:35.695 [2024-12-06 21:45:55.955375] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:23:35.695 I/O size of 3145728 is greater than zero copy threshold (65536). 00:23:35.695 Zero copy mechanism will not be used. 00:23:35.695 [2024-12-06 21:45:55.955615] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85753 ] 00:23:35.695 [2024-12-06 21:45:56.129337] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:35.954 [2024-12-06 21:45:56.349063] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:36.214 [2024-12-06 21:45:56.493295] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:23:36.473 21:45:56 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:36.473 21:45:56 -- common/autotest_common.sh@862 -- # return 0 00:23:36.473 21:45:56 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:23:36.473 21:45:56 -- bdev/bdev_raid.sh@549 -- # '[' false = true ']' 00:23:36.473 21:45:56 -- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:23:36.732 BaseBdev1 00:23:36.732 21:45:57 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:23:36.732 21:45:57 -- bdev/bdev_raid.sh@549 -- # '[' false = true ']' 00:23:36.732 21:45:57 -- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:23:36.991 BaseBdev2 00:23:36.991 21:45:57 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:23:36.991 21:45:57 -- bdev/bdev_raid.sh@549 -- # '[' false = true ']' 00:23:36.991 21:45:57 -- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:23:37.251 BaseBdev3 00:23:37.251 21:45:57 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:23:37.251 21:45:57 -- bdev/bdev_raid.sh@549 -- # '[' false = true ']' 00:23:37.251 21:45:57 -- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:23:37.251 BaseBdev4 00:23:37.251 21:45:57 -- bdev/bdev_raid.sh@558 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:23:37.510 spare_malloc 00:23:37.510 21:45:57 -- 
bdev/bdev_raid.sh@559 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:23:37.768 spare_delay 00:23:37.768 21:45:58 -- bdev/bdev_raid.sh@560 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:23:38.026 [2024-12-06 21:45:58.320803] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:23:38.026 [2024-12-06 21:45:58.320880] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:38.026 [2024-12-06 21:45:58.320907] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000008780 00:23:38.026 [2024-12-06 21:45:58.320923] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:38.026 [2024-12-06 21:45:58.323074] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:38.026 [2024-12-06 21:45:58.323116] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:23:38.026 spare 00:23:38.026 21:45:58 -- bdev/bdev_raid.sh@563 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 00:23:38.026 [2024-12-06 21:45:58.492882] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:23:38.026 [2024-12-06 21:45:58.494610] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:23:38.026 [2024-12-06 21:45:58.494681] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:23:38.026 [2024-12-06 21:45:58.494730] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:23:38.026 [2024-12-06 21:45:58.494808] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x516000008d80 00:23:38.026 [2024-12-06 21:45:58.494824] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:23:38.026 [2024-12-06 21:45:58.494968] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000005860 00:23:38.026 [2024-12-06 21:45:58.500481] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x516000008d80 00:23:38.026 [2024-12-06 21:45:58.500504] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x516000008d80 00:23:38.026 [2024-12-06 21:45:58.500754] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:38.026 21:45:58 -- bdev/bdev_raid.sh@564 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:23:38.026 21:45:58 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:23:38.026 21:45:58 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:23:38.026 21:45:58 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:23:38.026 21:45:58 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:23:38.026 21:45:58 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:23:38.026 21:45:58 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:23:38.026 21:45:58 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:23:38.026 21:45:58 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:23:38.026 21:45:58 -- bdev/bdev_raid.sh@125 -- # local tmp 00:23:38.027 21:45:58 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 
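The fixture assembled over the preceding records is worth collecting in one place: four 32 MiB malloc bdevs become the array members, and a fifth malloc is wrapped first in a delay bdev and then in a passthru named spare, which is what later replaces the removed member. A condensed sketch of the same RPC sequence (the $rpc shorthand is mine; the suite uses its own wrappers):

rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
for i in 1 2 3 4; do
    $rpc bdev_malloc_create 32 512 -b "BaseBdev$i"   # 32 MiB at 512 B blocks, 65536 blocks each
done
$rpc bdev_malloc_create 32 512 -b spare_malloc
$rpc bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000
$rpc bdev_passthru_create -b spare_delay -p spare
$rpc bdev_raid_create -z 64 -r raid5f \
    -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1

The delay bdev's -w/-n arguments are average and p99 write latency in microseconds, so every write to the spare costs on the order of 100 ms while reads (-r 0 -t 0) stay instant, presumably to keep the later rebuild onto the spare slow enough for the test to observe and verify it mid-flight.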
00:23:38.027 21:45:58 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:38.285 21:45:58 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:23:38.285 "name": "raid_bdev1", 00:23:38.285 "uuid": "2793a7c2-f2d7-4078-8ae0-58a0747cc5b0", 00:23:38.285 "strip_size_kb": 64, 00:23:38.285 "state": "online", 00:23:38.285 "raid_level": "raid5f", 00:23:38.285 "superblock": false, 00:23:38.285 "num_base_bdevs": 4, 00:23:38.285 "num_base_bdevs_discovered": 4, 00:23:38.285 "num_base_bdevs_operational": 4, 00:23:38.285 "base_bdevs_list": [ 00:23:38.285 { 00:23:38.285 "name": "BaseBdev1", 00:23:38.285 "uuid": "73cae116-6100-44f2-9952-77f9d78b133c", 00:23:38.285 "is_configured": true, 00:23:38.285 "data_offset": 0, 00:23:38.285 "data_size": 65536 00:23:38.285 }, 00:23:38.285 { 00:23:38.285 "name": "BaseBdev2", 00:23:38.285 "uuid": "2f1ac889-df24-4329-b1fc-6ea7dd82f33d", 00:23:38.285 "is_configured": true, 00:23:38.285 "data_offset": 0, 00:23:38.285 "data_size": 65536 00:23:38.285 }, 00:23:38.285 { 00:23:38.285 "name": "BaseBdev3", 00:23:38.285 "uuid": "9d980659-c781-4bf6-a168-4c7c1ec9a858", 00:23:38.285 "is_configured": true, 00:23:38.285 "data_offset": 0, 00:23:38.285 "data_size": 65536 00:23:38.285 }, 00:23:38.285 { 00:23:38.285 "name": "BaseBdev4", 00:23:38.285 "uuid": "3501c210-1863-45b3-925c-9e1496e4efb3", 00:23:38.285 "is_configured": true, 00:23:38.285 "data_offset": 0, 00:23:38.285 "data_size": 65536 00:23:38.285 } 00:23:38.285 ] 00:23:38.285 }' 00:23:38.285 21:45:58 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:23:38.285 21:45:58 -- common/autotest_common.sh@10 -- # set +x 00:23:38.543 21:45:59 -- bdev/bdev_raid.sh@567 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:23:38.543 21:45:59 -- bdev/bdev_raid.sh@567 -- # jq -r '.[].num_blocks' 00:23:38.801 [2024-12-06 21:45:59.214507] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:23:38.801 21:45:59 -- bdev/bdev_raid.sh@567 -- # raid_bdev_size=196608 00:23:38.801 21:45:59 -- bdev/bdev_raid.sh@570 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:38.801 21:45:59 -- bdev/bdev_raid.sh@570 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:23:39.059 21:45:59 -- bdev/bdev_raid.sh@570 -- # data_offset=0 00:23:39.059 21:45:59 -- bdev/bdev_raid.sh@572 -- # '[' false = true ']' 00:23:39.059 21:45:59 -- bdev/bdev_raid.sh@576 -- # local write_unit_size 00:23:39.059 21:45:59 -- bdev/bdev_raid.sh@579 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid_bdev1 /dev/nbd0 00:23:39.059 21:45:59 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:23:39.059 21:45:59 -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:23:39.059 21:45:59 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:23:39.059 21:45:59 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:23:39.059 21:45:59 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:23:39.059 21:45:59 -- bdev/nbd_common.sh@12 -- # local i 00:23:39.059 21:45:59 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:23:39.059 21:45:59 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:23:39.059 21:45:59 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:23:39.317 [2024-12-06 21:45:59.578475] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000005a00 00:23:39.317 /dev/nbd0 00:23:39.317 21:45:59 -- bdev/nbd_common.sh@17 -- # basename 
/dev/nbd0 00:23:39.317 21:45:59 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:23:39.317 21:45:59 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:23:39.317 21:45:59 -- common/autotest_common.sh@867 -- # local i 00:23:39.317 21:45:59 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:23:39.317 21:45:59 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:23:39.317 21:45:59 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:23:39.317 21:45:59 -- common/autotest_common.sh@871 -- # break 00:23:39.317 21:45:59 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:23:39.317 21:45:59 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:23:39.317 21:45:59 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:23:39.317 1+0 records in 00:23:39.317 1+0 records out 00:23:39.317 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000296809 s, 13.8 MB/s 00:23:39.317 21:45:59 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:39.317 21:45:59 -- common/autotest_common.sh@884 -- # size=4096 00:23:39.317 21:45:59 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:39.317 21:45:59 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:23:39.317 21:45:59 -- common/autotest_common.sh@887 -- # return 0 00:23:39.317 21:45:59 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:23:39.317 21:45:59 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:23:39.317 21:45:59 -- bdev/bdev_raid.sh@580 -- # '[' raid5f = raid5f ']' 00:23:39.317 21:45:59 -- bdev/bdev_raid.sh@581 -- # write_unit_size=384 00:23:39.318 21:45:59 -- bdev/bdev_raid.sh@582 -- # echo 192 00:23:39.318 21:45:59 -- bdev/bdev_raid.sh@586 -- # dd if=/dev/urandom of=/dev/nbd0 bs=196608 count=512 oflag=direct 00:23:39.884 512+0 records in 00:23:39.884 512+0 records out 00:23:39.884 100663296 bytes (101 MB, 96 MiB) copied, 0.482828 s, 208 MB/s 00:23:39.884 21:46:00 -- bdev/bdev_raid.sh@587 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:23:39.885 21:46:00 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:23:39.885 21:46:00 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:23:39.885 21:46:00 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:23:39.885 21:46:00 -- bdev/nbd_common.sh@51 -- # local i 00:23:39.885 21:46:00 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:23:39.885 21:46:00 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:23:39.885 [2024-12-06 21:46:00.350687] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:39.885 21:46:00 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:23:39.885 21:46:00 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:23:39.885 21:46:00 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:23:39.885 21:46:00 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:23:40.144 21:46:00 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:23:40.144 21:46:00 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:23:40.144 21:46:00 -- bdev/nbd_common.sh@41 -- # break 00:23:40.144 21:46:00 -- bdev/nbd_common.sh@45 -- # return 0 00:23:40.144 21:46:00 -- bdev/bdev_raid.sh@591 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:23:40.144 [2024-12-06 21:46:00.553746] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 
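The dd parameters a few records back are not arbitrary. With strip_size_kb=64 and four base bdevs, raid5f stores one parity strip per stripe, leaving three data strips, so a full-stripe write carries 3 * 64 KiB = 192 KiB: exactly the write_unit_size=384 blocks (at 512 B each) the script computes and the bs=196608 handed to dd, and 512 such stripes account for the 100663296 bytes (96 MiB) reported. The arithmetic, spelled out (the parity accounting is inferred from the raid5f layout rather than stated in the trace):

# one parity strip per stripe leaves (4 - 1) data strips of 64 KiB each
strip_bytes=$((64 * 1024))                 # strip_size_kb=64
stripe_bytes=$(((4 - 1) * strip_bytes))    # 196608 B = 192 KiB = 384 blocks of 512 B
dd if=/dev/urandom of=/dev/nbd0 bs=$stripe_bytes count=512 oflag=direct

With the array seeded through /dev/nbd0 and the NBD device detached, removing BaseBdev1 drops raid_bdev1 to three of four members while keeping it online: the degraded state the next verify_raid_bdev_state call checks for before the spare is attached.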
00:23:40.144 21:46:00 -- bdev/bdev_raid.sh@594 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:23:40.144 21:46:00 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:23:40.144 21:46:00 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:23:40.144 21:46:00 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:23:40.144 21:46:00 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:23:40.144 21:46:00 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:23:40.144 21:46:00 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:23:40.144 21:46:00 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:23:40.144 21:46:00 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:23:40.144 21:46:00 -- bdev/bdev_raid.sh@125 -- # local tmp 00:23:40.144 21:46:00 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:40.144 21:46:00 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:40.403 21:46:00 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:23:40.403 "name": "raid_bdev1", 00:23:40.403 "uuid": "2793a7c2-f2d7-4078-8ae0-58a0747cc5b0", 00:23:40.403 "strip_size_kb": 64, 00:23:40.403 "state": "online", 00:23:40.403 "raid_level": "raid5f", 00:23:40.403 "superblock": false, 00:23:40.403 "num_base_bdevs": 4, 00:23:40.403 "num_base_bdevs_discovered": 3, 00:23:40.403 "num_base_bdevs_operational": 3, 00:23:40.403 "base_bdevs_list": [ 00:23:40.403 { 00:23:40.403 "name": null, 00:23:40.403 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:40.403 "is_configured": false, 00:23:40.403 "data_offset": 0, 00:23:40.403 "data_size": 65536 00:23:40.403 }, 00:23:40.403 { 00:23:40.403 "name": "BaseBdev2", 00:23:40.403 "uuid": "2f1ac889-df24-4329-b1fc-6ea7dd82f33d", 00:23:40.403 "is_configured": true, 00:23:40.403 "data_offset": 0, 00:23:40.403 "data_size": 65536 00:23:40.403 }, 00:23:40.403 { 00:23:40.403 "name": "BaseBdev3", 00:23:40.403 "uuid": "9d980659-c781-4bf6-a168-4c7c1ec9a858", 00:23:40.403 "is_configured": true, 00:23:40.403 "data_offset": 0, 00:23:40.403 "data_size": 65536 00:23:40.403 }, 00:23:40.403 { 00:23:40.403 "name": "BaseBdev4", 00:23:40.403 "uuid": "3501c210-1863-45b3-925c-9e1496e4efb3", 00:23:40.403 "is_configured": true, 00:23:40.403 "data_offset": 0, 00:23:40.403 "data_size": 65536 00:23:40.403 } 00:23:40.403 ] 00:23:40.403 }' 00:23:40.403 21:46:00 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:23:40.403 21:46:00 -- common/autotest_common.sh@10 -- # set +x 00:23:40.662 21:46:01 -- bdev/bdev_raid.sh@597 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:23:40.922 [2024-12-06 21:46:01.357967] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:23:40.922 [2024-12-06 21:46:01.358013] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:23:40.922 [2024-12-06 21:46:01.368171] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d00002b000 00:23:40.922 [2024-12-06 21:46:01.375015] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:23:40.922 21:46:01 -- bdev/bdev_raid.sh@598 -- # sleep 1 00:23:41.892 21:46:02 -- bdev/bdev_raid.sh@601 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:41.892 21:46:02 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:23:41.892 21:46:02 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 
00:23:41.893 21:46:02 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:23:41.893 21:46:02 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:23:41.893 21:46:02 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:42.152 21:46:02 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:42.152 21:46:02 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:23:42.152 "name": "raid_bdev1", 00:23:42.152 "uuid": "2793a7c2-f2d7-4078-8ae0-58a0747cc5b0", 00:23:42.152 "strip_size_kb": 64, 00:23:42.152 "state": "online", 00:23:42.152 "raid_level": "raid5f", 00:23:42.152 "superblock": false, 00:23:42.152 "num_base_bdevs": 4, 00:23:42.152 "num_base_bdevs_discovered": 4, 00:23:42.152 "num_base_bdevs_operational": 4, 00:23:42.152 "process": { 00:23:42.152 "type": "rebuild", 00:23:42.152 "target": "spare", 00:23:42.152 "progress": { 00:23:42.152 "blocks": 23040, 00:23:42.152 "percent": 11 00:23:42.152 } 00:23:42.152 }, 00:23:42.152 "base_bdevs_list": [ 00:23:42.152 { 00:23:42.152 "name": "spare", 00:23:42.152 "uuid": "a65d539b-0efe-5b53-a1a0-b0fd5b97b45d", 00:23:42.152 "is_configured": true, 00:23:42.152 "data_offset": 0, 00:23:42.152 "data_size": 65536 00:23:42.152 }, 00:23:42.152 { 00:23:42.152 "name": "BaseBdev2", 00:23:42.152 "uuid": "2f1ac889-df24-4329-b1fc-6ea7dd82f33d", 00:23:42.152 "is_configured": true, 00:23:42.152 "data_offset": 0, 00:23:42.152 "data_size": 65536 00:23:42.152 }, 00:23:42.152 { 00:23:42.152 "name": "BaseBdev3", 00:23:42.152 "uuid": "9d980659-c781-4bf6-a168-4c7c1ec9a858", 00:23:42.152 "is_configured": true, 00:23:42.152 "data_offset": 0, 00:23:42.152 "data_size": 65536 00:23:42.152 }, 00:23:42.153 { 00:23:42.153 "name": "BaseBdev4", 00:23:42.153 "uuid": "3501c210-1863-45b3-925c-9e1496e4efb3", 00:23:42.153 "is_configured": true, 00:23:42.153 "data_offset": 0, 00:23:42.153 "data_size": 65536 00:23:42.153 } 00:23:42.153 ] 00:23:42.153 }' 00:23:42.153 21:46:02 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:23:42.153 21:46:02 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:42.153 21:46:02 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:23:42.153 21:46:02 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:23:42.153 21:46:02 -- bdev/bdev_raid.sh@604 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:23:42.413 [2024-12-06 21:46:02.868102] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:23:42.413 [2024-12-06 21:46:02.883990] bdev_raid.c:2294:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:23:42.413 [2024-12-06 21:46:02.884268] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:42.673 21:46:02 -- bdev/bdev_raid.sh@607 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:23:42.673 21:46:02 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:23:42.673 21:46:02 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:23:42.673 21:46:02 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:23:42.673 21:46:02 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:23:42.673 21:46:02 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:23:42.673 21:46:02 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:23:42.673 21:46:02 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:23:42.673 21:46:02 -- bdev/bdev_raid.sh@124 -- # local 
num_base_bdevs_discovered 00:23:42.673 21:46:02 -- bdev/bdev_raid.sh@125 -- # local tmp 00:23:42.673 21:46:02 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:42.673 21:46:02 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:42.673 21:46:03 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:23:42.673 "name": "raid_bdev1", 00:23:42.673 "uuid": "2793a7c2-f2d7-4078-8ae0-58a0747cc5b0", 00:23:42.673 "strip_size_kb": 64, 00:23:42.673 "state": "online", 00:23:42.673 "raid_level": "raid5f", 00:23:42.673 "superblock": false, 00:23:42.673 "num_base_bdevs": 4, 00:23:42.673 "num_base_bdevs_discovered": 3, 00:23:42.673 "num_base_bdevs_operational": 3, 00:23:42.673 "base_bdevs_list": [ 00:23:42.673 { 00:23:42.673 "name": null, 00:23:42.673 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:42.673 "is_configured": false, 00:23:42.673 "data_offset": 0, 00:23:42.673 "data_size": 65536 00:23:42.673 }, 00:23:42.673 { 00:23:42.673 "name": "BaseBdev2", 00:23:42.673 "uuid": "2f1ac889-df24-4329-b1fc-6ea7dd82f33d", 00:23:42.673 "is_configured": true, 00:23:42.673 "data_offset": 0, 00:23:42.673 "data_size": 65536 00:23:42.673 }, 00:23:42.673 { 00:23:42.673 "name": "BaseBdev3", 00:23:42.673 "uuid": "9d980659-c781-4bf6-a168-4c7c1ec9a858", 00:23:42.673 "is_configured": true, 00:23:42.673 "data_offset": 0, 00:23:42.673 "data_size": 65536 00:23:42.673 }, 00:23:42.673 { 00:23:42.673 "name": "BaseBdev4", 00:23:42.673 "uuid": "3501c210-1863-45b3-925c-9e1496e4efb3", 00:23:42.673 "is_configured": true, 00:23:42.673 "data_offset": 0, 00:23:42.673 "data_size": 65536 00:23:42.673 } 00:23:42.673 ] 00:23:42.673 }' 00:23:42.673 21:46:03 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:23:42.673 21:46:03 -- common/autotest_common.sh@10 -- # set +x 00:23:42.933 21:46:03 -- bdev/bdev_raid.sh@610 -- # verify_raid_bdev_process raid_bdev1 none none 00:23:42.933 21:46:03 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:23:42.933 21:46:03 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:23:42.933 21:46:03 -- bdev/bdev_raid.sh@185 -- # local target=none 00:23:42.933 21:46:03 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:23:42.933 21:46:03 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:42.933 21:46:03 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:43.192 21:46:03 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:23:43.192 "name": "raid_bdev1", 00:23:43.192 "uuid": "2793a7c2-f2d7-4078-8ae0-58a0747cc5b0", 00:23:43.192 "strip_size_kb": 64, 00:23:43.192 "state": "online", 00:23:43.192 "raid_level": "raid5f", 00:23:43.192 "superblock": false, 00:23:43.192 "num_base_bdevs": 4, 00:23:43.192 "num_base_bdevs_discovered": 3, 00:23:43.192 "num_base_bdevs_operational": 3, 00:23:43.192 "base_bdevs_list": [ 00:23:43.192 { 00:23:43.192 "name": null, 00:23:43.193 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:43.193 "is_configured": false, 00:23:43.193 "data_offset": 0, 00:23:43.193 "data_size": 65536 00:23:43.193 }, 00:23:43.193 { 00:23:43.193 "name": "BaseBdev2", 00:23:43.193 "uuid": "2f1ac889-df24-4329-b1fc-6ea7dd82f33d", 00:23:43.193 "is_configured": true, 00:23:43.193 "data_offset": 0, 00:23:43.193 "data_size": 65536 00:23:43.193 }, 00:23:43.193 { 00:23:43.193 "name": "BaseBdev3", 00:23:43.193 "uuid": "9d980659-c781-4bf6-a168-4c7c1ec9a858", 00:23:43.193 "is_configured": true, 
00:23:43.193 "data_offset": 0, 00:23:43.193 "data_size": 65536 00:23:43.193 }, 00:23:43.193 { 00:23:43.193 "name": "BaseBdev4", 00:23:43.193 "uuid": "3501c210-1863-45b3-925c-9e1496e4efb3", 00:23:43.193 "is_configured": true, 00:23:43.193 "data_offset": 0, 00:23:43.193 "data_size": 65536 00:23:43.193 } 00:23:43.193 ] 00:23:43.193 }' 00:23:43.193 21:46:03 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:23:43.193 21:46:03 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:23:43.193 21:46:03 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:23:43.193 21:46:03 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:23:43.193 21:46:03 -- bdev/bdev_raid.sh@613 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:23:43.452 [2024-12-06 21:46:03.843687] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:23:43.452 [2024-12-06 21:46:03.843750] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:23:43.452 [2024-12-06 21:46:03.853257] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d00002b0d0 00:23:43.452 [2024-12-06 21:46:03.859995] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:23:43.452 21:46:03 -- bdev/bdev_raid.sh@614 -- # sleep 1 00:23:44.391 21:46:04 -- bdev/bdev_raid.sh@615 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:44.391 21:46:04 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:23:44.391 21:46:04 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:23:44.391 21:46:04 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:23:44.391 21:46:04 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:23:44.391 21:46:04 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:44.391 21:46:04 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:44.651 21:46:05 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:23:44.651 "name": "raid_bdev1", 00:23:44.651 "uuid": "2793a7c2-f2d7-4078-8ae0-58a0747cc5b0", 00:23:44.651 "strip_size_kb": 64, 00:23:44.651 "state": "online", 00:23:44.651 "raid_level": "raid5f", 00:23:44.651 "superblock": false, 00:23:44.651 "num_base_bdevs": 4, 00:23:44.651 "num_base_bdevs_discovered": 4, 00:23:44.651 "num_base_bdevs_operational": 4, 00:23:44.651 "process": { 00:23:44.651 "type": "rebuild", 00:23:44.651 "target": "spare", 00:23:44.651 "progress": { 00:23:44.651 "blocks": 23040, 00:23:44.651 "percent": 11 00:23:44.651 } 00:23:44.651 }, 00:23:44.651 "base_bdevs_list": [ 00:23:44.651 { 00:23:44.651 "name": "spare", 00:23:44.651 "uuid": "a65d539b-0efe-5b53-a1a0-b0fd5b97b45d", 00:23:44.651 "is_configured": true, 00:23:44.651 "data_offset": 0, 00:23:44.651 "data_size": 65536 00:23:44.651 }, 00:23:44.651 { 00:23:44.651 "name": "BaseBdev2", 00:23:44.651 "uuid": "2f1ac889-df24-4329-b1fc-6ea7dd82f33d", 00:23:44.651 "is_configured": true, 00:23:44.651 "data_offset": 0, 00:23:44.651 "data_size": 65536 00:23:44.651 }, 00:23:44.651 { 00:23:44.651 "name": "BaseBdev3", 00:23:44.651 "uuid": "9d980659-c781-4bf6-a168-4c7c1ec9a858", 00:23:44.651 "is_configured": true, 00:23:44.651 "data_offset": 0, 00:23:44.651 "data_size": 65536 00:23:44.651 }, 00:23:44.651 { 00:23:44.651 "name": "BaseBdev4", 00:23:44.651 "uuid": "3501c210-1863-45b3-925c-9e1496e4efb3", 00:23:44.651 "is_configured": true, 00:23:44.651 "data_offset": 0, 
00:23:44.651 "data_size": 65536 00:23:44.651 } 00:23:44.651 ] 00:23:44.651 }' 00:23:44.651 21:46:05 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:23:44.651 21:46:05 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:44.651 21:46:05 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:23:44.651 21:46:05 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:23:44.651 21:46:05 -- bdev/bdev_raid.sh@617 -- # '[' false = true ']' 00:23:44.651 21:46:05 -- bdev/bdev_raid.sh@642 -- # local num_base_bdevs_operational=4 00:23:44.651 21:46:05 -- bdev/bdev_raid.sh@644 -- # '[' raid5f = raid1 ']' 00:23:44.651 21:46:05 -- bdev/bdev_raid.sh@657 -- # local timeout=622 00:23:44.651 21:46:05 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:23:44.651 21:46:05 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:44.651 21:46:05 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:23:44.651 21:46:05 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:23:44.651 21:46:05 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:23:44.651 21:46:05 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:23:44.651 21:46:05 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:44.651 21:46:05 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:44.910 21:46:05 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:23:44.910 "name": "raid_bdev1", 00:23:44.910 "uuid": "2793a7c2-f2d7-4078-8ae0-58a0747cc5b0", 00:23:44.910 "strip_size_kb": 64, 00:23:44.910 "state": "online", 00:23:44.910 "raid_level": "raid5f", 00:23:44.910 "superblock": false, 00:23:44.910 "num_base_bdevs": 4, 00:23:44.910 "num_base_bdevs_discovered": 4, 00:23:44.910 "num_base_bdevs_operational": 4, 00:23:44.910 "process": { 00:23:44.910 "type": "rebuild", 00:23:44.910 "target": "spare", 00:23:44.910 "progress": { 00:23:44.910 "blocks": 26880, 00:23:44.910 "percent": 13 00:23:44.910 } 00:23:44.910 }, 00:23:44.910 "base_bdevs_list": [ 00:23:44.910 { 00:23:44.910 "name": "spare", 00:23:44.910 "uuid": "a65d539b-0efe-5b53-a1a0-b0fd5b97b45d", 00:23:44.910 "is_configured": true, 00:23:44.910 "data_offset": 0, 00:23:44.910 "data_size": 65536 00:23:44.910 }, 00:23:44.910 { 00:23:44.910 "name": "BaseBdev2", 00:23:44.910 "uuid": "2f1ac889-df24-4329-b1fc-6ea7dd82f33d", 00:23:44.910 "is_configured": true, 00:23:44.910 "data_offset": 0, 00:23:44.910 "data_size": 65536 00:23:44.910 }, 00:23:44.910 { 00:23:44.910 "name": "BaseBdev3", 00:23:44.910 "uuid": "9d980659-c781-4bf6-a168-4c7c1ec9a858", 00:23:44.910 "is_configured": true, 00:23:44.910 "data_offset": 0, 00:23:44.910 "data_size": 65536 00:23:44.910 }, 00:23:44.910 { 00:23:44.910 "name": "BaseBdev4", 00:23:44.910 "uuid": "3501c210-1863-45b3-925c-9e1496e4efb3", 00:23:44.910 "is_configured": true, 00:23:44.910 "data_offset": 0, 00:23:44.910 "data_size": 65536 00:23:44.910 } 00:23:44.910 ] 00:23:44.910 }' 00:23:44.910 21:46:05 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:23:44.910 21:46:05 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:44.910 21:46:05 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:23:44.910 21:46:05 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:23:44.910 21:46:05 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:23:45.843 21:46:06 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:23:45.844 21:46:06 -- bdev/bdev_raid.sh@659 -- 
# verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:45.844 21:46:06 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:23:45.844 21:46:06 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:23:45.844 21:46:06 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:23:45.844 21:46:06 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:23:45.844 21:46:06 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:45.844 21:46:06 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:46.102 21:46:06 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:23:46.102 "name": "raid_bdev1", 00:23:46.102 "uuid": "2793a7c2-f2d7-4078-8ae0-58a0747cc5b0", 00:23:46.102 "strip_size_kb": 64, 00:23:46.102 "state": "online", 00:23:46.102 "raid_level": "raid5f", 00:23:46.102 "superblock": false, 00:23:46.102 "num_base_bdevs": 4, 00:23:46.102 "num_base_bdevs_discovered": 4, 00:23:46.102 "num_base_bdevs_operational": 4, 00:23:46.102 "process": { 00:23:46.102 "type": "rebuild", 00:23:46.102 "target": "spare", 00:23:46.102 "progress": { 00:23:46.102 "blocks": 49920, 00:23:46.102 "percent": 25 00:23:46.102 } 00:23:46.102 }, 00:23:46.102 "base_bdevs_list": [ 00:23:46.102 { 00:23:46.102 "name": "spare", 00:23:46.102 "uuid": "a65d539b-0efe-5b53-a1a0-b0fd5b97b45d", 00:23:46.102 "is_configured": true, 00:23:46.102 "data_offset": 0, 00:23:46.102 "data_size": 65536 00:23:46.102 }, 00:23:46.102 { 00:23:46.102 "name": "BaseBdev2", 00:23:46.102 "uuid": "2f1ac889-df24-4329-b1fc-6ea7dd82f33d", 00:23:46.102 "is_configured": true, 00:23:46.102 "data_offset": 0, 00:23:46.102 "data_size": 65536 00:23:46.102 }, 00:23:46.102 { 00:23:46.102 "name": "BaseBdev3", 00:23:46.103 "uuid": "9d980659-c781-4bf6-a168-4c7c1ec9a858", 00:23:46.103 "is_configured": true, 00:23:46.103 "data_offset": 0, 00:23:46.103 "data_size": 65536 00:23:46.103 }, 00:23:46.103 { 00:23:46.103 "name": "BaseBdev4", 00:23:46.103 "uuid": "3501c210-1863-45b3-925c-9e1496e4efb3", 00:23:46.103 "is_configured": true, 00:23:46.103 "data_offset": 0, 00:23:46.103 "data_size": 65536 00:23:46.103 } 00:23:46.103 ] 00:23:46.103 }' 00:23:46.103 21:46:06 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:23:46.103 21:46:06 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:46.103 21:46:06 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:23:46.103 21:46:06 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:23:46.103 21:46:06 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:23:47.036 21:46:07 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:23:47.037 21:46:07 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:47.037 21:46:07 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:23:47.037 21:46:07 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:23:47.037 21:46:07 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:23:47.037 21:46:07 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:23:47.037 21:46:07 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:47.037 21:46:07 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:47.295 21:46:07 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:23:47.295 "name": "raid_bdev1", 00:23:47.295 "uuid": "2793a7c2-f2d7-4078-8ae0-58a0747cc5b0", 00:23:47.295 "strip_size_kb": 64, 00:23:47.295 "state": "online", 
00:23:47.295 "raid_level": "raid5f", 00:23:47.295 "superblock": false, 00:23:47.295 "num_base_bdevs": 4, 00:23:47.295 "num_base_bdevs_discovered": 4, 00:23:47.295 "num_base_bdevs_operational": 4, 00:23:47.295 "process": { 00:23:47.295 "type": "rebuild", 00:23:47.295 "target": "spare", 00:23:47.295 "progress": { 00:23:47.295 "blocks": 72960, 00:23:47.295 "percent": 37 00:23:47.295 } 00:23:47.295 }, 00:23:47.295 "base_bdevs_list": [ 00:23:47.295 { 00:23:47.295 "name": "spare", 00:23:47.295 "uuid": "a65d539b-0efe-5b53-a1a0-b0fd5b97b45d", 00:23:47.295 "is_configured": true, 00:23:47.295 "data_offset": 0, 00:23:47.295 "data_size": 65536 00:23:47.295 }, 00:23:47.295 { 00:23:47.295 "name": "BaseBdev2", 00:23:47.295 "uuid": "2f1ac889-df24-4329-b1fc-6ea7dd82f33d", 00:23:47.296 "is_configured": true, 00:23:47.296 "data_offset": 0, 00:23:47.296 "data_size": 65536 00:23:47.296 }, 00:23:47.296 { 00:23:47.296 "name": "BaseBdev3", 00:23:47.296 "uuid": "9d980659-c781-4bf6-a168-4c7c1ec9a858", 00:23:47.296 "is_configured": true, 00:23:47.296 "data_offset": 0, 00:23:47.296 "data_size": 65536 00:23:47.296 }, 00:23:47.296 { 00:23:47.296 "name": "BaseBdev4", 00:23:47.296 "uuid": "3501c210-1863-45b3-925c-9e1496e4efb3", 00:23:47.296 "is_configured": true, 00:23:47.296 "data_offset": 0, 00:23:47.296 "data_size": 65536 00:23:47.296 } 00:23:47.296 ] 00:23:47.296 }' 00:23:47.296 21:46:07 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:23:47.296 21:46:07 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:47.296 21:46:07 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:23:47.296 21:46:07 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:23:47.296 21:46:07 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:23:48.672 21:46:08 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:23:48.672 21:46:08 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:48.672 21:46:08 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:23:48.672 21:46:08 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:23:48.672 21:46:08 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:23:48.672 21:46:08 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:23:48.672 21:46:08 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:48.672 21:46:08 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:48.672 21:46:08 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:23:48.672 "name": "raid_bdev1", 00:23:48.672 "uuid": "2793a7c2-f2d7-4078-8ae0-58a0747cc5b0", 00:23:48.672 "strip_size_kb": 64, 00:23:48.672 "state": "online", 00:23:48.672 "raid_level": "raid5f", 00:23:48.672 "superblock": false, 00:23:48.672 "num_base_bdevs": 4, 00:23:48.672 "num_base_bdevs_discovered": 4, 00:23:48.672 "num_base_bdevs_operational": 4, 00:23:48.672 "process": { 00:23:48.672 "type": "rebuild", 00:23:48.672 "target": "spare", 00:23:48.672 "progress": { 00:23:48.672 "blocks": 96000, 00:23:48.672 "percent": 48 00:23:48.672 } 00:23:48.672 }, 00:23:48.672 "base_bdevs_list": [ 00:23:48.672 { 00:23:48.672 "name": "spare", 00:23:48.672 "uuid": "a65d539b-0efe-5b53-a1a0-b0fd5b97b45d", 00:23:48.672 "is_configured": true, 00:23:48.672 "data_offset": 0, 00:23:48.672 "data_size": 65536 00:23:48.672 }, 00:23:48.672 { 00:23:48.672 "name": "BaseBdev2", 00:23:48.672 "uuid": "2f1ac889-df24-4329-b1fc-6ea7dd82f33d", 00:23:48.672 "is_configured": true, 00:23:48.672 "data_offset": 0, 
00:23:48.672 "data_size": 65536 00:23:48.672 }, 00:23:48.672 { 00:23:48.672 "name": "BaseBdev3", 00:23:48.672 "uuid": "9d980659-c781-4bf6-a168-4c7c1ec9a858", 00:23:48.672 "is_configured": true, 00:23:48.672 "data_offset": 0, 00:23:48.672 "data_size": 65536 00:23:48.672 }, 00:23:48.672 { 00:23:48.672 "name": "BaseBdev4", 00:23:48.672 "uuid": "3501c210-1863-45b3-925c-9e1496e4efb3", 00:23:48.672 "is_configured": true, 00:23:48.672 "data_offset": 0, 00:23:48.672 "data_size": 65536 00:23:48.672 } 00:23:48.672 ] 00:23:48.672 }' 00:23:48.672 21:46:08 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:23:48.672 21:46:08 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:48.672 21:46:08 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:23:48.672 21:46:09 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:23:48.672 21:46:09 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:23:49.606 21:46:10 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:23:49.606 21:46:10 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:49.606 21:46:10 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:23:49.606 21:46:10 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:23:49.606 21:46:10 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:23:49.606 21:46:10 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:23:49.606 21:46:10 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:49.606 21:46:10 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:49.864 21:46:10 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:23:49.864 "name": "raid_bdev1", 00:23:49.864 "uuid": "2793a7c2-f2d7-4078-8ae0-58a0747cc5b0", 00:23:49.864 "strip_size_kb": 64, 00:23:49.864 "state": "online", 00:23:49.864 "raid_level": "raid5f", 00:23:49.864 "superblock": false, 00:23:49.864 "num_base_bdevs": 4, 00:23:49.864 "num_base_bdevs_discovered": 4, 00:23:49.864 "num_base_bdevs_operational": 4, 00:23:49.864 "process": { 00:23:49.864 "type": "rebuild", 00:23:49.864 "target": "spare", 00:23:49.864 "progress": { 00:23:49.864 "blocks": 120960, 00:23:49.864 "percent": 61 00:23:49.864 } 00:23:49.864 }, 00:23:49.864 "base_bdevs_list": [ 00:23:49.864 { 00:23:49.864 "name": "spare", 00:23:49.864 "uuid": "a65d539b-0efe-5b53-a1a0-b0fd5b97b45d", 00:23:49.864 "is_configured": true, 00:23:49.864 "data_offset": 0, 00:23:49.864 "data_size": 65536 00:23:49.864 }, 00:23:49.864 { 00:23:49.864 "name": "BaseBdev2", 00:23:49.864 "uuid": "2f1ac889-df24-4329-b1fc-6ea7dd82f33d", 00:23:49.864 "is_configured": true, 00:23:49.864 "data_offset": 0, 00:23:49.864 "data_size": 65536 00:23:49.864 }, 00:23:49.864 { 00:23:49.864 "name": "BaseBdev3", 00:23:49.864 "uuid": "9d980659-c781-4bf6-a168-4c7c1ec9a858", 00:23:49.864 "is_configured": true, 00:23:49.864 "data_offset": 0, 00:23:49.864 "data_size": 65536 00:23:49.864 }, 00:23:49.864 { 00:23:49.864 "name": "BaseBdev4", 00:23:49.864 "uuid": "3501c210-1863-45b3-925c-9e1496e4efb3", 00:23:49.864 "is_configured": true, 00:23:49.864 "data_offset": 0, 00:23:49.864 "data_size": 65536 00:23:49.864 } 00:23:49.864 ] 00:23:49.864 }' 00:23:49.864 21:46:10 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:23:49.864 21:46:10 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:49.864 21:46:10 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:23:49.864 21:46:10 -- bdev/bdev_raid.sh@191 -- # [[ 
spare == \s\p\a\r\e ]] 00:23:49.864 21:46:10 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:23:50.800 21:46:11 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:23:50.800 21:46:11 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:50.800 21:46:11 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:23:50.800 21:46:11 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:23:50.800 21:46:11 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:23:50.800 21:46:11 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:23:50.800 21:46:11 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:50.800 21:46:11 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:51.059 21:46:11 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:23:51.059 "name": "raid_bdev1", 00:23:51.059 "uuid": "2793a7c2-f2d7-4078-8ae0-58a0747cc5b0", 00:23:51.059 "strip_size_kb": 64, 00:23:51.059 "state": "online", 00:23:51.059 "raid_level": "raid5f", 00:23:51.059 "superblock": false, 00:23:51.059 "num_base_bdevs": 4, 00:23:51.059 "num_base_bdevs_discovered": 4, 00:23:51.059 "num_base_bdevs_operational": 4, 00:23:51.059 "process": { 00:23:51.059 "type": "rebuild", 00:23:51.059 "target": "spare", 00:23:51.059 "progress": { 00:23:51.059 "blocks": 144000, 00:23:51.059 "percent": 73 00:23:51.059 } 00:23:51.059 }, 00:23:51.059 "base_bdevs_list": [ 00:23:51.059 { 00:23:51.059 "name": "spare", 00:23:51.059 "uuid": "a65d539b-0efe-5b53-a1a0-b0fd5b97b45d", 00:23:51.059 "is_configured": true, 00:23:51.059 "data_offset": 0, 00:23:51.059 "data_size": 65536 00:23:51.059 }, 00:23:51.059 { 00:23:51.059 "name": "BaseBdev2", 00:23:51.059 "uuid": "2f1ac889-df24-4329-b1fc-6ea7dd82f33d", 00:23:51.059 "is_configured": true, 00:23:51.059 "data_offset": 0, 00:23:51.059 "data_size": 65536 00:23:51.059 }, 00:23:51.059 { 00:23:51.059 "name": "BaseBdev3", 00:23:51.059 "uuid": "9d980659-c781-4bf6-a168-4c7c1ec9a858", 00:23:51.059 "is_configured": true, 00:23:51.059 "data_offset": 0, 00:23:51.059 "data_size": 65536 00:23:51.059 }, 00:23:51.059 { 00:23:51.059 "name": "BaseBdev4", 00:23:51.059 "uuid": "3501c210-1863-45b3-925c-9e1496e4efb3", 00:23:51.059 "is_configured": true, 00:23:51.059 "data_offset": 0, 00:23:51.059 "data_size": 65536 00:23:51.059 } 00:23:51.059 ] 00:23:51.059 }' 00:23:51.059 21:46:11 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:23:51.059 21:46:11 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:51.059 21:46:11 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:23:51.059 21:46:11 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:23:51.059 21:46:11 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:23:52.439 21:46:12 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:23:52.439 21:46:12 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:52.439 21:46:12 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:23:52.439 21:46:12 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:23:52.439 21:46:12 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:23:52.439 21:46:12 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:23:52.439 21:46:12 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:52.439 21:46:12 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:52.439 21:46:12 -- 
bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:23:52.439 "name": "raid_bdev1", 00:23:52.439 "uuid": "2793a7c2-f2d7-4078-8ae0-58a0747cc5b0", 00:23:52.439 "strip_size_kb": 64, 00:23:52.439 "state": "online", 00:23:52.439 "raid_level": "raid5f", 00:23:52.439 "superblock": false, 00:23:52.439 "num_base_bdevs": 4, 00:23:52.439 "num_base_bdevs_discovered": 4, 00:23:52.439 "num_base_bdevs_operational": 4, 00:23:52.439 "process": { 00:23:52.439 "type": "rebuild", 00:23:52.439 "target": "spare", 00:23:52.439 "progress": { 00:23:52.439 "blocks": 168960, 00:23:52.439 "percent": 85 00:23:52.439 } 00:23:52.439 }, 00:23:52.439 "base_bdevs_list": [ 00:23:52.439 { 00:23:52.439 "name": "spare", 00:23:52.439 "uuid": "a65d539b-0efe-5b53-a1a0-b0fd5b97b45d", 00:23:52.439 "is_configured": true, 00:23:52.439 "data_offset": 0, 00:23:52.439 "data_size": 65536 00:23:52.439 }, 00:23:52.439 { 00:23:52.439 "name": "BaseBdev2", 00:23:52.439 "uuid": "2f1ac889-df24-4329-b1fc-6ea7dd82f33d", 00:23:52.439 "is_configured": true, 00:23:52.439 "data_offset": 0, 00:23:52.439 "data_size": 65536 00:23:52.439 }, 00:23:52.439 { 00:23:52.439 "name": "BaseBdev3", 00:23:52.439 "uuid": "9d980659-c781-4bf6-a168-4c7c1ec9a858", 00:23:52.439 "is_configured": true, 00:23:52.439 "data_offset": 0, 00:23:52.439 "data_size": 65536 00:23:52.439 }, 00:23:52.439 { 00:23:52.439 "name": "BaseBdev4", 00:23:52.439 "uuid": "3501c210-1863-45b3-925c-9e1496e4efb3", 00:23:52.439 "is_configured": true, 00:23:52.439 "data_offset": 0, 00:23:52.439 "data_size": 65536 00:23:52.439 } 00:23:52.439 ] 00:23:52.439 }' 00:23:52.439 21:46:12 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:23:52.439 21:46:12 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:52.439 21:46:12 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:23:52.439 21:46:12 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:23:52.439 21:46:12 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:23:53.376 21:46:13 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:23:53.376 21:46:13 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:53.376 21:46:13 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:23:53.376 21:46:13 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:23:53.376 21:46:13 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:23:53.376 21:46:13 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:23:53.376 21:46:13 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:53.376 21:46:13 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:53.635 21:46:14 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:23:53.635 "name": "raid_bdev1", 00:23:53.635 "uuid": "2793a7c2-f2d7-4078-8ae0-58a0747cc5b0", 00:23:53.635 "strip_size_kb": 64, 00:23:53.635 "state": "online", 00:23:53.635 "raid_level": "raid5f", 00:23:53.635 "superblock": false, 00:23:53.635 "num_base_bdevs": 4, 00:23:53.635 "num_base_bdevs_discovered": 4, 00:23:53.635 "num_base_bdevs_operational": 4, 00:23:53.635 "process": { 00:23:53.635 "type": "rebuild", 00:23:53.635 "target": "spare", 00:23:53.635 "progress": { 00:23:53.635 "blocks": 192000, 00:23:53.635 "percent": 97 00:23:53.635 } 00:23:53.635 }, 00:23:53.635 "base_bdevs_list": [ 00:23:53.635 { 00:23:53.635 "name": "spare", 00:23:53.635 "uuid": "a65d539b-0efe-5b53-a1a0-b0fd5b97b45d", 00:23:53.635 "is_configured": true, 00:23:53.635 "data_offset": 0, 00:23:53.635 
"data_size": 65536 00:23:53.635 }, 00:23:53.635 { 00:23:53.635 "name": "BaseBdev2", 00:23:53.635 "uuid": "2f1ac889-df24-4329-b1fc-6ea7dd82f33d", 00:23:53.635 "is_configured": true, 00:23:53.635 "data_offset": 0, 00:23:53.635 "data_size": 65536 00:23:53.635 }, 00:23:53.635 { 00:23:53.635 "name": "BaseBdev3", 00:23:53.635 "uuid": "9d980659-c781-4bf6-a168-4c7c1ec9a858", 00:23:53.635 "is_configured": true, 00:23:53.635 "data_offset": 0, 00:23:53.635 "data_size": 65536 00:23:53.635 }, 00:23:53.635 { 00:23:53.635 "name": "BaseBdev4", 00:23:53.635 "uuid": "3501c210-1863-45b3-925c-9e1496e4efb3", 00:23:53.635 "is_configured": true, 00:23:53.635 "data_offset": 0, 00:23:53.635 "data_size": 65536 00:23:53.635 } 00:23:53.635 ] 00:23:53.635 }' 00:23:53.635 21:46:14 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:23:53.635 21:46:14 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:53.635 21:46:14 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:23:53.635 21:46:14 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:23:53.635 21:46:14 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:23:53.893 [2024-12-06 21:46:14.220367] bdev_raid.c:2568:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:23:53.893 [2024-12-06 21:46:14.220473] bdev_raid.c:2285:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:23:53.893 [2024-12-06 21:46:14.220539] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:54.829 21:46:15 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:23:54.829 21:46:15 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:54.829 21:46:15 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:23:54.829 21:46:15 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:23:54.829 21:46:15 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:23:54.829 21:46:15 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:23:54.829 21:46:15 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:54.829 21:46:15 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:54.829 21:46:15 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:23:54.829 "name": "raid_bdev1", 00:23:54.829 "uuid": "2793a7c2-f2d7-4078-8ae0-58a0747cc5b0", 00:23:54.829 "strip_size_kb": 64, 00:23:54.829 "state": "online", 00:23:54.829 "raid_level": "raid5f", 00:23:54.829 "superblock": false, 00:23:54.829 "num_base_bdevs": 4, 00:23:54.829 "num_base_bdevs_discovered": 4, 00:23:54.829 "num_base_bdevs_operational": 4, 00:23:54.829 "base_bdevs_list": [ 00:23:54.829 { 00:23:54.829 "name": "spare", 00:23:54.829 "uuid": "a65d539b-0efe-5b53-a1a0-b0fd5b97b45d", 00:23:54.829 "is_configured": true, 00:23:54.829 "data_offset": 0, 00:23:54.829 "data_size": 65536 00:23:54.829 }, 00:23:54.829 { 00:23:54.829 "name": "BaseBdev2", 00:23:54.829 "uuid": "2f1ac889-df24-4329-b1fc-6ea7dd82f33d", 00:23:54.829 "is_configured": true, 00:23:54.829 "data_offset": 0, 00:23:54.829 "data_size": 65536 00:23:54.829 }, 00:23:54.829 { 00:23:54.829 "name": "BaseBdev3", 00:23:54.829 "uuid": "9d980659-c781-4bf6-a168-4c7c1ec9a858", 00:23:54.829 "is_configured": true, 00:23:54.829 "data_offset": 0, 00:23:54.829 "data_size": 65536 00:23:54.829 }, 00:23:54.829 { 00:23:54.829 "name": "BaseBdev4", 00:23:54.829 "uuid": "3501c210-1863-45b3-925c-9e1496e4efb3", 00:23:54.829 "is_configured": true, 00:23:54.829 "data_offset": 0, 
00:23:54.829 "data_size": 65536 00:23:54.829 } 00:23:54.829 ] 00:23:54.829 }' 00:23:54.829 21:46:15 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:23:54.829 21:46:15 -- bdev/bdev_raid.sh@190 -- # [[ none == \r\e\b\u\i\l\d ]] 00:23:54.829 21:46:15 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:23:54.829 21:46:15 -- bdev/bdev_raid.sh@191 -- # [[ none == \s\p\a\r\e ]] 00:23:54.829 21:46:15 -- bdev/bdev_raid.sh@660 -- # break 00:23:54.829 21:46:15 -- bdev/bdev_raid.sh@666 -- # verify_raid_bdev_process raid_bdev1 none none 00:23:54.829 21:46:15 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:23:54.829 21:46:15 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:23:54.829 21:46:15 -- bdev/bdev_raid.sh@185 -- # local target=none 00:23:54.829 21:46:15 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:23:54.829 21:46:15 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:54.829 21:46:15 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:55.087 21:46:15 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:23:55.087 "name": "raid_bdev1", 00:23:55.087 "uuid": "2793a7c2-f2d7-4078-8ae0-58a0747cc5b0", 00:23:55.087 "strip_size_kb": 64, 00:23:55.087 "state": "online", 00:23:55.087 "raid_level": "raid5f", 00:23:55.087 "superblock": false, 00:23:55.087 "num_base_bdevs": 4, 00:23:55.087 "num_base_bdevs_discovered": 4, 00:23:55.087 "num_base_bdevs_operational": 4, 00:23:55.087 "base_bdevs_list": [ 00:23:55.087 { 00:23:55.087 "name": "spare", 00:23:55.087 "uuid": "a65d539b-0efe-5b53-a1a0-b0fd5b97b45d", 00:23:55.087 "is_configured": true, 00:23:55.087 "data_offset": 0, 00:23:55.087 "data_size": 65536 00:23:55.087 }, 00:23:55.087 { 00:23:55.087 "name": "BaseBdev2", 00:23:55.087 "uuid": "2f1ac889-df24-4329-b1fc-6ea7dd82f33d", 00:23:55.087 "is_configured": true, 00:23:55.087 "data_offset": 0, 00:23:55.087 "data_size": 65536 00:23:55.087 }, 00:23:55.087 { 00:23:55.087 "name": "BaseBdev3", 00:23:55.087 "uuid": "9d980659-c781-4bf6-a168-4c7c1ec9a858", 00:23:55.087 "is_configured": true, 00:23:55.087 "data_offset": 0, 00:23:55.087 "data_size": 65536 00:23:55.087 }, 00:23:55.087 { 00:23:55.087 "name": "BaseBdev4", 00:23:55.087 "uuid": "3501c210-1863-45b3-925c-9e1496e4efb3", 00:23:55.087 "is_configured": true, 00:23:55.087 "data_offset": 0, 00:23:55.087 "data_size": 65536 00:23:55.087 } 00:23:55.087 ] 00:23:55.087 }' 00:23:55.087 21:46:15 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:23:55.087 21:46:15 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:23:55.087 21:46:15 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:23:55.087 21:46:15 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:23:55.087 21:46:15 -- bdev/bdev_raid.sh@667 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:23:55.087 21:46:15 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:23:55.087 21:46:15 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:23:55.087 21:46:15 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:23:55.087 21:46:15 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:23:55.087 21:46:15 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:23:55.087 21:46:15 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:23:55.087 21:46:15 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:23:55.087 21:46:15 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 
00:23:55.087 21:46:15 -- bdev/bdev_raid.sh@125 -- # local tmp 00:23:55.087 21:46:15 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:55.087 21:46:15 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:55.345 21:46:15 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:23:55.345 "name": "raid_bdev1", 00:23:55.345 "uuid": "2793a7c2-f2d7-4078-8ae0-58a0747cc5b0", 00:23:55.345 "strip_size_kb": 64, 00:23:55.345 "state": "online", 00:23:55.345 "raid_level": "raid5f", 00:23:55.345 "superblock": false, 00:23:55.345 "num_base_bdevs": 4, 00:23:55.345 "num_base_bdevs_discovered": 4, 00:23:55.345 "num_base_bdevs_operational": 4, 00:23:55.345 "base_bdevs_list": [ 00:23:55.345 { 00:23:55.345 "name": "spare", 00:23:55.345 "uuid": "a65d539b-0efe-5b53-a1a0-b0fd5b97b45d", 00:23:55.345 "is_configured": true, 00:23:55.345 "data_offset": 0, 00:23:55.345 "data_size": 65536 00:23:55.345 }, 00:23:55.345 { 00:23:55.345 "name": "BaseBdev2", 00:23:55.345 "uuid": "2f1ac889-df24-4329-b1fc-6ea7dd82f33d", 00:23:55.345 "is_configured": true, 00:23:55.345 "data_offset": 0, 00:23:55.345 "data_size": 65536 00:23:55.345 }, 00:23:55.345 { 00:23:55.345 "name": "BaseBdev3", 00:23:55.345 "uuid": "9d980659-c781-4bf6-a168-4c7c1ec9a858", 00:23:55.345 "is_configured": true, 00:23:55.345 "data_offset": 0, 00:23:55.345 "data_size": 65536 00:23:55.345 }, 00:23:55.345 { 00:23:55.345 "name": "BaseBdev4", 00:23:55.345 "uuid": "3501c210-1863-45b3-925c-9e1496e4efb3", 00:23:55.345 "is_configured": true, 00:23:55.345 "data_offset": 0, 00:23:55.345 "data_size": 65536 00:23:55.345 } 00:23:55.345 ] 00:23:55.345 }' 00:23:55.345 21:46:15 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:23:55.345 21:46:15 -- common/autotest_common.sh@10 -- # set +x 00:23:55.604 21:46:16 -- bdev/bdev_raid.sh@670 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:23:55.862 [2024-12-06 21:46:16.248614] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:23:55.862 [2024-12-06 21:46:16.248823] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:23:55.862 [2024-12-06 21:46:16.248943] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:23:55.862 [2024-12-06 21:46:16.249043] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:23:55.862 [2024-12-06 21:46:16.249060] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x516000008d80 name raid_bdev1, state offline 00:23:55.862 21:46:16 -- bdev/bdev_raid.sh@671 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:55.862 21:46:16 -- bdev/bdev_raid.sh@671 -- # jq length 00:23:56.120 21:46:16 -- bdev/bdev_raid.sh@671 -- # [[ 0 == 0 ]] 00:23:56.120 21:46:16 -- bdev/bdev_raid.sh@673 -- # '[' false = true ']' 00:23:56.120 21:46:16 -- bdev/bdev_raid.sh@687 -- # nbd_start_disks /var/tmp/spdk-raid.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:23:56.120 21:46:16 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:23:56.121 21:46:16 -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:23:56.121 21:46:16 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:23:56.121 21:46:16 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:23:56.121 21:46:16 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:23:56.121 21:46:16 -- 
bdev/nbd_common.sh@12 -- # local i 00:23:56.121 21:46:16 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:23:56.121 21:46:16 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:23:56.121 21:46:16 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:23:56.379 /dev/nbd0 00:23:56.379 21:46:16 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:23:56.379 21:46:16 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:23:56.379 21:46:16 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:23:56.379 21:46:16 -- common/autotest_common.sh@867 -- # local i 00:23:56.379 21:46:16 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:23:56.379 21:46:16 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:23:56.379 21:46:16 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:23:56.379 21:46:16 -- common/autotest_common.sh@871 -- # break 00:23:56.379 21:46:16 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:23:56.379 21:46:16 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:23:56.379 21:46:16 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:23:56.379 1+0 records in 00:23:56.379 1+0 records out 00:23:56.379 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000230572 s, 17.8 MB/s 00:23:56.379 21:46:16 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:56.379 21:46:16 -- common/autotest_common.sh@884 -- # size=4096 00:23:56.379 21:46:16 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:56.379 21:46:16 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:23:56.379 21:46:16 -- common/autotest_common.sh@887 -- # return 0 00:23:56.379 21:46:16 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:23:56.379 21:46:16 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:23:56.379 21:46:16 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd1 00:23:56.638 /dev/nbd1 00:23:56.638 21:46:17 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:23:56.638 21:46:17 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:23:56.638 21:46:17 -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:23:56.638 21:46:17 -- common/autotest_common.sh@867 -- # local i 00:23:56.638 21:46:17 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:23:56.638 21:46:17 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:23:56.638 21:46:17 -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:23:56.638 21:46:17 -- common/autotest_common.sh@871 -- # break 00:23:56.638 21:46:17 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:23:56.638 21:46:17 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:23:56.638 21:46:17 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:23:56.638 1+0 records in 00:23:56.638 1+0 records out 00:23:56.638 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000309307 s, 13.2 MB/s 00:23:56.638 21:46:17 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:56.638 21:46:17 -- common/autotest_common.sh@884 -- # size=4096 00:23:56.638 21:46:17 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:56.638 21:46:17 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:23:56.638 21:46:17 -- 
common/autotest_common.sh@887 -- # return 0 00:23:56.638 21:46:17 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:23:56.638 21:46:17 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:23:56.638 21:46:17 -- bdev/bdev_raid.sh@688 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:23:56.897 21:46:17 -- bdev/bdev_raid.sh@689 -- # nbd_stop_disks /var/tmp/spdk-raid.sock '/dev/nbd0 /dev/nbd1' 00:23:56.897 21:46:17 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:23:56.897 21:46:17 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:23:56.897 21:46:17 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:23:56.897 21:46:17 -- bdev/nbd_common.sh@51 -- # local i 00:23:56.897 21:46:17 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:23:56.897 21:46:17 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:23:57.156 21:46:17 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:23:57.156 21:46:17 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:23:57.156 21:46:17 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:23:57.156 21:46:17 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:23:57.156 21:46:17 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:23:57.156 21:46:17 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:23:57.156 21:46:17 -- bdev/nbd_common.sh@41 -- # break 00:23:57.156 21:46:17 -- bdev/nbd_common.sh@45 -- # return 0 00:23:57.156 21:46:17 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:23:57.156 21:46:17 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:23:57.415 21:46:17 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:23:57.415 21:46:17 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:23:57.415 21:46:17 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:23:57.415 21:46:17 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:23:57.415 21:46:17 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:23:57.415 21:46:17 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:23:57.415 21:46:17 -- bdev/nbd_common.sh@41 -- # break 00:23:57.415 21:46:17 -- bdev/nbd_common.sh@45 -- # return 0 00:23:57.415 21:46:17 -- bdev/bdev_raid.sh@692 -- # '[' false = true ']' 00:23:57.415 21:46:17 -- bdev/bdev_raid.sh@709 -- # killprocess 85753 00:23:57.415 21:46:17 -- common/autotest_common.sh@936 -- # '[' -z 85753 ']' 00:23:57.415 21:46:17 -- common/autotest_common.sh@940 -- # kill -0 85753 00:23:57.415 21:46:17 -- common/autotest_common.sh@941 -- # uname 00:23:57.415 21:46:17 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:23:57.415 21:46:17 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 85753 00:23:57.415 21:46:17 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:23:57.415 killing process with pid 85753 00:23:57.415 21:46:17 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:23:57.415 21:46:17 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 85753' 00:23:57.415 21:46:17 -- common/autotest_common.sh@955 -- # kill 85753 00:23:57.415 Received shutdown signal, test time was about 60.000000 seconds 00:23:57.415 00:23:57.415 Latency(us) 00:23:57.415 [2024-12-06T21:46:17.912Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:57.415 [2024-12-06T21:46:17.912Z] =================================================================================================================== 00:23:57.415 
[2024-12-06T21:46:17.912Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:57.415 [2024-12-06 21:46:17.765355] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:23:57.415 21:46:17 -- common/autotest_common.sh@960 -- # wait 85753 00:23:57.674 [2024-12-06 21:46:18.074702] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:23:58.613 21:46:18 -- bdev/bdev_raid.sh@711 -- # return 0 00:23:58.613 00:23:58.613 real 0m23.102s 00:23:58.613 user 0m31.076s 00:23:58.613 sys 0m2.690s 00:23:58.613 ************************************ 00:23:58.613 END TEST raid5f_rebuild_test 00:23:58.613 ************************************ 00:23:58.613 21:46:18 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:23:58.613 21:46:18 -- common/autotest_common.sh@10 -- # set +x 00:23:58.613 21:46:19 -- bdev/bdev_raid.sh@749 -- # run_test raid5f_rebuild_test_sb raid_rebuild_test raid5f 4 true false 00:23:58.613 21:46:19 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:23:58.613 21:46:19 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:23:58.613 21:46:19 -- common/autotest_common.sh@10 -- # set +x 00:23:58.613 ************************************ 00:23:58.613 START TEST raid5f_rebuild_test_sb 00:23:58.613 ************************************ 00:23:58.613 21:46:19 -- common/autotest_common.sh@1114 -- # raid_rebuild_test raid5f 4 true false 00:23:58.613 21:46:19 -- bdev/bdev_raid.sh@517 -- # local raid_level=raid5f 00:23:58.613 21:46:19 -- bdev/bdev_raid.sh@518 -- # local num_base_bdevs=4 00:23:58.613 21:46:19 -- bdev/bdev_raid.sh@519 -- # local superblock=true 00:23:58.613 21:46:19 -- bdev/bdev_raid.sh@520 -- # local background_io=false 00:23:58.613 21:46:19 -- bdev/bdev_raid.sh@521 -- # (( i = 1 )) 00:23:58.613 21:46:19 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:23:58.613 21:46:19 -- bdev/bdev_raid.sh@523 -- # echo BaseBdev1 00:23:58.613 21:46:19 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:23:58.613 21:46:19 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:23:58.613 21:46:19 -- bdev/bdev_raid.sh@523 -- # echo BaseBdev2 00:23:58.613 21:46:19 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:23:58.613 21:46:19 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:23:58.613 21:46:19 -- bdev/bdev_raid.sh@523 -- # echo BaseBdev3 00:23:58.613 21:46:19 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:23:58.613 21:46:19 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:23:58.613 21:46:19 -- bdev/bdev_raid.sh@523 -- # echo BaseBdev4 00:23:58.613 21:46:19 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:23:58.613 21:46:19 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:23:58.613 21:46:19 -- bdev/bdev_raid.sh@521 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:23:58.613 21:46:19 -- bdev/bdev_raid.sh@521 -- # local base_bdevs 00:23:58.613 21:46:19 -- bdev/bdev_raid.sh@522 -- # local raid_bdev_name=raid_bdev1 00:23:58.613 21:46:19 -- bdev/bdev_raid.sh@523 -- # local strip_size 00:23:58.613 21:46:19 -- bdev/bdev_raid.sh@524 -- # local create_arg 00:23:58.613 21:46:19 -- bdev/bdev_raid.sh@525 -- # local raid_bdev_size 00:23:58.613 21:46:19 -- bdev/bdev_raid.sh@526 -- # local data_offset 00:23:58.613 21:46:19 -- bdev/bdev_raid.sh@528 -- # '[' raid5f '!=' raid1 ']' 00:23:58.613 21:46:19 -- bdev/bdev_raid.sh@529 -- # '[' false = true ']' 00:23:58.613 21:46:19 -- bdev/bdev_raid.sh@533 -- # strip_size=64 00:23:58.613 21:46:19 -- bdev/bdev_raid.sh@534 -- # create_arg+=' -z 64' 00:23:58.613 21:46:19 -- bdev/bdev_raid.sh@539 
-- # '[' true = true ']' 00:23:58.613 21:46:19 -- bdev/bdev_raid.sh@540 -- # create_arg+=' -s' 00:23:58.613 21:46:19 -- bdev/bdev_raid.sh@544 -- # raid_pid=86317 00:23:58.613 21:46:19 -- bdev/bdev_raid.sh@545 -- # waitforlisten 86317 /var/tmp/spdk-raid.sock 00:23:58.613 21:46:19 -- common/autotest_common.sh@829 -- # '[' -z 86317 ']' 00:23:58.613 21:46:19 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:23:58.613 21:46:19 -- bdev/bdev_raid.sh@543 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:23:58.613 21:46:19 -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:58.613 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:23:58.613 21:46:19 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:23:58.613 21:46:19 -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:58.613 21:46:19 -- common/autotest_common.sh@10 -- # set +x 00:23:58.873 [2024-12-06 21:46:19.113766] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:23:58.873 I/O size of 3145728 is greater than zero copy threshold (65536). 00:23:58.873 Zero copy mechanism will not be used. 00:23:58.873 [2024-12-06 21:46:19.113945] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86317 ] 00:23:58.873 [2024-12-06 21:46:19.280402] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:59.133 [2024-12-06 21:46:19.440431] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:59.133 [2024-12-06 21:46:19.587603] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:23:59.701 21:46:20 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:59.701 21:46:20 -- common/autotest_common.sh@862 -- # return 0 00:23:59.701 21:46:20 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:23:59.701 21:46:20 -- bdev/bdev_raid.sh@549 -- # '[' true = true ']' 00:23:59.701 21:46:20 -- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:23:59.961 BaseBdev1_malloc 00:23:59.961 21:46:20 -- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:23:59.961 [2024-12-06 21:46:20.396285] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:23:59.961 [2024-12-06 21:46:20.396836] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:59.961 [2024-12-06 21:46:20.396983] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000006980 00:23:59.961 [2024-12-06 21:46:20.397105] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:59.961 [2024-12-06 21:46:20.399260] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:59.961 [2024-12-06 21:46:20.399386] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:23:59.961 BaseBdev1 00:23:59.961 21:46:20 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:23:59.961 21:46:20 -- bdev/bdev_raid.sh@549 -- # 
'[' true = true ']' 00:23:59.961 21:46:20 -- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:24:00.220 BaseBdev2_malloc 00:24:00.220 21:46:20 -- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:24:00.479 [2024-12-06 21:46:20.864392] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:24:00.480 [2024-12-06 21:46:20.864663] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:00.480 [2024-12-06 21:46:20.864836] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000007580 00:24:00.480 [2024-12-06 21:46:20.864952] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:00.480 [2024-12-06 21:46:20.867098] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:00.480 [2024-12-06 21:46:20.867234] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:24:00.480 BaseBdev2 00:24:00.480 21:46:20 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:24:00.480 21:46:20 -- bdev/bdev_raid.sh@549 -- # '[' true = true ']' 00:24:00.480 21:46:20 -- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:24:00.740 BaseBdev3_malloc 00:24:00.740 21:46:21 -- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:24:00.999 [2024-12-06 21:46:21.292765] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:24:00.999 [2024-12-06 21:46:21.293022] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:00.999 [2024-12-06 21:46:21.293168] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000008180 00:24:00.999 [2024-12-06 21:46:21.293259] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:00.999 [2024-12-06 21:46:21.295345] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:00.999 [2024-12-06 21:46:21.295530] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:24:00.999 BaseBdev3 00:24:00.999 21:46:21 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:24:00.999 21:46:21 -- bdev/bdev_raid.sh@549 -- # '[' true = true ']' 00:24:00.999 21:46:21 -- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:24:00.999 BaseBdev4_malloc 00:24:01.258 21:46:21 -- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:24:01.258 [2024-12-06 21:46:21.677266] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:24:01.258 [2024-12-06 21:46:21.677722] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:01.259 [2024-12-06 21:46:21.677865] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000008d80 00:24:01.259 [2024-12-06 21:46:21.677951] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:01.259 [2024-12-06 21:46:21.680056] vbdev_passthru.c: 
704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:01.259 [2024-12-06 21:46:21.680175] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:24:01.259 BaseBdev4 00:24:01.259 21:46:21 -- bdev/bdev_raid.sh@558 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:24:01.518 spare_malloc 00:24:01.518 21:46:21 -- bdev/bdev_raid.sh@559 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:24:01.777 spare_delay 00:24:01.777 21:46:22 -- bdev/bdev_raid.sh@560 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:24:01.777 [2024-12-06 21:46:22.246744] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:24:01.777 [2024-12-06 21:46:22.246951] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:01.777 [2024-12-06 21:46:22.247111] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000009f80 00:24:01.777 [2024-12-06 21:46:22.247214] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:01.777 [2024-12-06 21:46:22.249379] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:01.777 [2024-12-06 21:46:22.249558] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:24:01.777 spare 00:24:01.777 21:46:22 -- bdev/bdev_raid.sh@563 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 00:24:02.036 [2024-12-06 21:46:22.482854] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:24:02.036 [2024-12-06 21:46:22.484668] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:24:02.036 [2024-12-06 21:46:22.484741] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:24:02.036 [2024-12-06 21:46:22.484807] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:24:02.036 [2024-12-06 21:46:22.485090] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x51600000a580 00:24:02.036 [2024-12-06 21:46:22.485122] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:24:02.036 [2024-12-06 21:46:22.485238] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000005860 00:24:02.036 [2024-12-06 21:46:22.490977] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x51600000a580 00:24:02.036 [2024-12-06 21:46:22.491003] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x51600000a580 00:24:02.036 [2024-12-06 21:46:22.491229] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:02.036 21:46:22 -- bdev/bdev_raid.sh@564 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:24:02.036 21:46:22 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:24:02.036 21:46:22 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:24:02.036 21:46:22 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:24:02.036 21:46:22 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:24:02.036 21:46:22 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 
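For reference, the device stack assembled in the trace above can be replayed as explicit rpc.py calls. This is a condensed editorial sketch, not harness code: the $rpc shorthand and the loop are editorial, while every command and argument is copied verbatim from the trace. Each base is a 32 MiB malloc bdev (512-byte blocks, 65536 blocks) behind a passthru bdev so it can later be detached and re-attached by name; 2048 of those blocks are consumed by the on-disk superblock, which is why the JSON dumps below report data_offset 2048 and data_size 63488.

rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
for i in 1 2 3 4; do
    # 32 MiB backing store with 512-byte blocks, wrapped in a passthru bdev.
    $rpc bdev_malloc_create 32 512 -b "BaseBdev${i}_malloc"
    $rpc bdev_passthru_create -b "BaseBdev${i}_malloc" -p "BaseBdev${i}"
done
# The spare additionally sits behind a delay bdev (latency arguments exactly
# as traced) so that variants of this test can throttle rebuild I/O.
$rpc bdev_malloc_create 32 512 -b spare_malloc
$rpc bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000
$rpc bdev_passthru_create -b spare_delay -p spare
# raid5f over the four bases: -z 64 = 64 KiB strip, -s = write a superblock.
$rpc bdev_raid_create -z 64 -s -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1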
00:24:02.036 21:46:22 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:24:02.036 21:46:22 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:24:02.036 21:46:22 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:24:02.036 21:46:22 -- bdev/bdev_raid.sh@125 -- # local tmp 00:24:02.036 21:46:22 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:02.036 21:46:22 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:02.295 21:46:22 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:24:02.295 "name": "raid_bdev1", 00:24:02.295 "uuid": "1520f2d6-9ddd-46a4-abaa-c25fd01ae0ae", 00:24:02.295 "strip_size_kb": 64, 00:24:02.295 "state": "online", 00:24:02.295 "raid_level": "raid5f", 00:24:02.295 "superblock": true, 00:24:02.295 "num_base_bdevs": 4, 00:24:02.295 "num_base_bdevs_discovered": 4, 00:24:02.295 "num_base_bdevs_operational": 4, 00:24:02.295 "base_bdevs_list": [ 00:24:02.295 { 00:24:02.295 "name": "BaseBdev1", 00:24:02.295 "uuid": "f7ebab09-b867-51b4-941c-97ae8fca2b11", 00:24:02.295 "is_configured": true, 00:24:02.295 "data_offset": 2048, 00:24:02.295 "data_size": 63488 00:24:02.295 }, 00:24:02.295 { 00:24:02.295 "name": "BaseBdev2", 00:24:02.295 "uuid": "ac569b85-551e-53fb-8b19-28db34e4a5ba", 00:24:02.295 "is_configured": true, 00:24:02.295 "data_offset": 2048, 00:24:02.295 "data_size": 63488 00:24:02.295 }, 00:24:02.295 { 00:24:02.295 "name": "BaseBdev3", 00:24:02.295 "uuid": "a7c233c4-aa49-58ec-bb40-01693bbd53f0", 00:24:02.295 "is_configured": true, 00:24:02.295 "data_offset": 2048, 00:24:02.295 "data_size": 63488 00:24:02.295 }, 00:24:02.295 { 00:24:02.295 "name": "BaseBdev4", 00:24:02.295 "uuid": "cc0c58a3-a64b-5d1f-8dcc-7ba1431eeb7f", 00:24:02.295 "is_configured": true, 00:24:02.295 "data_offset": 2048, 00:24:02.295 "data_size": 63488 00:24:02.295 } 00:24:02.295 ] 00:24:02.295 }' 00:24:02.295 21:46:22 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:24:02.295 21:46:22 -- common/autotest_common.sh@10 -- # set +x 00:24:02.554 21:46:22 -- bdev/bdev_raid.sh@567 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:24:02.554 21:46:22 -- bdev/bdev_raid.sh@567 -- # jq -r '.[].num_blocks' 00:24:02.812 [2024-12-06 21:46:23.225216] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:24:02.812 21:46:23 -- bdev/bdev_raid.sh@567 -- # raid_bdev_size=190464 00:24:02.812 21:46:23 -- bdev/bdev_raid.sh@570 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:02.812 21:46:23 -- bdev/bdev_raid.sh@570 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:24:03.071 21:46:23 -- bdev/bdev_raid.sh@570 -- # data_offset=2048 00:24:03.071 21:46:23 -- bdev/bdev_raid.sh@572 -- # '[' false = true ']' 00:24:03.071 21:46:23 -- bdev/bdev_raid.sh@576 -- # local write_unit_size 00:24:03.071 21:46:23 -- bdev/bdev_raid.sh@579 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid_bdev1 /dev/nbd0 00:24:03.071 21:46:23 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:24:03.071 21:46:23 -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:24:03.071 21:46:23 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:24:03.071 21:46:23 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:24:03.071 21:46:23 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:24:03.071 21:46:23 -- bdev/nbd_common.sh@12 -- # local i 00:24:03.071 21:46:23 -- bdev/nbd_common.sh@14 -- # 
(( i = 0 )) 00:24:03.071 21:46:23 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:24:03.071 21:46:23 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:24:03.330 [2024-12-06 21:46:23.637194] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000005a00 00:24:03.330 /dev/nbd0 00:24:03.330 21:46:23 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:24:03.330 21:46:23 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:24:03.330 21:46:23 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:24:03.330 21:46:23 -- common/autotest_common.sh@867 -- # local i 00:24:03.330 21:46:23 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:24:03.330 21:46:23 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:24:03.330 21:46:23 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:24:03.330 21:46:23 -- common/autotest_common.sh@871 -- # break 00:24:03.330 21:46:23 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:24:03.330 21:46:23 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:24:03.330 21:46:23 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:24:03.330 1+0 records in 00:24:03.330 1+0 records out 00:24:03.330 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000293486 s, 14.0 MB/s 00:24:03.330 21:46:23 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:24:03.330 21:46:23 -- common/autotest_common.sh@884 -- # size=4096 00:24:03.330 21:46:23 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:24:03.330 21:46:23 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:24:03.330 21:46:23 -- common/autotest_common.sh@887 -- # return 0 00:24:03.330 21:46:23 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:24:03.330 21:46:23 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:24:03.330 21:46:23 -- bdev/bdev_raid.sh@580 -- # '[' raid5f = raid5f ']' 00:24:03.330 21:46:23 -- bdev/bdev_raid.sh@581 -- # write_unit_size=384 00:24:03.330 21:46:23 -- bdev/bdev_raid.sh@582 -- # echo 192 00:24:03.330 21:46:23 -- bdev/bdev_raid.sh@586 -- # dd if=/dev/urandom of=/dev/nbd0 bs=196608 count=496 oflag=direct 00:24:03.903 496+0 records in 00:24:03.903 496+0 records out 00:24:03.903 97517568 bytes (98 MB, 93 MiB) copied, 0.476532 s, 205 MB/s 00:24:03.903 21:46:24 -- bdev/bdev_raid.sh@587 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:24:03.903 21:46:24 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:24:03.903 21:46:24 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:24:03.903 21:46:24 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:24:03.903 21:46:24 -- bdev/nbd_common.sh@51 -- # local i 00:24:03.903 21:46:24 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:24:03.903 21:46:24 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:24:03.903 21:46:24 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:24:03.903 21:46:24 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:24:03.903 21:46:24 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:24:03.903 21:46:24 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:24:03.903 21:46:24 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:24:03.903 21:46:24 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:24:03.903 [2024-12-06 21:46:24.362570] bdev_raid.c: 
316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:03.903 21:46:24 -- bdev/nbd_common.sh@41 -- # break 00:24:03.903 21:46:24 -- bdev/nbd_common.sh@45 -- # return 0 00:24:03.903 21:46:24 -- bdev/bdev_raid.sh@591 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:24:04.163 [2024-12-06 21:46:24.538043] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:24:04.163 21:46:24 -- bdev/bdev_raid.sh@594 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:24:04.163 21:46:24 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:24:04.163 21:46:24 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:24:04.163 21:46:24 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:24:04.163 21:46:24 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:24:04.163 21:46:24 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:24:04.163 21:46:24 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:24:04.163 21:46:24 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:24:04.163 21:46:24 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:24:04.163 21:46:24 -- bdev/bdev_raid.sh@125 -- # local tmp 00:24:04.163 21:46:24 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:04.163 21:46:24 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:04.422 21:46:24 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:24:04.422 "name": "raid_bdev1", 00:24:04.422 "uuid": "1520f2d6-9ddd-46a4-abaa-c25fd01ae0ae", 00:24:04.422 "strip_size_kb": 64, 00:24:04.422 "state": "online", 00:24:04.422 "raid_level": "raid5f", 00:24:04.422 "superblock": true, 00:24:04.422 "num_base_bdevs": 4, 00:24:04.422 "num_base_bdevs_discovered": 3, 00:24:04.422 "num_base_bdevs_operational": 3, 00:24:04.422 "base_bdevs_list": [ 00:24:04.422 { 00:24:04.422 "name": null, 00:24:04.422 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:04.422 "is_configured": false, 00:24:04.422 "data_offset": 2048, 00:24:04.422 "data_size": 63488 00:24:04.422 }, 00:24:04.422 { 00:24:04.422 "name": "BaseBdev2", 00:24:04.422 "uuid": "ac569b85-551e-53fb-8b19-28db34e4a5ba", 00:24:04.422 "is_configured": true, 00:24:04.422 "data_offset": 2048, 00:24:04.422 "data_size": 63488 00:24:04.422 }, 00:24:04.422 { 00:24:04.422 "name": "BaseBdev3", 00:24:04.422 "uuid": "a7c233c4-aa49-58ec-bb40-01693bbd53f0", 00:24:04.422 "is_configured": true, 00:24:04.422 "data_offset": 2048, 00:24:04.422 "data_size": 63488 00:24:04.422 }, 00:24:04.422 { 00:24:04.422 "name": "BaseBdev4", 00:24:04.422 "uuid": "cc0c58a3-a64b-5d1f-8dcc-7ba1431eeb7f", 00:24:04.422 "is_configured": true, 00:24:04.422 "data_offset": 2048, 00:24:04.422 "data_size": 63488 00:24:04.422 } 00:24:04.422 ] 00:24:04.422 }' 00:24:04.422 21:46:24 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:24:04.422 21:46:24 -- common/autotest_common.sh@10 -- # set +x 00:24:04.695 21:46:25 -- bdev/bdev_raid.sh@597 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:24:04.957 [2024-12-06 21:46:25.202148] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:24:04.957 [2024-12-06 21:46:25.202205] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:24:04.957 [2024-12-06 21:46:25.212501] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d00002a300 
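Up to this point the raid was exported over NBD and seeded with dd in full-stripe units (bs=196608 bytes = 384 blocks, i.e. three 64 KiB data strips, so raid5f never has to read-modify-write parity). What follows is the failure-and-rebuild phase: BaseBdev1 is hot-removed while the array stays online three-of-four, the spare is hot-added, yanked once mid-rebuild to exercise the abort path (the "Finished rebuild ... No such device" warning below), re-added, and then the harness polls the rebuild once per second. A condensed sketch of that flow, assuming the $rpc shorthand from the earlier sketch; the RPC names, jq filters, and the 646-second timeout are taken from the trace, while the loop shape is editorial:

$rpc bdev_raid_remove_base_bdev BaseBdev1      # degrade: raid stays online 3-of-4
$rpc bdev_raid_add_base_bdev raid_bdev1 spare  # hot-add the spare; rebuild starts
timeout=646
while (( SECONDS < timeout )); do
    info=$($rpc bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1")')
    # Stop polling once .process disappears, i.e. the rebuild has finished.
    [[ $(jq -r '.process.type // "none"' <<<"$info") == rebuild ]] || break
    jq -r '.process.progress | "\(.blocks) blocks (\(.percent)%)"' <<<"$info"
    sleep 1
done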
00:24:04.957 [2024-12-06 21:46:25.219585] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:24:04.957 21:46:25 -- bdev/bdev_raid.sh@598 -- # sleep 1 00:24:05.891 21:46:26 -- bdev/bdev_raid.sh@601 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:05.891 21:46:26 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:24:05.891 21:46:26 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:24:05.891 21:46:26 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:24:05.891 21:46:26 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:24:05.891 21:46:26 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:05.891 21:46:26 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:06.149 21:46:26 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:24:06.150 "name": "raid_bdev1", 00:24:06.150 "uuid": "1520f2d6-9ddd-46a4-abaa-c25fd01ae0ae", 00:24:06.150 "strip_size_kb": 64, 00:24:06.150 "state": "online", 00:24:06.150 "raid_level": "raid5f", 00:24:06.150 "superblock": true, 00:24:06.150 "num_base_bdevs": 4, 00:24:06.150 "num_base_bdevs_discovered": 4, 00:24:06.150 "num_base_bdevs_operational": 4, 00:24:06.150 "process": { 00:24:06.150 "type": "rebuild", 00:24:06.150 "target": "spare", 00:24:06.150 "progress": { 00:24:06.150 "blocks": 23040, 00:24:06.150 "percent": 12 00:24:06.150 } 00:24:06.150 }, 00:24:06.150 "base_bdevs_list": [ 00:24:06.150 { 00:24:06.150 "name": "spare", 00:24:06.150 "uuid": "79f31a6a-88be-5d13-bb06-28dd731d54a2", 00:24:06.150 "is_configured": true, 00:24:06.150 "data_offset": 2048, 00:24:06.150 "data_size": 63488 00:24:06.150 }, 00:24:06.150 { 00:24:06.150 "name": "BaseBdev2", 00:24:06.150 "uuid": "ac569b85-551e-53fb-8b19-28db34e4a5ba", 00:24:06.150 "is_configured": true, 00:24:06.150 "data_offset": 2048, 00:24:06.150 "data_size": 63488 00:24:06.150 }, 00:24:06.150 { 00:24:06.150 "name": "BaseBdev3", 00:24:06.150 "uuid": "a7c233c4-aa49-58ec-bb40-01693bbd53f0", 00:24:06.150 "is_configured": true, 00:24:06.150 "data_offset": 2048, 00:24:06.150 "data_size": 63488 00:24:06.150 }, 00:24:06.150 { 00:24:06.150 "name": "BaseBdev4", 00:24:06.150 "uuid": "cc0c58a3-a64b-5d1f-8dcc-7ba1431eeb7f", 00:24:06.150 "is_configured": true, 00:24:06.150 "data_offset": 2048, 00:24:06.150 "data_size": 63488 00:24:06.150 } 00:24:06.150 ] 00:24:06.150 }' 00:24:06.150 21:46:26 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:24:06.150 21:46:26 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:24:06.150 21:46:26 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:24:06.150 21:46:26 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:24:06.150 21:46:26 -- bdev/bdev_raid.sh@604 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:24:06.408 [2024-12-06 21:46:26.704969] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:24:06.408 [2024-12-06 21:46:26.729692] bdev_raid.c:2294:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:24:06.408 [2024-12-06 21:46:26.729790] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:06.408 21:46:26 -- bdev/bdev_raid.sh@607 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:24:06.408 21:46:26 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:24:06.408 21:46:26 -- bdev/bdev_raid.sh@118 -- 
# local expected_state=online 00:24:06.408 21:46:26 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:24:06.408 21:46:26 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:24:06.408 21:46:26 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:24:06.408 21:46:26 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:24:06.408 21:46:26 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:24:06.408 21:46:26 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:24:06.408 21:46:26 -- bdev/bdev_raid.sh@125 -- # local tmp 00:24:06.408 21:46:26 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:06.408 21:46:26 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:06.666 21:46:27 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:24:06.666 "name": "raid_bdev1", 00:24:06.666 "uuid": "1520f2d6-9ddd-46a4-abaa-c25fd01ae0ae", 00:24:06.666 "strip_size_kb": 64, 00:24:06.666 "state": "online", 00:24:06.666 "raid_level": "raid5f", 00:24:06.666 "superblock": true, 00:24:06.666 "num_base_bdevs": 4, 00:24:06.666 "num_base_bdevs_discovered": 3, 00:24:06.666 "num_base_bdevs_operational": 3, 00:24:06.666 "base_bdevs_list": [ 00:24:06.666 { 00:24:06.666 "name": null, 00:24:06.666 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:06.666 "is_configured": false, 00:24:06.666 "data_offset": 2048, 00:24:06.666 "data_size": 63488 00:24:06.666 }, 00:24:06.666 { 00:24:06.666 "name": "BaseBdev2", 00:24:06.666 "uuid": "ac569b85-551e-53fb-8b19-28db34e4a5ba", 00:24:06.666 "is_configured": true, 00:24:06.666 "data_offset": 2048, 00:24:06.666 "data_size": 63488 00:24:06.666 }, 00:24:06.666 { 00:24:06.666 "name": "BaseBdev3", 00:24:06.666 "uuid": "a7c233c4-aa49-58ec-bb40-01693bbd53f0", 00:24:06.666 "is_configured": true, 00:24:06.666 "data_offset": 2048, 00:24:06.666 "data_size": 63488 00:24:06.666 }, 00:24:06.666 { 00:24:06.666 "name": "BaseBdev4", 00:24:06.666 "uuid": "cc0c58a3-a64b-5d1f-8dcc-7ba1431eeb7f", 00:24:06.666 "is_configured": true, 00:24:06.666 "data_offset": 2048, 00:24:06.666 "data_size": 63488 00:24:06.666 } 00:24:06.666 ] 00:24:06.666 }' 00:24:06.666 21:46:27 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:24:06.666 21:46:27 -- common/autotest_common.sh@10 -- # set +x 00:24:06.924 21:46:27 -- bdev/bdev_raid.sh@610 -- # verify_raid_bdev_process raid_bdev1 none none 00:24:06.924 21:46:27 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:24:06.924 21:46:27 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:24:06.924 21:46:27 -- bdev/bdev_raid.sh@185 -- # local target=none 00:24:06.924 21:46:27 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:24:06.924 21:46:27 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:06.924 21:46:27 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:07.182 21:46:27 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:24:07.182 "name": "raid_bdev1", 00:24:07.182 "uuid": "1520f2d6-9ddd-46a4-abaa-c25fd01ae0ae", 00:24:07.182 "strip_size_kb": 64, 00:24:07.182 "state": "online", 00:24:07.182 "raid_level": "raid5f", 00:24:07.182 "superblock": true, 00:24:07.182 "num_base_bdevs": 4, 00:24:07.182 "num_base_bdevs_discovered": 3, 00:24:07.182 "num_base_bdevs_operational": 3, 00:24:07.182 "base_bdevs_list": [ 00:24:07.182 { 00:24:07.182 "name": null, 00:24:07.182 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:07.182 
"is_configured": false, 00:24:07.182 "data_offset": 2048, 00:24:07.182 "data_size": 63488 00:24:07.182 }, 00:24:07.183 { 00:24:07.183 "name": "BaseBdev2", 00:24:07.183 "uuid": "ac569b85-551e-53fb-8b19-28db34e4a5ba", 00:24:07.183 "is_configured": true, 00:24:07.183 "data_offset": 2048, 00:24:07.183 "data_size": 63488 00:24:07.183 }, 00:24:07.183 { 00:24:07.183 "name": "BaseBdev3", 00:24:07.183 "uuid": "a7c233c4-aa49-58ec-bb40-01693bbd53f0", 00:24:07.183 "is_configured": true, 00:24:07.183 "data_offset": 2048, 00:24:07.183 "data_size": 63488 00:24:07.183 }, 00:24:07.183 { 00:24:07.183 "name": "BaseBdev4", 00:24:07.183 "uuid": "cc0c58a3-a64b-5d1f-8dcc-7ba1431eeb7f", 00:24:07.183 "is_configured": true, 00:24:07.183 "data_offset": 2048, 00:24:07.183 "data_size": 63488 00:24:07.183 } 00:24:07.183 ] 00:24:07.183 }' 00:24:07.183 21:46:27 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:24:07.183 21:46:27 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:24:07.183 21:46:27 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:24:07.183 21:46:27 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:24:07.183 21:46:27 -- bdev/bdev_raid.sh@613 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:24:07.440 [2024-12-06 21:46:27.755248] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:24:07.440 [2024-12-06 21:46:27.755308] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:24:07.440 [2024-12-06 21:46:27.764946] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d00002a3d0 00:24:07.440 [2024-12-06 21:46:27.771621] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:24:07.440 21:46:27 -- bdev/bdev_raid.sh@614 -- # sleep 1 00:24:08.373 21:46:28 -- bdev/bdev_raid.sh@615 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:08.373 21:46:28 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:24:08.373 21:46:28 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:24:08.373 21:46:28 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:24:08.373 21:46:28 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:24:08.373 21:46:28 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:08.373 21:46:28 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:08.632 21:46:29 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:24:08.632 "name": "raid_bdev1", 00:24:08.632 "uuid": "1520f2d6-9ddd-46a4-abaa-c25fd01ae0ae", 00:24:08.632 "strip_size_kb": 64, 00:24:08.632 "state": "online", 00:24:08.632 "raid_level": "raid5f", 00:24:08.632 "superblock": true, 00:24:08.632 "num_base_bdevs": 4, 00:24:08.632 "num_base_bdevs_discovered": 4, 00:24:08.632 "num_base_bdevs_operational": 4, 00:24:08.632 "process": { 00:24:08.632 "type": "rebuild", 00:24:08.632 "target": "spare", 00:24:08.632 "progress": { 00:24:08.632 "blocks": 23040, 00:24:08.632 "percent": 12 00:24:08.632 } 00:24:08.632 }, 00:24:08.632 "base_bdevs_list": [ 00:24:08.632 { 00:24:08.632 "name": "spare", 00:24:08.632 "uuid": "79f31a6a-88be-5d13-bb06-28dd731d54a2", 00:24:08.632 "is_configured": true, 00:24:08.632 "data_offset": 2048, 00:24:08.632 "data_size": 63488 00:24:08.632 }, 00:24:08.632 { 00:24:08.632 "name": "BaseBdev2", 00:24:08.632 "uuid": "ac569b85-551e-53fb-8b19-28db34e4a5ba", 00:24:08.632 "is_configured": 
true, 00:24:08.632 "data_offset": 2048, 00:24:08.632 "data_size": 63488 00:24:08.632 }, 00:24:08.632 { 00:24:08.632 "name": "BaseBdev3", 00:24:08.632 "uuid": "a7c233c4-aa49-58ec-bb40-01693bbd53f0", 00:24:08.632 "is_configured": true, 00:24:08.632 "data_offset": 2048, 00:24:08.632 "data_size": 63488 00:24:08.632 }, 00:24:08.632 { 00:24:08.632 "name": "BaseBdev4", 00:24:08.632 "uuid": "cc0c58a3-a64b-5d1f-8dcc-7ba1431eeb7f", 00:24:08.632 "is_configured": true, 00:24:08.632 "data_offset": 2048, 00:24:08.633 "data_size": 63488 00:24:08.633 } 00:24:08.633 ] 00:24:08.633 }' 00:24:08.633 21:46:29 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:24:08.633 21:46:29 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:24:08.633 21:46:29 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:24:08.633 21:46:29 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:24:08.633 21:46:29 -- bdev/bdev_raid.sh@617 -- # '[' true = true ']' 00:24:08.633 21:46:29 -- bdev/bdev_raid.sh@617 -- # '[' = false ']' 00:24:08.633 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 617: [: =: unary operator expected 00:24:08.633 21:46:29 -- bdev/bdev_raid.sh@642 -- # local num_base_bdevs_operational=4 00:24:08.633 21:46:29 -- bdev/bdev_raid.sh@644 -- # '[' raid5f = raid1 ']' 00:24:08.633 21:46:29 -- bdev/bdev_raid.sh@657 -- # local timeout=646 00:24:08.633 21:46:29 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:24:08.633 21:46:29 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:08.633 21:46:29 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:24:08.633 21:46:29 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:24:08.633 21:46:29 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:24:08.633 21:46:29 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:24:08.633 21:46:29 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:08.633 21:46:29 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:08.893 21:46:29 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:24:08.893 "name": "raid_bdev1", 00:24:08.893 "uuid": "1520f2d6-9ddd-46a4-abaa-c25fd01ae0ae", 00:24:08.893 "strip_size_kb": 64, 00:24:08.893 "state": "online", 00:24:08.893 "raid_level": "raid5f", 00:24:08.893 "superblock": true, 00:24:08.893 "num_base_bdevs": 4, 00:24:08.893 "num_base_bdevs_discovered": 4, 00:24:08.893 "num_base_bdevs_operational": 4, 00:24:08.893 "process": { 00:24:08.893 "type": "rebuild", 00:24:08.893 "target": "spare", 00:24:08.893 "progress": { 00:24:08.893 "blocks": 26880, 00:24:08.893 "percent": 14 00:24:08.893 } 00:24:08.893 }, 00:24:08.893 "base_bdevs_list": [ 00:24:08.893 { 00:24:08.893 "name": "spare", 00:24:08.893 "uuid": "79f31a6a-88be-5d13-bb06-28dd731d54a2", 00:24:08.893 "is_configured": true, 00:24:08.893 "data_offset": 2048, 00:24:08.893 "data_size": 63488 00:24:08.893 }, 00:24:08.893 { 00:24:08.893 "name": "BaseBdev2", 00:24:08.893 "uuid": "ac569b85-551e-53fb-8b19-28db34e4a5ba", 00:24:08.893 "is_configured": true, 00:24:08.893 "data_offset": 2048, 00:24:08.893 "data_size": 63488 00:24:08.893 }, 00:24:08.893 { 00:24:08.893 "name": "BaseBdev3", 00:24:08.893 "uuid": "a7c233c4-aa49-58ec-bb40-01693bbd53f0", 00:24:08.893 "is_configured": true, 00:24:08.893 "data_offset": 2048, 00:24:08.893 "data_size": 63488 00:24:08.893 }, 00:24:08.893 { 00:24:08.893 "name": "BaseBdev4", 00:24:08.893 "uuid": 
"cc0c58a3-a64b-5d1f-8dcc-7ba1431eeb7f", 00:24:08.893 "is_configured": true, 00:24:08.893 "data_offset": 2048, 00:24:08.893 "data_size": 63488 00:24:08.893 } 00:24:08.893 ] 00:24:08.893 }' 00:24:08.893 21:46:29 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:24:08.893 21:46:29 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:24:08.893 21:46:29 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:24:08.893 21:46:29 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:24:08.893 21:46:29 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:24:09.869 21:46:30 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:24:09.869 21:46:30 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:09.869 21:46:30 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:24:09.869 21:46:30 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:24:09.869 21:46:30 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:24:09.869 21:46:30 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:24:09.869 21:46:30 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:09.869 21:46:30 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:10.128 21:46:30 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:24:10.128 "name": "raid_bdev1", 00:24:10.128 "uuid": "1520f2d6-9ddd-46a4-abaa-c25fd01ae0ae", 00:24:10.128 "strip_size_kb": 64, 00:24:10.128 "state": "online", 00:24:10.128 "raid_level": "raid5f", 00:24:10.128 "superblock": true, 00:24:10.128 "num_base_bdevs": 4, 00:24:10.128 "num_base_bdevs_discovered": 4, 00:24:10.128 "num_base_bdevs_operational": 4, 00:24:10.128 "process": { 00:24:10.128 "type": "rebuild", 00:24:10.128 "target": "spare", 00:24:10.128 "progress": { 00:24:10.128 "blocks": 51840, 00:24:10.128 "percent": 27 00:24:10.128 } 00:24:10.128 }, 00:24:10.128 "base_bdevs_list": [ 00:24:10.128 { 00:24:10.128 "name": "spare", 00:24:10.128 "uuid": "79f31a6a-88be-5d13-bb06-28dd731d54a2", 00:24:10.128 "is_configured": true, 00:24:10.128 "data_offset": 2048, 00:24:10.128 "data_size": 63488 00:24:10.128 }, 00:24:10.128 { 00:24:10.128 "name": "BaseBdev2", 00:24:10.128 "uuid": "ac569b85-551e-53fb-8b19-28db34e4a5ba", 00:24:10.128 "is_configured": true, 00:24:10.128 "data_offset": 2048, 00:24:10.128 "data_size": 63488 00:24:10.128 }, 00:24:10.128 { 00:24:10.128 "name": "BaseBdev3", 00:24:10.128 "uuid": "a7c233c4-aa49-58ec-bb40-01693bbd53f0", 00:24:10.128 "is_configured": true, 00:24:10.128 "data_offset": 2048, 00:24:10.128 "data_size": 63488 00:24:10.128 }, 00:24:10.128 { 00:24:10.128 "name": "BaseBdev4", 00:24:10.128 "uuid": "cc0c58a3-a64b-5d1f-8dcc-7ba1431eeb7f", 00:24:10.128 "is_configured": true, 00:24:10.128 "data_offset": 2048, 00:24:10.128 "data_size": 63488 00:24:10.128 } 00:24:10.128 ] 00:24:10.128 }' 00:24:10.128 21:46:30 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:24:10.128 21:46:30 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:24:10.128 21:46:30 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:24:10.128 21:46:30 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:24:10.128 21:46:30 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:24:11.062 21:46:31 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:24:11.063 21:46:31 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:11.063 21:46:31 -- bdev/bdev_raid.sh@183 -- # local 
raid_bdev_name=raid_bdev1 00:24:11.063 21:46:31 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:24:11.063 21:46:31 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:24:11.063 21:46:31 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:24:11.063 21:46:31 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:11.063 21:46:31 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:11.343 21:46:31 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:24:11.343 "name": "raid_bdev1", 00:24:11.343 "uuid": "1520f2d6-9ddd-46a4-abaa-c25fd01ae0ae", 00:24:11.343 "strip_size_kb": 64, 00:24:11.343 "state": "online", 00:24:11.343 "raid_level": "raid5f", 00:24:11.343 "superblock": true, 00:24:11.343 "num_base_bdevs": 4, 00:24:11.343 "num_base_bdevs_discovered": 4, 00:24:11.343 "num_base_bdevs_operational": 4, 00:24:11.343 "process": { 00:24:11.343 "type": "rebuild", 00:24:11.343 "target": "spare", 00:24:11.343 "progress": { 00:24:11.343 "blocks": 74880, 00:24:11.343 "percent": 39 00:24:11.343 } 00:24:11.343 }, 00:24:11.343 "base_bdevs_list": [ 00:24:11.343 { 00:24:11.343 "name": "spare", 00:24:11.343 "uuid": "79f31a6a-88be-5d13-bb06-28dd731d54a2", 00:24:11.343 "is_configured": true, 00:24:11.343 "data_offset": 2048, 00:24:11.343 "data_size": 63488 00:24:11.343 }, 00:24:11.343 { 00:24:11.343 "name": "BaseBdev2", 00:24:11.343 "uuid": "ac569b85-551e-53fb-8b19-28db34e4a5ba", 00:24:11.343 "is_configured": true, 00:24:11.343 "data_offset": 2048, 00:24:11.343 "data_size": 63488 00:24:11.343 }, 00:24:11.343 { 00:24:11.343 "name": "BaseBdev3", 00:24:11.343 "uuid": "a7c233c4-aa49-58ec-bb40-01693bbd53f0", 00:24:11.343 "is_configured": true, 00:24:11.343 "data_offset": 2048, 00:24:11.343 "data_size": 63488 00:24:11.343 }, 00:24:11.343 { 00:24:11.343 "name": "BaseBdev4", 00:24:11.343 "uuid": "cc0c58a3-a64b-5d1f-8dcc-7ba1431eeb7f", 00:24:11.343 "is_configured": true, 00:24:11.343 "data_offset": 2048, 00:24:11.343 "data_size": 63488 00:24:11.343 } 00:24:11.343 ] 00:24:11.343 }' 00:24:11.343 21:46:31 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:24:11.343 21:46:31 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:24:11.343 21:46:31 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:24:11.343 21:46:31 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:24:11.343 21:46:31 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:24:12.720 21:46:32 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:24:12.720 21:46:32 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:12.720 21:46:32 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:24:12.720 21:46:32 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:24:12.720 21:46:32 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:24:12.720 21:46:32 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:24:12.720 21:46:32 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:12.720 21:46:32 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:12.720 21:46:33 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:24:12.720 "name": "raid_bdev1", 00:24:12.720 "uuid": "1520f2d6-9ddd-46a4-abaa-c25fd01ae0ae", 00:24:12.720 "strip_size_kb": 64, 00:24:12.720 "state": "online", 00:24:12.720 "raid_level": "raid5f", 00:24:12.720 "superblock": true, 00:24:12.720 "num_base_bdevs": 4, 
00:24:12.720 "num_base_bdevs_discovered": 4, 00:24:12.720 "num_base_bdevs_operational": 4, 00:24:12.720 "process": { 00:24:12.720 "type": "rebuild", 00:24:12.720 "target": "spare", 00:24:12.720 "progress": { 00:24:12.720 "blocks": 99840, 00:24:12.720 "percent": 52 00:24:12.720 } 00:24:12.720 }, 00:24:12.720 "base_bdevs_list": [ 00:24:12.720 { 00:24:12.720 "name": "spare", 00:24:12.720 "uuid": "79f31a6a-88be-5d13-bb06-28dd731d54a2", 00:24:12.720 "is_configured": true, 00:24:12.720 "data_offset": 2048, 00:24:12.720 "data_size": 63488 00:24:12.720 }, 00:24:12.720 { 00:24:12.720 "name": "BaseBdev2", 00:24:12.720 "uuid": "ac569b85-551e-53fb-8b19-28db34e4a5ba", 00:24:12.720 "is_configured": true, 00:24:12.720 "data_offset": 2048, 00:24:12.720 "data_size": 63488 00:24:12.720 }, 00:24:12.720 { 00:24:12.720 "name": "BaseBdev3", 00:24:12.720 "uuid": "a7c233c4-aa49-58ec-bb40-01693bbd53f0", 00:24:12.721 "is_configured": true, 00:24:12.721 "data_offset": 2048, 00:24:12.721 "data_size": 63488 00:24:12.721 }, 00:24:12.721 { 00:24:12.721 "name": "BaseBdev4", 00:24:12.721 "uuid": "cc0c58a3-a64b-5d1f-8dcc-7ba1431eeb7f", 00:24:12.721 "is_configured": true, 00:24:12.721 "data_offset": 2048, 00:24:12.721 "data_size": 63488 00:24:12.721 } 00:24:12.721 ] 00:24:12.721 }' 00:24:12.721 21:46:33 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:24:12.721 21:46:33 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:24:12.721 21:46:33 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:24:12.721 21:46:33 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:24:12.721 21:46:33 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:24:13.657 21:46:34 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:24:13.657 21:46:34 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:13.657 21:46:34 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:24:13.657 21:46:34 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:24:13.657 21:46:34 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:24:13.657 21:46:34 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:24:13.657 21:46:34 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:13.657 21:46:34 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:13.916 21:46:34 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:24:13.916 "name": "raid_bdev1", 00:24:13.916 "uuid": "1520f2d6-9ddd-46a4-abaa-c25fd01ae0ae", 00:24:13.916 "strip_size_kb": 64, 00:24:13.916 "state": "online", 00:24:13.916 "raid_level": "raid5f", 00:24:13.916 "superblock": true, 00:24:13.916 "num_base_bdevs": 4, 00:24:13.916 "num_base_bdevs_discovered": 4, 00:24:13.916 "num_base_bdevs_operational": 4, 00:24:13.916 "process": { 00:24:13.916 "type": "rebuild", 00:24:13.916 "target": "spare", 00:24:13.916 "progress": { 00:24:13.916 "blocks": 122880, 00:24:13.916 "percent": 64 00:24:13.916 } 00:24:13.916 }, 00:24:13.916 "base_bdevs_list": [ 00:24:13.916 { 00:24:13.916 "name": "spare", 00:24:13.916 "uuid": "79f31a6a-88be-5d13-bb06-28dd731d54a2", 00:24:13.916 "is_configured": true, 00:24:13.916 "data_offset": 2048, 00:24:13.916 "data_size": 63488 00:24:13.916 }, 00:24:13.916 { 00:24:13.916 "name": "BaseBdev2", 00:24:13.916 "uuid": "ac569b85-551e-53fb-8b19-28db34e4a5ba", 00:24:13.916 "is_configured": true, 00:24:13.916 "data_offset": 2048, 00:24:13.916 "data_size": 63488 00:24:13.916 }, 00:24:13.916 { 00:24:13.916 "name": 
"BaseBdev3", 00:24:13.916 "uuid": "a7c233c4-aa49-58ec-bb40-01693bbd53f0", 00:24:13.916 "is_configured": true, 00:24:13.916 "data_offset": 2048, 00:24:13.916 "data_size": 63488 00:24:13.916 }, 00:24:13.916 { 00:24:13.916 "name": "BaseBdev4", 00:24:13.916 "uuid": "cc0c58a3-a64b-5d1f-8dcc-7ba1431eeb7f", 00:24:13.916 "is_configured": true, 00:24:13.916 "data_offset": 2048, 00:24:13.916 "data_size": 63488 00:24:13.916 } 00:24:13.916 ] 00:24:13.916 }' 00:24:13.916 21:46:34 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:24:13.916 21:46:34 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:24:13.916 21:46:34 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:24:13.916 21:46:34 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:24:13.916 21:46:34 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:24:14.859 21:46:35 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:24:14.859 21:46:35 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:14.859 21:46:35 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:24:14.859 21:46:35 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:24:14.859 21:46:35 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:24:14.859 21:46:35 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:24:14.859 21:46:35 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:14.859 21:46:35 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:15.117 21:46:35 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:24:15.117 "name": "raid_bdev1", 00:24:15.118 "uuid": "1520f2d6-9ddd-46a4-abaa-c25fd01ae0ae", 00:24:15.118 "strip_size_kb": 64, 00:24:15.118 "state": "online", 00:24:15.118 "raid_level": "raid5f", 00:24:15.118 "superblock": true, 00:24:15.118 "num_base_bdevs": 4, 00:24:15.118 "num_base_bdevs_discovered": 4, 00:24:15.118 "num_base_bdevs_operational": 4, 00:24:15.118 "process": { 00:24:15.118 "type": "rebuild", 00:24:15.118 "target": "spare", 00:24:15.118 "progress": { 00:24:15.118 "blocks": 147840, 00:24:15.118 "percent": 77 00:24:15.118 } 00:24:15.118 }, 00:24:15.118 "base_bdevs_list": [ 00:24:15.118 { 00:24:15.118 "name": "spare", 00:24:15.118 "uuid": "79f31a6a-88be-5d13-bb06-28dd731d54a2", 00:24:15.118 "is_configured": true, 00:24:15.118 "data_offset": 2048, 00:24:15.118 "data_size": 63488 00:24:15.118 }, 00:24:15.118 { 00:24:15.118 "name": "BaseBdev2", 00:24:15.118 "uuid": "ac569b85-551e-53fb-8b19-28db34e4a5ba", 00:24:15.118 "is_configured": true, 00:24:15.118 "data_offset": 2048, 00:24:15.118 "data_size": 63488 00:24:15.118 }, 00:24:15.118 { 00:24:15.118 "name": "BaseBdev3", 00:24:15.118 "uuid": "a7c233c4-aa49-58ec-bb40-01693bbd53f0", 00:24:15.118 "is_configured": true, 00:24:15.118 "data_offset": 2048, 00:24:15.118 "data_size": 63488 00:24:15.118 }, 00:24:15.118 { 00:24:15.118 "name": "BaseBdev4", 00:24:15.118 "uuid": "cc0c58a3-a64b-5d1f-8dcc-7ba1431eeb7f", 00:24:15.118 "is_configured": true, 00:24:15.118 "data_offset": 2048, 00:24:15.118 "data_size": 63488 00:24:15.118 } 00:24:15.118 ] 00:24:15.118 }' 00:24:15.118 21:46:35 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:24:15.118 21:46:35 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:24:15.118 21:46:35 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:24:15.118 21:46:35 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:24:15.118 21:46:35 -- bdev/bdev_raid.sh@662 
-- # sleep 1 00:24:16.495 21:46:36 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:24:16.495 21:46:36 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:16.495 21:46:36 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:24:16.495 21:46:36 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:24:16.495 21:46:36 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:24:16.495 21:46:36 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:24:16.495 21:46:36 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:16.495 21:46:36 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:16.495 21:46:36 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:24:16.495 "name": "raid_bdev1", 00:24:16.495 "uuid": "1520f2d6-9ddd-46a4-abaa-c25fd01ae0ae", 00:24:16.495 "strip_size_kb": 64, 00:24:16.495 "state": "online", 00:24:16.495 "raid_level": "raid5f", 00:24:16.495 "superblock": true, 00:24:16.495 "num_base_bdevs": 4, 00:24:16.495 "num_base_bdevs_discovered": 4, 00:24:16.495 "num_base_bdevs_operational": 4, 00:24:16.495 "process": { 00:24:16.495 "type": "rebuild", 00:24:16.495 "target": "spare", 00:24:16.495 "progress": { 00:24:16.495 "blocks": 170880, 00:24:16.495 "percent": 89 00:24:16.495 } 00:24:16.495 }, 00:24:16.495 "base_bdevs_list": [ 00:24:16.495 { 00:24:16.495 "name": "spare", 00:24:16.495 "uuid": "79f31a6a-88be-5d13-bb06-28dd731d54a2", 00:24:16.495 "is_configured": true, 00:24:16.495 "data_offset": 2048, 00:24:16.495 "data_size": 63488 00:24:16.495 }, 00:24:16.495 { 00:24:16.495 "name": "BaseBdev2", 00:24:16.495 "uuid": "ac569b85-551e-53fb-8b19-28db34e4a5ba", 00:24:16.495 "is_configured": true, 00:24:16.495 "data_offset": 2048, 00:24:16.495 "data_size": 63488 00:24:16.495 }, 00:24:16.495 { 00:24:16.495 "name": "BaseBdev3", 00:24:16.495 "uuid": "a7c233c4-aa49-58ec-bb40-01693bbd53f0", 00:24:16.495 "is_configured": true, 00:24:16.495 "data_offset": 2048, 00:24:16.495 "data_size": 63488 00:24:16.495 }, 00:24:16.495 { 00:24:16.495 "name": "BaseBdev4", 00:24:16.495 "uuid": "cc0c58a3-a64b-5d1f-8dcc-7ba1431eeb7f", 00:24:16.495 "is_configured": true, 00:24:16.495 "data_offset": 2048, 00:24:16.495 "data_size": 63488 00:24:16.495 } 00:24:16.495 ] 00:24:16.495 }' 00:24:16.495 21:46:36 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:24:16.495 21:46:36 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:24:16.495 21:46:36 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:24:16.495 21:46:36 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:24:16.495 21:46:36 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:24:17.431 [2024-12-06 21:46:37.836863] bdev_raid.c:2568:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:24:17.432 [2024-12-06 21:46:37.836952] bdev_raid.c:2285:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:24:17.432 [2024-12-06 21:46:37.837126] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:17.432 21:46:37 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:24:17.432 21:46:37 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:17.432 21:46:37 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:24:17.432 21:46:37 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:24:17.432 21:46:37 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:24:17.432 21:46:37 
-- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:24:17.432 21:46:37 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:17.432 21:46:37 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:17.691 21:46:38 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:24:17.691 "name": "raid_bdev1", 00:24:17.691 "uuid": "1520f2d6-9ddd-46a4-abaa-c25fd01ae0ae", 00:24:17.691 "strip_size_kb": 64, 00:24:17.691 "state": "online", 00:24:17.691 "raid_level": "raid5f", 00:24:17.691 "superblock": true, 00:24:17.691 "num_base_bdevs": 4, 00:24:17.691 "num_base_bdevs_discovered": 4, 00:24:17.691 "num_base_bdevs_operational": 4, 00:24:17.691 "base_bdevs_list": [ 00:24:17.691 { 00:24:17.691 "name": "spare", 00:24:17.691 "uuid": "79f31a6a-88be-5d13-bb06-28dd731d54a2", 00:24:17.691 "is_configured": true, 00:24:17.691 "data_offset": 2048, 00:24:17.691 "data_size": 63488 00:24:17.691 }, 00:24:17.691 { 00:24:17.691 "name": "BaseBdev2", 00:24:17.691 "uuid": "ac569b85-551e-53fb-8b19-28db34e4a5ba", 00:24:17.691 "is_configured": true, 00:24:17.691 "data_offset": 2048, 00:24:17.691 "data_size": 63488 00:24:17.691 }, 00:24:17.691 { 00:24:17.691 "name": "BaseBdev3", 00:24:17.691 "uuid": "a7c233c4-aa49-58ec-bb40-01693bbd53f0", 00:24:17.691 "is_configured": true, 00:24:17.691 "data_offset": 2048, 00:24:17.691 "data_size": 63488 00:24:17.691 }, 00:24:17.691 { 00:24:17.691 "name": "BaseBdev4", 00:24:17.691 "uuid": "cc0c58a3-a64b-5d1f-8dcc-7ba1431eeb7f", 00:24:17.691 "is_configured": true, 00:24:17.691 "data_offset": 2048, 00:24:17.691 "data_size": 63488 00:24:17.691 } 00:24:17.691 ] 00:24:17.691 }' 00:24:17.691 21:46:38 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:24:17.691 21:46:38 -- bdev/bdev_raid.sh@190 -- # [[ none == \r\e\b\u\i\l\d ]] 00:24:17.691 21:46:38 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:24:17.691 21:46:38 -- bdev/bdev_raid.sh@191 -- # [[ none == \s\p\a\r\e ]] 00:24:17.691 21:46:38 -- bdev/bdev_raid.sh@660 -- # break 00:24:17.691 21:46:38 -- bdev/bdev_raid.sh@666 -- # verify_raid_bdev_process raid_bdev1 none none 00:24:17.691 21:46:38 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:24:17.691 21:46:38 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:24:17.691 21:46:38 -- bdev/bdev_raid.sh@185 -- # local target=none 00:24:17.691 21:46:38 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:24:17.691 21:46:38 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:17.691 21:46:38 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:17.950 21:46:38 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:24:17.950 "name": "raid_bdev1", 00:24:17.950 "uuid": "1520f2d6-9ddd-46a4-abaa-c25fd01ae0ae", 00:24:17.950 "strip_size_kb": 64, 00:24:17.950 "state": "online", 00:24:17.950 "raid_level": "raid5f", 00:24:17.950 "superblock": true, 00:24:17.950 "num_base_bdevs": 4, 00:24:17.950 "num_base_bdevs_discovered": 4, 00:24:17.950 "num_base_bdevs_operational": 4, 00:24:17.950 "base_bdevs_list": [ 00:24:17.950 { 00:24:17.950 "name": "spare", 00:24:17.950 "uuid": "79f31a6a-88be-5d13-bb06-28dd731d54a2", 00:24:17.950 "is_configured": true, 00:24:17.950 "data_offset": 2048, 00:24:17.950 "data_size": 63488 00:24:17.950 }, 00:24:17.950 { 00:24:17.950 "name": "BaseBdev2", 00:24:17.950 "uuid": "ac569b85-551e-53fb-8b19-28db34e4a5ba", 00:24:17.950 "is_configured": true, 
00:24:17.950 "data_offset": 2048, 00:24:17.950 "data_size": 63488 00:24:17.950 }, 00:24:17.950 { 00:24:17.950 "name": "BaseBdev3", 00:24:17.950 "uuid": "a7c233c4-aa49-58ec-bb40-01693bbd53f0", 00:24:17.950 "is_configured": true, 00:24:17.950 "data_offset": 2048, 00:24:17.950 "data_size": 63488 00:24:17.950 }, 00:24:17.950 { 00:24:17.950 "name": "BaseBdev4", 00:24:17.950 "uuid": "cc0c58a3-a64b-5d1f-8dcc-7ba1431eeb7f", 00:24:17.950 "is_configured": true, 00:24:17.950 "data_offset": 2048, 00:24:17.950 "data_size": 63488 00:24:17.950 } 00:24:17.950 ] 00:24:17.950 }' 00:24:17.950 21:46:38 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:24:17.950 21:46:38 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:24:17.950 21:46:38 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:24:17.950 21:46:38 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:24:17.950 21:46:38 -- bdev/bdev_raid.sh@667 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:24:17.950 21:46:38 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:24:17.950 21:46:38 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:24:17.950 21:46:38 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:24:17.950 21:46:38 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:24:17.950 21:46:38 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:24:17.950 21:46:38 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:24:17.950 21:46:38 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:24:17.950 21:46:38 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:24:17.950 21:46:38 -- bdev/bdev_raid.sh@125 -- # local tmp 00:24:17.950 21:46:38 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:17.950 21:46:38 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:18.210 21:46:38 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:24:18.210 "name": "raid_bdev1", 00:24:18.210 "uuid": "1520f2d6-9ddd-46a4-abaa-c25fd01ae0ae", 00:24:18.210 "strip_size_kb": 64, 00:24:18.210 "state": "online", 00:24:18.210 "raid_level": "raid5f", 00:24:18.210 "superblock": true, 00:24:18.210 "num_base_bdevs": 4, 00:24:18.210 "num_base_bdevs_discovered": 4, 00:24:18.210 "num_base_bdevs_operational": 4, 00:24:18.210 "base_bdevs_list": [ 00:24:18.210 { 00:24:18.210 "name": "spare", 00:24:18.210 "uuid": "79f31a6a-88be-5d13-bb06-28dd731d54a2", 00:24:18.210 "is_configured": true, 00:24:18.210 "data_offset": 2048, 00:24:18.210 "data_size": 63488 00:24:18.210 }, 00:24:18.210 { 00:24:18.210 "name": "BaseBdev2", 00:24:18.210 "uuid": "ac569b85-551e-53fb-8b19-28db34e4a5ba", 00:24:18.210 "is_configured": true, 00:24:18.210 "data_offset": 2048, 00:24:18.210 "data_size": 63488 00:24:18.210 }, 00:24:18.210 { 00:24:18.210 "name": "BaseBdev3", 00:24:18.210 "uuid": "a7c233c4-aa49-58ec-bb40-01693bbd53f0", 00:24:18.210 "is_configured": true, 00:24:18.210 "data_offset": 2048, 00:24:18.210 "data_size": 63488 00:24:18.210 }, 00:24:18.210 { 00:24:18.210 "name": "BaseBdev4", 00:24:18.210 "uuid": "cc0c58a3-a64b-5d1f-8dcc-7ba1431eeb7f", 00:24:18.210 "is_configured": true, 00:24:18.210 "data_offset": 2048, 00:24:18.210 "data_size": 63488 00:24:18.210 } 00:24:18.210 ] 00:24:18.210 }' 00:24:18.210 21:46:38 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:24:18.210 21:46:38 -- common/autotest_common.sh@10 -- # set +x 00:24:18.469 21:46:38 -- bdev/bdev_raid.sh@670 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
-s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:24:18.728 [2024-12-06 21:46:39.091863] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:24:18.728 [2024-12-06 21:46:39.091917] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:24:18.728 [2024-12-06 21:46:39.091997] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:24:18.728 [2024-12-06 21:46:39.092132] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:24:18.728 [2024-12-06 21:46:39.092157] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x51600000a580 name raid_bdev1, state offline 00:24:18.728 21:46:39 -- bdev/bdev_raid.sh@671 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:18.728 21:46:39 -- bdev/bdev_raid.sh@671 -- # jq length 00:24:18.987 21:46:39 -- bdev/bdev_raid.sh@671 -- # [[ 0 == 0 ]] 00:24:18.987 21:46:39 -- bdev/bdev_raid.sh@673 -- # '[' false = true ']' 00:24:18.987 21:46:39 -- bdev/bdev_raid.sh@687 -- # nbd_start_disks /var/tmp/spdk-raid.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:24:18.987 21:46:39 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:24:18.987 21:46:39 -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:24:18.987 21:46:39 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:24:18.987 21:46:39 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:24:18.987 21:46:39 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:24:18.987 21:46:39 -- bdev/nbd_common.sh@12 -- # local i 00:24:18.987 21:46:39 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:24:18.987 21:46:39 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:24:18.987 21:46:39 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:24:19.246 /dev/nbd0 00:24:19.246 21:46:39 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:24:19.246 21:46:39 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:24:19.246 21:46:39 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:24:19.246 21:46:39 -- common/autotest_common.sh@867 -- # local i 00:24:19.246 21:46:39 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:24:19.246 21:46:39 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:24:19.246 21:46:39 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:24:19.246 21:46:39 -- common/autotest_common.sh@871 -- # break 00:24:19.246 21:46:39 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:24:19.246 21:46:39 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:24:19.246 21:46:39 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:24:19.246 1+0 records in 00:24:19.246 1+0 records out 00:24:19.246 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00178081 s, 2.3 MB/s 00:24:19.246 21:46:39 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:24:19.246 21:46:39 -- common/autotest_common.sh@884 -- # size=4096 00:24:19.246 21:46:39 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:24:19.246 21:46:39 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:24:19.246 21:46:39 -- common/autotest_common.sh@887 -- # return 0 00:24:19.246 21:46:39 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:24:19.246 21:46:39 -- 
bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:24:19.246 21:46:39 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd1 00:24:19.505 /dev/nbd1 00:24:19.505 21:46:39 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:24:19.505 21:46:39 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:24:19.505 21:46:39 -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:24:19.505 21:46:39 -- common/autotest_common.sh@867 -- # local i 00:24:19.505 21:46:39 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:24:19.505 21:46:39 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:24:19.505 21:46:39 -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:24:19.505 21:46:39 -- common/autotest_common.sh@871 -- # break 00:24:19.505 21:46:39 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:24:19.505 21:46:39 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:24:19.505 21:46:39 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:24:19.505 1+0 records in 00:24:19.505 1+0 records out 00:24:19.505 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000340761 s, 12.0 MB/s 00:24:19.505 21:46:39 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:24:19.505 21:46:39 -- common/autotest_common.sh@884 -- # size=4096 00:24:19.505 21:46:39 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:24:19.505 21:46:39 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:24:19.505 21:46:39 -- common/autotest_common.sh@887 -- # return 0 00:24:19.505 21:46:39 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:24:19.505 21:46:39 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:24:19.505 21:46:39 -- bdev/bdev_raid.sh@688 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:24:19.505 21:46:39 -- bdev/bdev_raid.sh@689 -- # nbd_stop_disks /var/tmp/spdk-raid.sock '/dev/nbd0 /dev/nbd1' 00:24:19.505 21:46:39 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:24:19.505 21:46:39 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:24:19.505 21:46:39 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:24:19.505 21:46:39 -- bdev/nbd_common.sh@51 -- # local i 00:24:19.505 21:46:39 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:24:19.505 21:46:39 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:24:19.764 21:46:40 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:24:19.764 21:46:40 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:24:19.764 21:46:40 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:24:19.764 21:46:40 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:24:19.764 21:46:40 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:24:19.764 21:46:40 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:24:19.764 21:46:40 -- bdev/nbd_common.sh@41 -- # break 00:24:19.764 21:46:40 -- bdev/nbd_common.sh@45 -- # return 0 00:24:19.764 21:46:40 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:24:19.764 21:46:40 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:24:20.023 21:46:40 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:24:20.023 21:46:40 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:24:20.023 21:46:40 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:24:20.023 
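The cmp step above is the data-integrity half of the rebuild test: BaseBdev1 and the spare device are both exported as NBD block devices so the spare's contents can be checked against the original byte for byte. Condensed to its essential steps (a sketch; every command and path below is copied from the trace, only the waitfornbd polling loops are omitted):

  rpc='/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock'
  $rpc nbd_start_disk BaseBdev1 /dev/nbd0    # original base bdev
  $rpc nbd_start_disk spare /dev/nbd1        # the spare being verified
  # -i 1048576 skips the first 1 MiB on both devices: at the 512-byte block
  # size reported above, that is exactly the 2048-block superblock data_offset
  cmp -i 1048576 /dev/nbd0 /dev/nbd1
  $rpc nbd_stop_disk /dev/nbd0
  $rpc nbd_stop_disk /dev/nbd1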
21:46:40 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:24:20.023 21:46:40 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:24:20.023 21:46:40 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:24:20.023 21:46:40 -- bdev/nbd_common.sh@41 -- # break 00:24:20.023 21:46:40 -- bdev/nbd_common.sh@45 -- # return 0 00:24:20.023 21:46:40 -- bdev/bdev_raid.sh@692 -- # '[' true = true ']' 00:24:20.023 21:46:40 -- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}" 00:24:20.023 21:46:40 -- bdev/bdev_raid.sh@695 -- # '[' -z BaseBdev1 ']' 00:24:20.023 21:46:40 -- bdev/bdev_raid.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev1 00:24:20.282 21:46:40 -- bdev/bdev_raid.sh@699 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:24:20.542 [2024-12-06 21:46:40.934798] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:24:20.542 [2024-12-06 21:46:40.934877] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:20.542 [2024-12-06 21:46:40.934912] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x51600000b480 00:24:20.542 [2024-12-06 21:46:40.934926] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:20.542 [2024-12-06 21:46:40.937160] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:20.542 [2024-12-06 21:46:40.937199] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:24:20.542 [2024-12-06 21:46:40.937311] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev BaseBdev1 00:24:20.542 [2024-12-06 21:46:40.937367] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:24:20.542 BaseBdev1 00:24:20.542 21:46:40 -- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}" 00:24:20.542 21:46:40 -- bdev/bdev_raid.sh@695 -- # '[' -z BaseBdev2 ']' 00:24:20.542 21:46:40 -- bdev/bdev_raid.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev2 00:24:20.801 21:46:41 -- bdev/bdev_raid.sh@699 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:24:20.801 [2024-12-06 21:46:41.286834] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:24:20.801 [2024-12-06 21:46:41.286898] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:20.801 [2024-12-06 21:46:41.286939] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x51600000bd80 00:24:20.801 [2024-12-06 21:46:41.286955] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:20.801 [2024-12-06 21:46:41.287378] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:20.801 [2024-12-06 21:46:41.287411] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:24:20.801 [2024-12-06 21:46:41.287526] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev BaseBdev2 00:24:20.801 [2024-12-06 21:46:41.287544] bdev_raid.c:3237:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev2 (3) greater than existing raid bdev raid_bdev1 (1) 00:24:20.801 [2024-12-06 21:46:41.287556] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid 
bdev: raid_bdev1 00:24:20.801 [2024-12-06 21:46:41.287576] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x51600000ba80 name raid_bdev1, state configuring 00:24:20.802 [2024-12-06 21:46:41.287662] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:24:20.802 BaseBdev2 00:24:21.060 21:46:41 -- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}" 00:24:21.060 21:46:41 -- bdev/bdev_raid.sh@695 -- # '[' -z BaseBdev3 ']' 00:24:21.060 21:46:41 -- bdev/bdev_raid.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev3 00:24:21.060 21:46:41 -- bdev/bdev_raid.sh@699 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:24:21.319 [2024-12-06 21:46:41.655360] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:24:21.319 [2024-12-06 21:46:41.655449] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:21.319 [2024-12-06 21:46:41.655500] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x51600000c380 00:24:21.319 [2024-12-06 21:46:41.655516] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:21.319 [2024-12-06 21:46:41.656036] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:21.319 [2024-12-06 21:46:41.656078] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:24:21.319 [2024-12-06 21:46:41.656170] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev BaseBdev3 00:24:21.319 [2024-12-06 21:46:41.656205] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:24:21.319 BaseBdev3 00:24:21.319 21:46:41 -- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}" 00:24:21.319 21:46:41 -- bdev/bdev_raid.sh@695 -- # '[' -z BaseBdev4 ']' 00:24:21.319 21:46:41 -- bdev/bdev_raid.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev4 00:24:21.578 21:46:41 -- bdev/bdev_raid.sh@699 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:24:21.578 [2024-12-06 21:46:42.031473] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:24:21.578 [2024-12-06 21:46:42.031557] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:21.578 [2024-12-06 21:46:42.031589] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x51600000c680 00:24:21.578 [2024-12-06 21:46:42.031606] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:21.578 [2024-12-06 21:46:42.032098] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:21.578 [2024-12-06 21:46:42.032135] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:24:21.578 [2024-12-06 21:46:42.032237] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev BaseBdev4 00:24:21.578 [2024-12-06 21:46:42.032287] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:24:21.578 BaseBdev4 00:24:21.578 21:46:42 -- bdev/bdev_raid.sh@701 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:24:21.838 21:46:42 -- bdev/bdev_raid.sh@702 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:24:22.097 [2024-12-06 21:46:42.387527] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:24:22.097 [2024-12-06 21:46:42.387601] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:22.097 [2024-12-06 21:46:42.387634] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x51600000c980 00:24:22.097 [2024-12-06 21:46:42.387650] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:22.097 [2024-12-06 21:46:42.388171] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:22.097 [2024-12-06 21:46:42.388283] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:24:22.097 [2024-12-06 21:46:42.388389] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev spare 00:24:22.097 [2024-12-06 21:46:42.388434] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:24:22.097 spare 00:24:22.097 21:46:42 -- bdev/bdev_raid.sh@704 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:24:22.097 21:46:42 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:24:22.097 21:46:42 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:24:22.097 21:46:42 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:24:22.097 21:46:42 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:24:22.097 21:46:42 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:24:22.097 21:46:42 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:24:22.097 21:46:42 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:24:22.097 21:46:42 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:24:22.097 21:46:42 -- bdev/bdev_raid.sh@125 -- # local tmp 00:24:22.097 21:46:42 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:22.097 21:46:42 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:22.097 [2024-12-06 21:46:42.488608] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x51600000c080 00:24:22.097 [2024-12-06 21:46:42.488686] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:24:22.097 [2024-12-06 21:46:42.488807] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x50d000048a80 00:24:22.097 [2024-12-06 21:46:42.494021] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x51600000c080 00:24:22.098 [2024-12-06 21:46:42.494047] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x51600000c080 00:24:22.098 [2024-12-06 21:46:42.494236] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:22.356 21:46:42 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:24:22.356 "name": "raid_bdev1", 00:24:22.356 "uuid": "1520f2d6-9ddd-46a4-abaa-c25fd01ae0ae", 00:24:22.356 "strip_size_kb": 64, 00:24:22.356 "state": "online", 00:24:22.356 "raid_level": "raid5f", 00:24:22.356 "superblock": true, 00:24:22.356 "num_base_bdevs": 4, 00:24:22.356 "num_base_bdevs_discovered": 4, 00:24:22.356 "num_base_bdevs_operational": 4, 00:24:22.356 "base_bdevs_list": [ 00:24:22.356 { 00:24:22.356 "name": "spare", 00:24:22.356 "uuid": "79f31a6a-88be-5d13-bb06-28dd731d54a2", 00:24:22.356 "is_configured": true, 00:24:22.356 
"data_offset": 2048, 00:24:22.356 "data_size": 63488 00:24:22.356 }, 00:24:22.356 { 00:24:22.356 "name": "BaseBdev2", 00:24:22.356 "uuid": "ac569b85-551e-53fb-8b19-28db34e4a5ba", 00:24:22.356 "is_configured": true, 00:24:22.356 "data_offset": 2048, 00:24:22.356 "data_size": 63488 00:24:22.356 }, 00:24:22.356 { 00:24:22.356 "name": "BaseBdev3", 00:24:22.356 "uuid": "a7c233c4-aa49-58ec-bb40-01693bbd53f0", 00:24:22.356 "is_configured": true, 00:24:22.356 "data_offset": 2048, 00:24:22.356 "data_size": 63488 00:24:22.356 }, 00:24:22.356 { 00:24:22.356 "name": "BaseBdev4", 00:24:22.356 "uuid": "cc0c58a3-a64b-5d1f-8dcc-7ba1431eeb7f", 00:24:22.356 "is_configured": true, 00:24:22.356 "data_offset": 2048, 00:24:22.356 "data_size": 63488 00:24:22.356 } 00:24:22.356 ] 00:24:22.356 }' 00:24:22.356 21:46:42 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:24:22.356 21:46:42 -- common/autotest_common.sh@10 -- # set +x 00:24:22.615 21:46:42 -- bdev/bdev_raid.sh@705 -- # verify_raid_bdev_process raid_bdev1 none none 00:24:22.616 21:46:42 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:24:22.616 21:46:42 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:24:22.616 21:46:42 -- bdev/bdev_raid.sh@185 -- # local target=none 00:24:22.616 21:46:42 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:24:22.616 21:46:42 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:22.616 21:46:42 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:22.616 21:46:43 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:24:22.616 "name": "raid_bdev1", 00:24:22.616 "uuid": "1520f2d6-9ddd-46a4-abaa-c25fd01ae0ae", 00:24:22.616 "strip_size_kb": 64, 00:24:22.616 "state": "online", 00:24:22.616 "raid_level": "raid5f", 00:24:22.616 "superblock": true, 00:24:22.616 "num_base_bdevs": 4, 00:24:22.616 "num_base_bdevs_discovered": 4, 00:24:22.616 "num_base_bdevs_operational": 4, 00:24:22.616 "base_bdevs_list": [ 00:24:22.616 { 00:24:22.616 "name": "spare", 00:24:22.616 "uuid": "79f31a6a-88be-5d13-bb06-28dd731d54a2", 00:24:22.616 "is_configured": true, 00:24:22.616 "data_offset": 2048, 00:24:22.616 "data_size": 63488 00:24:22.616 }, 00:24:22.616 { 00:24:22.616 "name": "BaseBdev2", 00:24:22.616 "uuid": "ac569b85-551e-53fb-8b19-28db34e4a5ba", 00:24:22.616 "is_configured": true, 00:24:22.616 "data_offset": 2048, 00:24:22.616 "data_size": 63488 00:24:22.616 }, 00:24:22.616 { 00:24:22.616 "name": "BaseBdev3", 00:24:22.616 "uuid": "a7c233c4-aa49-58ec-bb40-01693bbd53f0", 00:24:22.616 "is_configured": true, 00:24:22.616 "data_offset": 2048, 00:24:22.616 "data_size": 63488 00:24:22.616 }, 00:24:22.616 { 00:24:22.616 "name": "BaseBdev4", 00:24:22.616 "uuid": "cc0c58a3-a64b-5d1f-8dcc-7ba1431eeb7f", 00:24:22.616 "is_configured": true, 00:24:22.616 "data_offset": 2048, 00:24:22.616 "data_size": 63488 00:24:22.616 } 00:24:22.616 ] 00:24:22.616 }' 00:24:22.616 21:46:43 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:24:22.616 21:46:43 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:24:22.616 21:46:43 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:24:22.616 21:46:43 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:24:22.616 21:46:43 -- bdev/bdev_raid.sh@706 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:22.616 21:46:43 -- bdev/bdev_raid.sh@706 -- # jq -r '.[].base_bdevs_list[0].name' 00:24:22.875 21:46:43 -- 
bdev/bdev_raid.sh@706 -- # [[ spare == \s\p\a\r\e ]] 00:24:22.875 21:46:43 -- bdev/bdev_raid.sh@709 -- # killprocess 86317 00:24:22.875 21:46:43 -- common/autotest_common.sh@936 -- # '[' -z 86317 ']' 00:24:22.875 21:46:43 -- common/autotest_common.sh@940 -- # kill -0 86317 00:24:22.875 21:46:43 -- common/autotest_common.sh@941 -- # uname 00:24:22.875 21:46:43 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:24:22.875 21:46:43 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 86317 00:24:23.134 21:46:43 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:24:23.134 21:46:43 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:24:23.134 killing process with pid 86317 00:24:23.134 21:46:43 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 86317' 00:24:23.134 Received shutdown signal, test time was about 60.000000 seconds 00:24:23.134 00:24:23.134 Latency(us) 00:24:23.134 [2024-12-06T21:46:43.631Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:23.134 [2024-12-06T21:46:43.631Z] =================================================================================================================== 00:24:23.134 [2024-12-06T21:46:43.631Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:24:23.134 21:46:43 -- common/autotest_common.sh@955 -- # kill 86317 00:24:23.134 [2024-12-06 21:46:43.383894] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:24:23.134 21:46:43 -- common/autotest_common.sh@960 -- # wait 86317 00:24:23.134 [2024-12-06 21:46:43.384014] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:24:23.134 [2024-12-06 21:46:43.384113] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:24:23.134 [2024-12-06 21:46:43.384132] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x51600000c080 name raid_bdev1, state offline 00:24:23.393 [2024-12-06 21:46:43.696660] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:24:24.330 21:46:44 -- bdev/bdev_raid.sh@711 -- # return 0 00:24:24.330 00:24:24.330 real 0m25.570s 00:24:24.330 user 0m36.616s 00:24:24.330 sys 0m3.025s 00:24:24.330 21:46:44 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:24:24.330 ************************************ 00:24:24.330 END TEST raid5f_rebuild_test_sb 00:24:24.330 ************************************ 00:24:24.330 21:46:44 -- common/autotest_common.sh@10 -- # set +x 00:24:24.330 21:46:44 -- bdev/bdev_raid.sh@754 -- # rm -f /raidrandtest 00:24:24.330 00:24:24.330 real 10m31.590s 00:24:24.330 user 16m18.162s 00:24:24.330 sys 1m33.619s 00:24:24.330 21:46:44 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:24:24.330 ************************************ 00:24:24.330 21:46:44 -- common/autotest_common.sh@10 -- # set +x 00:24:24.330 END TEST bdev_raid 00:24:24.330 ************************************ 00:24:24.330 21:46:44 -- spdk/autotest.sh@184 -- # run_test bdevperf_config /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test_config.sh 00:24:24.330 21:46:44 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:24:24.330 21:46:44 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:24:24.330 21:46:44 -- common/autotest_common.sh@10 -- # set +x 00:24:24.330 ************************************ 00:24:24.330 START TEST bdevperf_config 00:24:24.330 ************************************ 00:24:24.330 21:46:44 -- common/autotest_common.sh@1114 -- # 
/home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test_config.sh 00:24:24.330 * Looking for test storage... 00:24:24.330 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf 00:24:24.330 21:46:44 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:24:24.330 21:46:44 -- common/autotest_common.sh@1690 -- # lcov --version 00:24:24.330 21:46:44 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:24:24.591 21:46:44 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:24:24.591 21:46:44 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:24:24.591 21:46:44 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:24:24.591 21:46:44 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:24:24.591 21:46:44 -- scripts/common.sh@335 -- # IFS=.-: 00:24:24.591 21:46:44 -- scripts/common.sh@335 -- # read -ra ver1 00:24:24.591 21:46:44 -- scripts/common.sh@336 -- # IFS=.-: 00:24:24.591 21:46:44 -- scripts/common.sh@336 -- # read -ra ver2 00:24:24.591 21:46:44 -- scripts/common.sh@337 -- # local 'op=<' 00:24:24.591 21:46:44 -- scripts/common.sh@339 -- # ver1_l=2 00:24:24.591 21:46:44 -- scripts/common.sh@340 -- # ver2_l=1 00:24:24.591 21:46:44 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:24:24.591 21:46:44 -- scripts/common.sh@343 -- # case "$op" in 00:24:24.591 21:46:44 -- scripts/common.sh@344 -- # : 1 00:24:24.591 21:46:44 -- scripts/common.sh@363 -- # (( v = 0 )) 00:24:24.591 21:46:44 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:24:24.591 21:46:44 -- scripts/common.sh@364 -- # decimal 1 00:24:24.591 21:46:44 -- scripts/common.sh@352 -- # local d=1 00:24:24.591 21:46:44 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:24.591 21:46:44 -- scripts/common.sh@354 -- # echo 1 00:24:24.591 21:46:44 -- scripts/common.sh@364 -- # ver1[v]=1 00:24:24.591 21:46:44 -- scripts/common.sh@365 -- # decimal 2 00:24:24.591 21:46:44 -- scripts/common.sh@352 -- # local d=2 00:24:24.591 21:46:44 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:24.591 21:46:44 -- scripts/common.sh@354 -- # echo 2 00:24:24.591 21:46:44 -- scripts/common.sh@365 -- # ver2[v]=2 00:24:24.591 21:46:44 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:24:24.591 21:46:44 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:24:24.591 21:46:44 -- scripts/common.sh@367 -- # return 0 00:24:24.591 21:46:44 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:24.591 21:46:44 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:24:24.591 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:24.591 --rc genhtml_branch_coverage=1 00:24:24.591 --rc genhtml_function_coverage=1 00:24:24.591 --rc genhtml_legend=1 00:24:24.591 --rc geninfo_all_blocks=1 00:24:24.591 --rc geninfo_unexecuted_blocks=1 00:24:24.591 00:24:24.591 ' 00:24:24.591 21:46:44 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:24:24.591 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:24.591 --rc genhtml_branch_coverage=1 00:24:24.591 --rc genhtml_function_coverage=1 00:24:24.591 --rc genhtml_legend=1 00:24:24.591 --rc geninfo_all_blocks=1 00:24:24.591 --rc geninfo_unexecuted_blocks=1 00:24:24.591 00:24:24.591 ' 00:24:24.591 21:46:44 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:24:24.591 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:24.591 --rc genhtml_branch_coverage=1 00:24:24.591 --rc genhtml_function_coverage=1 00:24:24.591 --rc genhtml_legend=1 00:24:24.591 --rc 
geninfo_all_blocks=1 00:24:24.591 --rc geninfo_unexecuted_blocks=1 00:24:24.591 00:24:24.591 ' 00:24:24.591 21:46:44 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:24:24.591 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:24.591 --rc genhtml_branch_coverage=1 00:24:24.591 --rc genhtml_function_coverage=1 00:24:24.591 --rc genhtml_legend=1 00:24:24.591 --rc geninfo_all_blocks=1 00:24:24.591 --rc geninfo_unexecuted_blocks=1 00:24:24.591 00:24:24.591 ' 00:24:24.591 21:46:44 -- bdevperf/test_config.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/common.sh 00:24:24.591 21:46:44 -- bdevperf/common.sh@5 -- # bdevperf=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf 00:24:24.591 21:46:44 -- bdevperf/test_config.sh@12 -- # jsonconf=/home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/conf.json 00:24:24.591 21:46:44 -- bdevperf/test_config.sh@13 -- # testconf=/home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf 00:24:24.591 21:46:44 -- bdevperf/test_config.sh@15 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:24:24.591 21:46:44 -- bdevperf/test_config.sh@17 -- # create_job global read Malloc0 00:24:24.591 21:46:44 -- bdevperf/common.sh@8 -- # local job_section=global 00:24:24.591 21:46:44 -- bdevperf/common.sh@9 -- # local rw=read 00:24:24.591 21:46:44 -- bdevperf/common.sh@10 -- # local filename=Malloc0 00:24:24.591 21:46:44 -- bdevperf/common.sh@12 -- # [[ global == \g\l\o\b\a\l ]] 00:24:24.591 21:46:44 -- bdevperf/common.sh@13 -- # cat 00:24:24.591 21:46:44 -- bdevperf/common.sh@18 -- # job='[global]' 00:24:24.591 00:24:24.591 21:46:44 -- bdevperf/common.sh@19 -- # echo 00:24:24.591 21:46:44 -- bdevperf/common.sh@20 -- # cat 00:24:24.591 21:46:44 -- bdevperf/test_config.sh@18 -- # create_job job0 00:24:24.591 21:46:44 -- bdevperf/common.sh@8 -- # local job_section=job0 00:24:24.591 21:46:44 -- bdevperf/common.sh@9 -- # local rw= 00:24:24.591 21:46:44 -- bdevperf/common.sh@10 -- # local filename= 00:24:24.591 21:46:44 -- bdevperf/common.sh@12 -- # [[ job0 == \g\l\o\b\a\l ]] 00:24:24.591 21:46:44 -- bdevperf/common.sh@18 -- # job='[job0]' 00:24:24.591 00:24:24.591 21:46:44 -- bdevperf/common.sh@19 -- # echo 00:24:24.591 21:46:44 -- bdevperf/common.sh@20 -- # cat 00:24:24.591 21:46:44 -- bdevperf/test_config.sh@19 -- # create_job job1 00:24:24.591 21:46:44 -- bdevperf/common.sh@8 -- # local job_section=job1 00:24:24.591 21:46:44 -- bdevperf/common.sh@9 -- # local rw= 00:24:24.591 21:46:44 -- bdevperf/common.sh@10 -- # local filename= 00:24:24.591 21:46:44 -- bdevperf/common.sh@12 -- # [[ job1 == \g\l\o\b\a\l ]] 00:24:24.591 21:46:44 -- bdevperf/common.sh@18 -- # job='[job1]' 00:24:24.591 00:24:24.591 21:46:44 -- bdevperf/common.sh@19 -- # echo 00:24:24.591 21:46:44 -- bdevperf/common.sh@20 -- # cat 00:24:24.591 21:46:44 -- bdevperf/test_config.sh@20 -- # create_job job2 00:24:24.591 21:46:44 -- bdevperf/common.sh@8 -- # local job_section=job2 00:24:24.591 21:46:44 -- bdevperf/common.sh@9 -- # local rw= 00:24:24.591 21:46:44 -- bdevperf/common.sh@10 -- # local filename= 00:24:24.591 21:46:44 -- bdevperf/common.sh@12 -- # [[ job2 == \g\l\o\b\a\l ]] 00:24:24.591 00:24:24.591 21:46:44 -- bdevperf/common.sh@18 -- # job='[job2]' 00:24:24.591 21:46:44 -- bdevperf/common.sh@19 -- # echo 00:24:24.591 21:46:44 -- bdevperf/common.sh@20 -- # cat 00:24:24.591 21:46:44 -- bdevperf/test_config.sh@21 -- # create_job job3 00:24:24.591 21:46:44 -- bdevperf/common.sh@8 -- # local job_section=job3 00:24:24.591 21:46:44 -- bdevperf/common.sh@9 -- # local rw= 00:24:24.591 
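The create_job calls in this stretch assemble test.conf, the fio-style job file later passed to bdevperf via -j. Reconstructed from the arguments in the trace (create_job global read Malloc0 plus four empty job sections; any extra [global] keys fed through the cat at common.sh@13 are not visible in the log), the file comes out along these lines:

  [global]
  rw=read
  filename=Malloc0

  [job0]

  [job1]

  [job2]

  [job3]

Empty [jobN] sections inherit the [global] parameters, which is why the run that follows reports 'Using job config with 4 jobs', all reading Malloc0.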
21:46:44 -- bdevperf/common.sh@10 -- # local filename= 00:24:24.591 21:46:44 -- bdevperf/common.sh@12 -- # [[ job3 == \g\l\o\b\a\l ]] 00:24:24.591 21:46:44 -- bdevperf/common.sh@18 -- # job='[job3]' 00:24:24.591 00:24:24.591 21:46:44 -- bdevperf/common.sh@19 -- # echo 00:24:24.591 21:46:44 -- bdevperf/common.sh@20 -- # cat 00:24:24.591 21:46:44 -- bdevperf/test_config.sh@22 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -t 2 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/conf.json -j /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf 00:24:28.781 21:46:48 -- bdevperf/test_config.sh@22 -- # bdevperf_output='[2024-12-06 21:46:44.952766] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:24:28.781 [2024-12-06 21:46:44.952940] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87014 ] 00:24:28.781 Using job config with 4 jobs 00:24:28.781 [2024-12-06 21:46:45.122380] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:28.781 [2024-12-06 21:46:45.288982] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:28.781 cpumask for '\''job0'\'' is too big 00:24:28.782 cpumask for '\''job1'\'' is too big 00:24:28.782 cpumask for '\''job2'\'' is too big 00:24:28.782 cpumask for '\''job3'\'' is too big 00:24:28.782 Running I/O for 2 seconds... 00:24:28.782 00:24:28.782 Latency(us) 00:24:28.782 [2024-12-06T21:46:49.279Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:28.782 [2024-12-06T21:46:49.279Z] Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:24:28.782 Malloc0 : 2.01 31339.47 30.60 0.00 0.00 8162.45 1489.45 12809.31 00:24:28.782 [2024-12-06T21:46:49.279Z] Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:24:28.782 Malloc0 : 2.02 31334.52 30.60 0.00 0.00 8148.93 1429.88 11319.85 00:24:28.782 [2024-12-06T21:46:49.279Z] Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:24:28.782 Malloc0 : 2.02 31309.12 30.58 0.00 0.00 8140.89 1452.22 10664.49 00:24:28.782 [2024-12-06T21:46:49.279Z] Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:24:28.782 Malloc0 : 2.02 31284.31 30.55 0.00 0.00 8132.38 1444.77 10247.45 00:24:28.782 [2024-12-06T21:46:49.279Z] =================================================================================================================== 00:24:28.782 [2024-12-06T21:46:49.279Z] Total : 125267.42 122.33 0.00 0.00 8146.14 1429.88 12809.31' 00:24:28.782 21:46:48 -- bdevperf/test_config.sh@23 -- # get_num_jobs '[2024-12-06 21:46:44.952766] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:24:28.782 [2024-12-06 21:46:44.952940] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87014 ] 00:24:28.782 Using job config with 4 jobs 00:24:28.782 [2024-12-06 21:46:45.122380] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:28.782 [2024-12-06 21:46:45.288982] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:28.782 cpumask for '\''job0'\'' is too big 00:24:28.782 cpumask for '\''job1'\'' is too big 00:24:28.782 cpumask for '\''job2'\'' is too big 00:24:28.782 cpumask for '\''job3'\'' is too big 00:24:28.782 Running I/O for 2 seconds... 00:24:28.782 00:24:28.782 Latency(us) 00:24:28.782 [2024-12-06T21:46:49.279Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:28.782 [2024-12-06T21:46:49.279Z] Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:24:28.782 Malloc0 : 2.01 31339.47 30.60 0.00 0.00 8162.45 1489.45 12809.31 00:24:28.782 [2024-12-06T21:46:49.279Z] Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:24:28.782 Malloc0 : 2.02 31334.52 30.60 0.00 0.00 8148.93 1429.88 11319.85 00:24:28.782 [2024-12-06T21:46:49.279Z] Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:24:28.782 Malloc0 : 2.02 31309.12 30.58 0.00 0.00 8140.89 1452.22 10664.49 00:24:28.782 [2024-12-06T21:46:49.279Z] Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:24:28.782 Malloc0 : 2.02 31284.31 30.55 0.00 0.00 8132.38 1444.77 10247.45 00:24:28.782 [2024-12-06T21:46:49.279Z] =================================================================================================================== 00:24:28.782 [2024-12-06T21:46:49.279Z] Total : 125267.42 122.33 0.00 0.00 8146.14 1429.88 12809.31' 00:24:28.782 21:46:48 -- bdevperf/common.sh@32 -- # echo '[2024-12-06 21:46:44.952766] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:24:28.782 [2024-12-06 21:46:44.952940] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87014 ] 00:24:28.782 Using job config with 4 jobs 00:24:28.782 [2024-12-06 21:46:45.122380] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:28.782 [2024-12-06 21:46:45.288982] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:28.782 cpumask for '\''job0'\'' is too big 00:24:28.782 cpumask for '\''job1'\'' is too big 00:24:28.782 cpumask for '\''job2'\'' is too big 00:24:28.782 cpumask for '\''job3'\'' is too big 00:24:28.782 Running I/O for 2 seconds... 
00:24:28.782 00:24:28.782 Latency(us) 00:24:28.782 [2024-12-06T21:46:49.279Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:28.782 [2024-12-06T21:46:49.279Z] Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:24:28.782 Malloc0 : 2.01 31339.47 30.60 0.00 0.00 8162.45 1489.45 12809.31 00:24:28.782 [2024-12-06T21:46:49.279Z] Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:24:28.782 Malloc0 : 2.02 31334.52 30.60 0.00 0.00 8148.93 1429.88 11319.85 00:24:28.782 [2024-12-06T21:46:49.279Z] Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:24:28.782 Malloc0 : 2.02 31309.12 30.58 0.00 0.00 8140.89 1452.22 10664.49 00:24:28.782 [2024-12-06T21:46:49.279Z] Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:24:28.782 Malloc0 : 2.02 31284.31 30.55 0.00 0.00 8132.38 1444.77 10247.45 00:24:28.782 [2024-12-06T21:46:49.279Z] =================================================================================================================== 00:24:28.782 [2024-12-06T21:46:49.279Z] Total : 125267.42 122.33 0.00 0.00 8146.14 1429.88 12809.31' 00:24:28.782 21:46:48 -- bdevperf/common.sh@32 -- # grep -oE 'Using job config with [0-9]+ jobs' 00:24:28.782 21:46:48 -- bdevperf/common.sh@32 -- # grep -oE '[0-9]+' 00:24:28.782 21:46:48 -- bdevperf/test_config.sh@23 -- # [[ 4 == \4 ]] 00:24:28.782 21:46:48 -- bdevperf/test_config.sh@25 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -C -t 2 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/conf.json -j /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf 00:24:28.782 [2024-12-06 21:46:48.806713] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:24:28.782 [2024-12-06 21:46:48.806848] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87055 ] 00:24:28.782 [2024-12-06 21:46:48.961809] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:28.782 [2024-12-06 21:46:49.123545] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:29.041 cpumask for 'job0' is too big 00:24:29.041 cpumask for 'job1' is too big 00:24:29.041 cpumask for 'job2' is too big 00:24:29.041 cpumask for 'job3' is too big 00:24:32.328 21:46:52 -- bdevperf/test_config.sh@25 -- # bdevperf_output='Using job config with 4 jobs 00:24:32.328 Running I/O for 2 seconds... 
00:24:32.328 00:24:32.328 Latency(us) 00:24:32.328 [2024-12-06T21:46:52.825Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:32.328 [2024-12-06T21:46:52.825Z] Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:24:32.328 Malloc0 : 2.02 31362.40 30.63 0.00 0.00 8158.71 1459.67 12571.00 00:24:32.328 [2024-12-06T21:46:52.825Z] Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:24:32.328 Malloc0 : 2.02 31340.94 30.61 0.00 0.00 8150.51 1400.09 11319.85 00:24:32.328 [2024-12-06T21:46:52.825Z] Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:24:32.328 Malloc0 : 2.02 31320.79 30.59 0.00 0.00 8141.38 1437.32 11260.28 00:24:32.328 [2024-12-06T21:46:52.825Z] Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:24:32.328 Malloc0 : 2.02 31298.59 30.57 0.00 0.00 8133.17 1422.43 11796.48 00:24:32.328 [2024-12-06T21:46:52.825Z] =================================================================================================================== 00:24:32.328 [2024-12-06T21:46:52.825Z] Total : 125322.71 122.39 0.00 0.00 8145.94 1400.09 12571.00' 00:24:32.328 21:46:52 -- bdevperf/test_config.sh@27 -- # cleanup 00:24:32.328 21:46:52 -- bdevperf/common.sh@36 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf 00:24:32.328 21:46:52 -- bdevperf/test_config.sh@29 -- # create_job job0 write Malloc0 00:24:32.328 21:46:52 -- bdevperf/common.sh@8 -- # local job_section=job0 00:24:32.328 21:46:52 -- bdevperf/common.sh@9 -- # local rw=write 00:24:32.328 21:46:52 -- bdevperf/common.sh@10 -- # local filename=Malloc0 00:24:32.328 21:46:52 -- bdevperf/common.sh@12 -- # [[ job0 == \g\l\o\b\a\l ]] 00:24:32.328 21:46:52 -- bdevperf/common.sh@18 -- # job='[job0]' 00:24:32.328 00:24:32.328 21:46:52 -- bdevperf/common.sh@19 -- # echo 00:24:32.328 21:46:52 -- bdevperf/common.sh@20 -- # cat 00:24:32.328 21:46:52 -- bdevperf/test_config.sh@30 -- # create_job job1 write Malloc0 00:24:32.328 21:46:52 -- bdevperf/common.sh@8 -- # local job_section=job1 00:24:32.328 21:46:52 -- bdevperf/common.sh@9 -- # local rw=write 00:24:32.328 21:46:52 -- bdevperf/common.sh@10 -- # local filename=Malloc0 00:24:32.328 21:46:52 -- bdevperf/common.sh@12 -- # [[ job1 == \g\l\o\b\a\l ]] 00:24:32.328 21:46:52 -- bdevperf/common.sh@18 -- # job='[job1]' 00:24:32.328 00:24:32.328 21:46:52 -- bdevperf/common.sh@19 -- # echo 00:24:32.328 21:46:52 -- bdevperf/common.sh@20 -- # cat 00:24:32.328 21:46:52 -- bdevperf/test_config.sh@31 -- # create_job job2 write Malloc0 00:24:32.328 21:46:52 -- bdevperf/common.sh@8 -- # local job_section=job2 00:24:32.329 21:46:52 -- bdevperf/common.sh@9 -- # local rw=write 00:24:32.329 21:46:52 -- bdevperf/common.sh@10 -- # local filename=Malloc0 00:24:32.329 21:46:52 -- bdevperf/common.sh@12 -- # [[ job2 == \g\l\o\b\a\l ]] 00:24:32.329 21:46:52 -- bdevperf/common.sh@18 -- # job='[job2]' 00:24:32.329 00:24:32.329 21:46:52 -- bdevperf/common.sh@19 -- # echo 00:24:32.329 21:46:52 -- bdevperf/common.sh@20 -- # cat 00:24:32.329 21:46:52 -- bdevperf/test_config.sh@32 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -t 2 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/conf.json -j /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf 00:24:36.514 21:46:56 -- bdevperf/test_config.sh@32 -- # bdevperf_output='[2024-12-06 21:46:52.656750] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:24:36.514 [2024-12-06 21:46:52.656926] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87106 ] 00:24:36.514 Using job config with 3 jobs 00:24:36.514 [2024-12-06 21:46:52.826903] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:36.514 [2024-12-06 21:46:52.992734] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:36.514 cpumask for '\''job0'\'' is too big 00:24:36.514 cpumask for '\''job1'\'' is too big 00:24:36.514 cpumask for '\''job2'\'' is too big 00:24:36.514 Running I/O for 2 seconds... 00:24:36.514 00:24:36.514 Latency(us) 00:24:36.514 [2024-12-06T21:46:57.011Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:36.514 [2024-12-06T21:46:57.011Z] Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:24:36.514 Malloc0 : 2.01 42581.59 41.58 0.00 0.00 6006.67 1400.09 9055.88 00:24:36.514 [2024-12-06T21:46:57.011Z] Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:24:36.514 Malloc0 : 2.01 42597.01 41.60 0.00 0.00 5995.16 1355.40 7923.90 00:24:36.514 [2024-12-06T21:46:57.011Z] Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:24:36.514 Malloc0 : 2.01 42569.46 41.57 0.00 0.00 5988.29 1392.64 7864.32 00:24:36.514 [2024-12-06T21:46:57.011Z] =================================================================================================================== 00:24:36.514 [2024-12-06T21:46:57.011Z] Total : 127748.05 124.75 0.00 0.00 5996.70 1355.40 9055.88' 00:24:36.514 21:46:56 -- bdevperf/test_config.sh@33 -- # get_num_jobs '[2024-12-06 21:46:52.656750] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:24:36.514 [2024-12-06 21:46:52.656926] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87106 ] 00:24:36.514 Using job config with 3 jobs 00:24:36.514 [2024-12-06 21:46:52.826903] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:36.514 [2024-12-06 21:46:52.992734] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:36.514 cpumask for '\''job0'\'' is too big 00:24:36.514 cpumask for '\''job1'\'' is too big 00:24:36.514 cpumask for '\''job2'\'' is too big 00:24:36.514 Running I/O for 2 seconds... 
00:24:36.514 00:24:36.514 Latency(us) 00:24:36.514 [2024-12-06T21:46:57.011Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:36.514 [2024-12-06T21:46:57.011Z] Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:24:36.514 Malloc0 : 2.01 42581.59 41.58 0.00 0.00 6006.67 1400.09 9055.88 00:24:36.514 [2024-12-06T21:46:57.011Z] Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:24:36.514 Malloc0 : 2.01 42597.01 41.60 0.00 0.00 5995.16 1355.40 7923.90 00:24:36.514 [2024-12-06T21:46:57.011Z] Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:24:36.514 Malloc0 : 2.01 42569.46 41.57 0.00 0.00 5988.29 1392.64 7864.32 00:24:36.514 [2024-12-06T21:46:57.011Z] =================================================================================================================== 00:24:36.514 [2024-12-06T21:46:57.011Z] Total : 127748.05 124.75 0.00 0.00 5996.70 1355.40 9055.88' 00:24:36.514 21:46:56 -- bdevperf/common.sh@32 -- # echo '[2024-12-06 21:46:52.656750] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:24:36.514 [2024-12-06 21:46:52.656926] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87106 ] 00:24:36.514 Using job config with 3 jobs 00:24:36.514 [2024-12-06 21:46:52.826903] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:36.514 [2024-12-06 21:46:52.992734] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:36.514 cpumask for '\''job0'\'' is too big 00:24:36.514 cpumask for '\''job1'\'' is too big 00:24:36.514 cpumask for '\''job2'\'' is too big 00:24:36.514 Running I/O for 2 seconds... 
00:24:36.514 00:24:36.514 Latency(us) 00:24:36.514 [2024-12-06T21:46:57.011Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:36.514 [2024-12-06T21:46:57.011Z] Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:24:36.514 Malloc0 : 2.01 42581.59 41.58 0.00 0.00 6006.67 1400.09 9055.88 00:24:36.514 [2024-12-06T21:46:57.011Z] Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:24:36.514 Malloc0 : 2.01 42597.01 41.60 0.00 0.00 5995.16 1355.40 7923.90 00:24:36.514 [2024-12-06T21:46:57.011Z] Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:24:36.514 Malloc0 : 2.01 42569.46 41.57 0.00 0.00 5988.29 1392.64 7864.32 00:24:36.514 [2024-12-06T21:46:57.011Z] =================================================================================================================== 00:24:36.514 [2024-12-06T21:46:57.011Z] Total : 127748.05 124.75 0.00 0.00 5996.70 1355.40 9055.88' 00:24:36.514 21:46:56 -- bdevperf/common.sh@32 -- # grep -oE '[0-9]+' 00:24:36.514 21:46:56 -- bdevperf/common.sh@32 -- # grep -oE 'Using job config with [0-9]+ jobs' 00:24:36.514 21:46:56 -- bdevperf/test_config.sh@33 -- # [[ 3 == \3 ]] 00:24:36.514 21:46:56 -- bdevperf/test_config.sh@35 -- # cleanup 00:24:36.514 21:46:56 -- bdevperf/common.sh@36 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf 00:24:36.514 21:46:56 -- bdevperf/test_config.sh@37 -- # create_job global rw Malloc0:Malloc1 00:24:36.515 21:46:56 -- bdevperf/common.sh@8 -- # local job_section=global 00:24:36.515 21:46:56 -- bdevperf/common.sh@9 -- # local rw=rw 00:24:36.515 21:46:56 -- bdevperf/common.sh@10 -- # local filename=Malloc0:Malloc1 00:24:36.515 21:46:56 -- bdevperf/common.sh@12 -- # [[ global == \g\l\o\b\a\l ]] 00:24:36.515 21:46:56 -- bdevperf/common.sh@13 -- # cat 00:24:36.515 00:24:36.515 21:46:56 -- bdevperf/common.sh@18 -- # job='[global]' 00:24:36.515 21:46:56 -- bdevperf/common.sh@19 -- # echo 00:24:36.515 21:46:56 -- bdevperf/common.sh@20 -- # cat 00:24:36.515 00:24:36.515 21:46:56 -- bdevperf/test_config.sh@38 -- # create_job job0 00:24:36.515 21:46:56 -- bdevperf/common.sh@8 -- # local job_section=job0 00:24:36.515 21:46:56 -- bdevperf/common.sh@9 -- # local rw= 00:24:36.515 21:46:56 -- bdevperf/common.sh@10 -- # local filename= 00:24:36.515 21:46:56 -- bdevperf/common.sh@12 -- # [[ job0 == \g\l\o\b\a\l ]] 00:24:36.515 21:46:56 -- bdevperf/common.sh@18 -- # job='[job0]' 00:24:36.515 21:46:56 -- bdevperf/common.sh@19 -- # echo 00:24:36.515 21:46:56 -- bdevperf/common.sh@20 -- # cat 00:24:36.515 00:24:36.515 21:46:56 -- bdevperf/test_config.sh@39 -- # create_job job1 00:24:36.515 21:46:56 -- bdevperf/common.sh@8 -- # local job_section=job1 00:24:36.515 21:46:56 -- bdevperf/common.sh@9 -- # local rw= 00:24:36.515 21:46:56 -- bdevperf/common.sh@10 -- # local filename= 00:24:36.515 21:46:56 -- bdevperf/common.sh@12 -- # [[ job1 == \g\l\o\b\a\l ]] 00:24:36.515 21:46:56 -- bdevperf/common.sh@18 -- # job='[job1]' 00:24:36.515 21:46:56 -- bdevperf/common.sh@19 -- # echo 00:24:36.515 21:46:56 -- bdevperf/common.sh@20 -- # cat 00:24:36.515 21:46:56 -- bdevperf/test_config.sh@40 -- # create_job job2 00:24:36.515 21:46:56 -- bdevperf/common.sh@8 -- # local job_section=job2 00:24:36.515 21:46:56 -- bdevperf/common.sh@9 -- # local rw= 00:24:36.515 21:46:56 -- bdevperf/common.sh@10 -- # local filename= 00:24:36.515 21:46:56 -- bdevperf/common.sh@12 -- # [[ job2 == \g\l\o\b\a\l ]] 00:24:36.515 21:46:56 -- bdevperf/common.sh@18 -- # job='[job2]' 
00:24:36.515 00:24:36.515 21:46:56 -- bdevperf/common.sh@19 -- # echo 00:24:36.515 21:46:56 -- bdevperf/common.sh@20 -- # cat 00:24:36.515 00:24:36.515 21:46:56 -- bdevperf/test_config.sh@41 -- # create_job job3 00:24:36.515 21:46:56 -- bdevperf/common.sh@8 -- # local job_section=job3 00:24:36.515 21:46:56 -- bdevperf/common.sh@9 -- # local rw= 00:24:36.515 21:46:56 -- bdevperf/common.sh@10 -- # local filename= 00:24:36.515 21:46:56 -- bdevperf/common.sh@12 -- # [[ job3 == \g\l\o\b\a\l ]] 00:24:36.515 21:46:56 -- bdevperf/common.sh@18 -- # job='[job3]' 00:24:36.515 21:46:56 -- bdevperf/common.sh@19 -- # echo 00:24:36.515 21:46:56 -- bdevperf/common.sh@20 -- # cat 00:24:36.515 21:46:56 -- bdevperf/test_config.sh@42 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -t 2 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/conf.json -j /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf 00:24:40.709 21:47:00 -- bdevperf/test_config.sh@42 -- # bdevperf_output='[2024-12-06 21:46:56.542176] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:24:40.709 [2024-12-06 21:46:56.542339] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87161 ] 00:24:40.709 Using job config with 4 jobs 00:24:40.709 [2024-12-06 21:46:56.709066] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:40.709 [2024-12-06 21:46:56.868602] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:40.709 cpumask for '\''job0'\'' is too big 00:24:40.709 cpumask for '\''job1'\'' is too big 00:24:40.709 cpumask for '\''job2'\'' is too big 00:24:40.709 cpumask for '\''job3'\'' is too big 00:24:40.709 Running I/O for 2 seconds... 
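Every variant in this suite is validated the same way: capture bdevperf's stdout, pull the advertised job count back out, and assert it matches the number of [jobN] sections just written to test.conf. Condensed from the test_config.sh/common.sh xtrace (a sketch; paths and the grep patterns are copied from this run, and the expected value 4 is what @43 asserts below):

  bdevperf=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf
  bdevperf_output=$($bdevperf -t 2 \
      --json /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/conf.json \
      -j /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf)
  # get_num_jobs: extract N from the "Using job config with N jobs" banner
  n=$(echo "$bdevperf_output" | grep -oE 'Using job config with [0-9]+ jobs' \
      | grep -oE '[0-9]+')
  [[ $n == 4 ]]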
00:24:40.709 00:24:40.709 Latency(us) 00:24:40.709 [2024-12-06T21:47:01.206Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:40.709 [2024-12-06T21:47:01.206Z] Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:24:40.709 Malloc0 : 2.02 15428.38 15.07 0.00 0.00 16588.11 3142.75 25856.93 00:24:40.709 [2024-12-06T21:47:01.206Z] Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:24:40.709 Malloc1 : 2.03 15417.66 15.06 0.00 0.00 16586.56 3753.43 25737.77 00:24:40.709 [2024-12-06T21:47:01.206Z] Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:24:40.709 Malloc0 : 2.03 15407.43 15.05 0.00 0.00 16549.00 2993.80 22639.71 00:24:40.709 [2024-12-06T21:47:01.206Z] Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:24:40.709 Malloc1 : 2.03 15396.77 15.04 0.00 0.00 16544.87 3708.74 22520.55 00:24:40.709 [2024-12-06T21:47:01.206Z] Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:24:40.709 Malloc0 : 2.03 15386.88 15.03 0.00 0.00 16510.10 3008.70 19660.80 00:24:40.709 [2024-12-06T21:47:01.206Z] Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:24:40.709 Malloc1 : 2.04 15455.24 15.09 0.00 0.00 16424.35 3634.27 19660.80 00:24:40.709 [2024-12-06T21:47:01.206Z] Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:24:40.709 Malloc0 : 2.04 15445.28 15.08 0.00 0.00 16390.04 3008.70 19541.64 00:24:40.709 [2024-12-06T21:47:01.206Z] Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:24:40.709 Malloc1 : 2.04 15434.69 15.07 0.00 0.00 16389.62 3485.32 19660.80 00:24:40.709 [2024-12-06T21:47:01.206Z] =================================================================================================================== 00:24:40.709 [2024-12-06T21:47:01.206Z] Total : 123372.32 120.48 0.00 0.00 16497.53 2993.80 25856.93' 00:24:40.709 21:47:00 -- bdevperf/test_config.sh@43 -- # get_num_jobs '[2024-12-06 21:46:56.542176] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:24:40.709 [2024-12-06 21:46:56.542339] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87161 ] 00:24:40.709 Using job config with 4 jobs 00:24:40.709 [2024-12-06 21:46:56.709066] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:40.709 [2024-12-06 21:46:56.868602] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:40.709 cpumask for '\''job0'\'' is too big 00:24:40.709 cpumask for '\''job1'\'' is too big 00:24:40.709 cpumask for '\''job2'\'' is too big 00:24:40.709 cpumask for '\''job3'\'' is too big 00:24:40.709 Running I/O for 2 seconds... 
00:24:40.709 00:24:40.709 Latency(us) 00:24:40.709 [2024-12-06T21:47:01.206Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:40.709 [2024-12-06T21:47:01.206Z] Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:24:40.709 Malloc0 : 2.02 15428.38 15.07 0.00 0.00 16588.11 3142.75 25856.93 00:24:40.709 [2024-12-06T21:47:01.206Z] Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:24:40.709 Malloc1 : 2.03 15417.66 15.06 0.00 0.00 16586.56 3753.43 25737.77 00:24:40.709 [2024-12-06T21:47:01.206Z] Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:24:40.709 Malloc0 : 2.03 15407.43 15.05 0.00 0.00 16549.00 2993.80 22639.71 00:24:40.709 [2024-12-06T21:47:01.206Z] Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:24:40.709 Malloc1 : 2.03 15396.77 15.04 0.00 0.00 16544.87 3708.74 22520.55 00:24:40.709 [2024-12-06T21:47:01.206Z] Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:24:40.709 Malloc0 : 2.03 15386.88 15.03 0.00 0.00 16510.10 3008.70 19660.80 00:24:40.709 [2024-12-06T21:47:01.206Z] Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:24:40.709 Malloc1 : 2.04 15455.24 15.09 0.00 0.00 16424.35 3634.27 19660.80 00:24:40.709 [2024-12-06T21:47:01.206Z] Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:24:40.709 Malloc0 : 2.04 15445.28 15.08 0.00 0.00 16390.04 3008.70 19541.64 00:24:40.709 [2024-12-06T21:47:01.206Z] Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:24:40.709 Malloc1 : 2.04 15434.69 15.07 0.00 0.00 16389.62 3485.32 19660.80 00:24:40.709 [2024-12-06T21:47:01.206Z] =================================================================================================================== 00:24:40.709 [2024-12-06T21:47:01.206Z] Total : 123372.32 120.48 0.00 0.00 16497.53 2993.80 25856.93' 00:24:40.709 21:47:00 -- bdevperf/common.sh@32 -- # echo '[2024-12-06 21:46:56.542176] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:24:40.709 [2024-12-06 21:46:56.542339] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87161 ] 00:24:40.709 Using job config with 4 jobs 00:24:40.709 [2024-12-06 21:46:56.709066] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:40.709 [2024-12-06 21:46:56.868602] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:40.709 cpumask for '\''job0'\'' is too big 00:24:40.709 cpumask for '\''job1'\'' is too big 00:24:40.709 cpumask for '\''job2'\'' is too big 00:24:40.709 cpumask for '\''job3'\'' is too big 00:24:40.709 Running I/O for 2 seconds... 
00:24:40.709 00:24:40.709 Latency(us) 00:24:40.709 [2024-12-06T21:47:01.206Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:40.709 [2024-12-06T21:47:01.206Z] Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:24:40.709 Malloc0 : 2.02 15428.38 15.07 0.00 0.00 16588.11 3142.75 25856.93 00:24:40.709 [2024-12-06T21:47:01.206Z] Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:24:40.709 Malloc1 : 2.03 15417.66 15.06 0.00 0.00 16586.56 3753.43 25737.77 00:24:40.709 [2024-12-06T21:47:01.206Z] Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:24:40.709 Malloc0 : 2.03 15407.43 15.05 0.00 0.00 16549.00 2993.80 22639.71 00:24:40.709 [2024-12-06T21:47:01.206Z] Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:24:40.709 Malloc1 : 2.03 15396.77 15.04 0.00 0.00 16544.87 3708.74 22520.55 00:24:40.709 [2024-12-06T21:47:01.206Z] Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:24:40.709 Malloc0 : 2.03 15386.88 15.03 0.00 0.00 16510.10 3008.70 19660.80 00:24:40.709 [2024-12-06T21:47:01.206Z] Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:24:40.709 Malloc1 : 2.04 15455.24 15.09 0.00 0.00 16424.35 3634.27 19660.80 00:24:40.709 [2024-12-06T21:47:01.206Z] Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:24:40.709 Malloc0 : 2.04 15445.28 15.08 0.00 0.00 16390.04 3008.70 19541.64 00:24:40.709 [2024-12-06T21:47:01.206Z] Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:24:40.709 Malloc1 : 2.04 15434.69 15.07 0.00 0.00 16389.62 3485.32 19660.80 00:24:40.709 [2024-12-06T21:47:01.206Z] =================================================================================================================== 00:24:40.709 [2024-12-06T21:47:01.206Z] Total : 123372.32 120.48 0.00 0.00 16497.53 2993.80 25856.93' 00:24:40.709 21:47:00 -- bdevperf/common.sh@32 -- # grep -oE 'Using job config with [0-9]+ jobs' 00:24:40.709 21:47:00 -- bdevperf/common.sh@32 -- # grep -oE '[0-9]+' 00:24:40.709 21:47:00 -- bdevperf/test_config.sh@43 -- # [[ 4 == \4 ]] 00:24:40.709 21:47:00 -- bdevperf/test_config.sh@44 -- # cleanup 00:24:40.709 21:47:00 -- bdevperf/common.sh@36 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf 00:24:40.709 21:47:00 -- bdevperf/test_config.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:24:40.709 00:24:40.709 real 0m15.666s 00:24:40.709 user 0m14.132s 00:24:40.709 sys 0m1.037s 00:24:40.710 ************************************ 00:24:40.710 END TEST bdevperf_config 00:24:40.710 ************************************ 00:24:40.710 21:47:00 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:24:40.710 21:47:00 -- common/autotest_common.sh@10 -- # set +x 00:24:40.710 21:47:00 -- spdk/autotest.sh@185 -- # uname -s 00:24:40.710 21:47:00 -- spdk/autotest.sh@185 -- # [[ Linux == Linux ]] 00:24:40.710 21:47:00 -- spdk/autotest.sh@186 -- # run_test reactor_set_interrupt /home/vagrant/spdk_repo/spdk/test/interrupt/reactor_set_interrupt.sh 00:24:40.710 21:47:00 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:24:40.710 21:47:00 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:24:40.710 21:47:00 -- common/autotest_common.sh@10 -- # set +x 00:24:40.710 ************************************ 00:24:40.710 START TEST reactor_set_interrupt 00:24:40.710 
************************************ 00:24:40.710 21:47:00 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/interrupt/reactor_set_interrupt.sh 00:24:40.710 * Looking for test storage... 00:24:40.710 * Found test storage at /home/vagrant/spdk_repo/spdk/test/interrupt 00:24:40.710 21:47:00 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:24:40.710 21:47:00 -- common/autotest_common.sh@1690 -- # lcov --version 00:24:40.710 21:47:00 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:24:40.710 21:47:00 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:24:40.710 21:47:00 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:24:40.710 21:47:00 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:24:40.710 21:47:00 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:24:40.710 21:47:00 -- scripts/common.sh@335 -- # IFS=.-: 00:24:40.710 21:47:00 -- scripts/common.sh@335 -- # read -ra ver1 00:24:40.710 21:47:00 -- scripts/common.sh@336 -- # IFS=.-: 00:24:40.710 21:47:00 -- scripts/common.sh@336 -- # read -ra ver2 00:24:40.710 21:47:00 -- scripts/common.sh@337 -- # local 'op=<' 00:24:40.710 21:47:00 -- scripts/common.sh@339 -- # ver1_l=2 00:24:40.710 21:47:00 -- scripts/common.sh@340 -- # ver2_l=1 00:24:40.710 21:47:00 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:24:40.710 21:47:00 -- scripts/common.sh@343 -- # case "$op" in 00:24:40.710 21:47:00 -- scripts/common.sh@344 -- # : 1 00:24:40.710 21:47:00 -- scripts/common.sh@363 -- # (( v = 0 )) 00:24:40.710 21:47:00 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:24:40.710 21:47:00 -- scripts/common.sh@364 -- # decimal 1 00:24:40.710 21:47:00 -- scripts/common.sh@352 -- # local d=1 00:24:40.710 21:47:00 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:40.710 21:47:00 -- scripts/common.sh@354 -- # echo 1 00:24:40.710 21:47:00 -- scripts/common.sh@364 -- # ver1[v]=1 00:24:40.710 21:47:00 -- scripts/common.sh@365 -- # decimal 2 00:24:40.710 21:47:00 -- scripts/common.sh@352 -- # local d=2 00:24:40.710 21:47:00 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:40.710 21:47:00 -- scripts/common.sh@354 -- # echo 2 00:24:40.710 21:47:00 -- scripts/common.sh@365 -- # ver2[v]=2 00:24:40.710 21:47:00 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:24:40.710 21:47:00 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:24:40.710 21:47:00 -- scripts/common.sh@367 -- # return 0 00:24:40.710 21:47:00 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:40.710 21:47:00 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:24:40.710 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:40.710 --rc genhtml_branch_coverage=1 00:24:40.710 --rc genhtml_function_coverage=1 00:24:40.710 --rc genhtml_legend=1 00:24:40.710 --rc geninfo_all_blocks=1 00:24:40.710 --rc geninfo_unexecuted_blocks=1 00:24:40.710 00:24:40.710 ' 00:24:40.710 21:47:00 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:24:40.710 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:40.710 --rc genhtml_branch_coverage=1 00:24:40.710 --rc genhtml_function_coverage=1 00:24:40.710 --rc genhtml_legend=1 00:24:40.710 --rc geninfo_all_blocks=1 00:24:40.710 --rc geninfo_unexecuted_blocks=1 00:24:40.710 00:24:40.710 ' 00:24:40.710 21:47:00 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:24:40.710 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:40.710 --rc genhtml_branch_coverage=1 
00:24:40.710 --rc genhtml_function_coverage=1 00:24:40.710 --rc genhtml_legend=1 00:24:40.710 --rc geninfo_all_blocks=1 00:24:40.710 --rc geninfo_unexecuted_blocks=1 00:24:40.710 00:24:40.710 ' 00:24:40.710 21:47:00 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:24:40.710 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:40.710 --rc genhtml_branch_coverage=1 00:24:40.710 --rc genhtml_function_coverage=1 00:24:40.710 --rc genhtml_legend=1 00:24:40.710 --rc geninfo_all_blocks=1 00:24:40.710 --rc geninfo_unexecuted_blocks=1 00:24:40.710 00:24:40.710 ' 00:24:40.710 21:47:00 -- interrupt/reactor_set_interrupt.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/interrupt/interrupt_common.sh 00:24:40.710 21:47:00 -- interrupt/interrupt_common.sh@5 -- # dirname /home/vagrant/spdk_repo/spdk/test/interrupt/reactor_set_interrupt.sh 00:24:40.710 21:47:00 -- interrupt/interrupt_common.sh@5 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/interrupt 00:24:40.710 21:47:00 -- interrupt/interrupt_common.sh@5 -- # testdir=/home/vagrant/spdk_repo/spdk/test/interrupt 00:24:40.710 21:47:00 -- interrupt/interrupt_common.sh@6 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/interrupt/../.. 00:24:40.710 21:47:00 -- interrupt/interrupt_common.sh@6 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:24:40.710 21:47:00 -- interrupt/interrupt_common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh 00:24:40.710 21:47:00 -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:24:40.710 21:47:00 -- common/autotest_common.sh@34 -- # set -e 00:24:40.710 21:47:00 -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:24:40.710 21:47:00 -- common/autotest_common.sh@36 -- # shopt -s extglob 00:24:40.710 21:47:00 -- common/autotest_common.sh@38 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]] 00:24:40.710 21:47:00 -- common/autotest_common.sh@39 -- # source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh 00:24:40.710 21:47:00 -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:24:40.710 21:47:00 -- common/build_config.sh@2 -- # CONFIG_ASAN=y 00:24:40.710 21:47:00 -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:24:40.710 21:47:00 -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:24:40.710 21:47:00 -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:24:40.710 21:47:00 -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:24:40.710 21:47:00 -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:24:40.710 21:47:00 -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:24:40.710 21:47:00 -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:24:40.710 21:47:00 -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:24:40.710 21:47:00 -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:24:40.710 21:47:00 -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:24:40.710 21:47:00 -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:24:40.710 21:47:00 -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:24:40.710 21:47:00 -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:24:40.710 21:47:00 -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:24:40.710 21:47:00 -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:24:40.710 21:47:00 -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:24:40.710 21:47:00 -- common/build_config.sh@19 -- # CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:24:40.710 21:47:00 -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:24:40.710 21:47:00 -- common/build_config.sh@21 -- 
# CONFIG_ISCSI_INITIATOR=y 00:24:40.710 21:47:00 -- common/build_config.sh@22 -- # CONFIG_CET=n 00:24:40.710 21:47:00 -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:24:40.710 21:47:00 -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:24:40.710 21:47:00 -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:24:40.710 21:47:00 -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:24:40.710 21:47:00 -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:24:40.710 21:47:00 -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:24:40.710 21:47:00 -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:24:40.710 21:47:00 -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:24:40.710 21:47:00 -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:24:40.710 21:47:00 -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:24:40.710 21:47:00 -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:24:40.710 21:47:00 -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:24:40.710 21:47:00 -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:24:40.710 21:47:00 -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build 00:24:40.710 21:47:00 -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:24:40.710 21:47:00 -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:24:40.710 21:47:00 -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:24:40.710 21:47:00 -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:24:40.710 21:47:00 -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR= 00:24:40.710 21:47:00 -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:24:40.710 21:47:00 -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=y 00:24:40.710 21:47:00 -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:24:40.710 21:47:00 -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:24:40.710 21:47:00 -- common/build_config.sh@46 -- # CONFIG_COVERAGE=y 00:24:40.710 21:47:00 -- common/build_config.sh@47 -- # CONFIG_RDMA=y 00:24:40.710 21:47:00 -- common/build_config.sh@48 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:24:40.710 21:47:00 -- common/build_config.sh@49 -- # CONFIG_URING_PATH= 00:24:40.710 21:47:00 -- common/build_config.sh@50 -- # CONFIG_XNVME=n 00:24:40.710 21:47:00 -- common/build_config.sh@51 -- # CONFIG_VFIO_USER=n 00:24:40.710 21:47:00 -- common/build_config.sh@52 -- # CONFIG_ARCH=native 00:24:40.710 21:47:00 -- common/build_config.sh@53 -- # CONFIG_URING_ZNS=n 00:24:40.710 21:47:00 -- common/build_config.sh@54 -- # CONFIG_WERROR=y 00:24:40.710 21:47:00 -- common/build_config.sh@55 -- # CONFIG_HAVE_LIBBSD=n 00:24:40.710 21:47:00 -- common/build_config.sh@56 -- # CONFIG_UBSAN=y 00:24:40.710 21:47:00 -- common/build_config.sh@57 -- # CONFIG_IPSEC_MB_DIR= 00:24:40.710 21:47:00 -- common/build_config.sh@58 -- # CONFIG_GOLANG=n 00:24:40.710 21:47:00 -- common/build_config.sh@59 -- # CONFIG_ISAL=y 00:24:40.710 21:47:00 -- common/build_config.sh@60 -- # CONFIG_IDXD_KERNEL=y 00:24:40.710 21:47:00 -- common/build_config.sh@61 -- # CONFIG_DPDK_LIB_DIR= 00:24:40.710 21:47:00 -- common/build_config.sh@62 -- # CONFIG_RDMA_PROV=verbs 00:24:40.711 21:47:00 -- common/build_config.sh@63 -- # CONFIG_APPS=y 00:24:40.711 21:47:00 -- common/build_config.sh@64 -- # CONFIG_SHARED=n 00:24:40.711 21:47:00 -- common/build_config.sh@65 -- # CONFIG_FC_PATH= 00:24:40.711 21:47:00 -- common/build_config.sh@66 -- # CONFIG_DPDK_PKG_CONFIG=n 00:24:40.711 21:47:00 -- common/build_config.sh@67 -- # CONFIG_FC=n 00:24:40.711 21:47:00 -- common/build_config.sh@68 -- 
# CONFIG_AVAHI=n 00:24:40.711 21:47:00 -- common/build_config.sh@69 -- # CONFIG_FIO_PLUGIN=y 00:24:40.711 21:47:00 -- common/build_config.sh@70 -- # CONFIG_RAID5F=y 00:24:40.711 21:47:00 -- common/build_config.sh@71 -- # CONFIG_EXAMPLES=y 00:24:40.711 21:47:00 -- common/build_config.sh@72 -- # CONFIG_TESTS=y 00:24:40.711 21:47:00 -- common/build_config.sh@73 -- # CONFIG_CRYPTO_MLX5=n 00:24:40.711 21:47:00 -- common/build_config.sh@74 -- # CONFIG_MAX_LCORES= 00:24:40.711 21:47:00 -- common/build_config.sh@75 -- # CONFIG_IPSEC_MB=n 00:24:40.711 21:47:00 -- common/build_config.sh@76 -- # CONFIG_DEBUG=y 00:24:40.711 21:47:00 -- common/build_config.sh@77 -- # CONFIG_DPDK_COMPRESSDEV=n 00:24:40.711 21:47:00 -- common/build_config.sh@78 -- # CONFIG_CROSS_PREFIX= 00:24:40.711 21:47:00 -- common/build_config.sh@79 -- # CONFIG_URING=n 00:24:40.711 21:47:00 -- common/autotest_common.sh@48 -- # source /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:24:40.711 21:47:00 -- common/applications.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:24:40.711 21:47:00 -- common/applications.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common 00:24:40.711 21:47:00 -- common/applications.sh@8 -- # _root=/home/vagrant/spdk_repo/spdk/test/common 00:24:40.711 21:47:00 -- common/applications.sh@9 -- # _root=/home/vagrant/spdk_repo/spdk 00:24:40.711 21:47:00 -- common/applications.sh@10 -- # _app_dir=/home/vagrant/spdk_repo/spdk/build/bin 00:24:40.711 21:47:00 -- common/applications.sh@11 -- # _test_app_dir=/home/vagrant/spdk_repo/spdk/test/app 00:24:40.711 21:47:00 -- common/applications.sh@12 -- # _examples_dir=/home/vagrant/spdk_repo/spdk/build/examples 00:24:40.711 21:47:00 -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:24:40.711 21:47:00 -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:24:40.711 21:47:00 -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:24:40.711 21:47:00 -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:24:40.711 21:47:00 -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:24:40.711 21:47:00 -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:24:40.711 21:47:00 -- common/applications.sh@22 -- # [[ -e /home/vagrant/spdk_repo/spdk/include/spdk/config.h ]] 00:24:40.711 21:47:00 -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:24:40.711 #define SPDK_CONFIG_H 00:24:40.711 #define SPDK_CONFIG_APPS 1 00:24:40.711 #define SPDK_CONFIG_ARCH native 00:24:40.711 #define SPDK_CONFIG_ASAN 1 00:24:40.711 #undef SPDK_CONFIG_AVAHI 00:24:40.711 #undef SPDK_CONFIG_CET 00:24:40.711 #define SPDK_CONFIG_COVERAGE 1 00:24:40.711 #define SPDK_CONFIG_CROSS_PREFIX 00:24:40.711 #undef SPDK_CONFIG_CRYPTO 00:24:40.711 #undef SPDK_CONFIG_CRYPTO_MLX5 00:24:40.711 #undef SPDK_CONFIG_CUSTOMOCF 00:24:40.711 #undef SPDK_CONFIG_DAOS 00:24:40.711 #define SPDK_CONFIG_DAOS_DIR 00:24:40.711 #define SPDK_CONFIG_DEBUG 1 00:24:40.711 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:24:40.711 #define SPDK_CONFIG_DPDK_DIR /home/vagrant/spdk_repo/spdk/dpdk/build 00:24:40.711 #define SPDK_CONFIG_DPDK_INC_DIR 00:24:40.711 #define SPDK_CONFIG_DPDK_LIB_DIR 00:24:40.711 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:24:40.711 #define SPDK_CONFIG_ENV /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:24:40.711 #define SPDK_CONFIG_EXAMPLES 1 00:24:40.711 #undef SPDK_CONFIG_FC 00:24:40.711 #define SPDK_CONFIG_FC_PATH 00:24:40.711 #define SPDK_CONFIG_FIO_PLUGIN 1 00:24:40.711 
#define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:24:40.711 #undef SPDK_CONFIG_FUSE 00:24:40.711 #undef SPDK_CONFIG_FUZZER 00:24:40.711 #define SPDK_CONFIG_FUZZER_LIB 00:24:40.711 #undef SPDK_CONFIG_GOLANG 00:24:40.711 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:24:40.711 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:24:40.711 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:24:40.711 #undef SPDK_CONFIG_HAVE_LIBBSD 00:24:40.711 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:24:40.711 #define SPDK_CONFIG_IDXD 1 00:24:40.711 #define SPDK_CONFIG_IDXD_KERNEL 1 00:24:40.711 #undef SPDK_CONFIG_IPSEC_MB 00:24:40.711 #define SPDK_CONFIG_IPSEC_MB_DIR 00:24:40.711 #define SPDK_CONFIG_ISAL 1 00:24:40.711 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:24:40.711 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:24:40.711 #define SPDK_CONFIG_LIBDIR 00:24:40.711 #undef SPDK_CONFIG_LTO 00:24:40.711 #define SPDK_CONFIG_MAX_LCORES 00:24:40.711 #define SPDK_CONFIG_NVME_CUSE 1 00:24:40.711 #undef SPDK_CONFIG_OCF 00:24:40.711 #define SPDK_CONFIG_OCF_PATH 00:24:40.711 #define SPDK_CONFIG_OPENSSL_PATH 00:24:40.711 #undef SPDK_CONFIG_PGO_CAPTURE 00:24:40.711 #undef SPDK_CONFIG_PGO_USE 00:24:40.711 #define SPDK_CONFIG_PREFIX /usr/local 00:24:40.711 #define SPDK_CONFIG_RAID5F 1 00:24:40.711 #undef SPDK_CONFIG_RBD 00:24:40.711 #define SPDK_CONFIG_RDMA 1 00:24:40.711 #define SPDK_CONFIG_RDMA_PROV verbs 00:24:40.711 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:24:40.711 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:24:40.711 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:24:40.711 #undef SPDK_CONFIG_SHARED 00:24:40.711 #undef SPDK_CONFIG_SMA 00:24:40.711 #define SPDK_CONFIG_TESTS 1 00:24:40.711 #undef SPDK_CONFIG_TSAN 00:24:40.711 #define SPDK_CONFIG_UBLK 1 00:24:40.711 #define SPDK_CONFIG_UBSAN 1 00:24:40.711 #define SPDK_CONFIG_UNIT_TESTS 1 00:24:40.711 #undef SPDK_CONFIG_URING 00:24:40.711 #define SPDK_CONFIG_URING_PATH 00:24:40.711 #undef SPDK_CONFIG_URING_ZNS 00:24:40.711 #undef SPDK_CONFIG_USDT 00:24:40.711 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:24:40.711 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:24:40.711 #undef SPDK_CONFIG_VFIO_USER 00:24:40.711 #define SPDK_CONFIG_VFIO_USER_DIR 00:24:40.711 #define SPDK_CONFIG_VHOST 1 00:24:40.711 #define SPDK_CONFIG_VIRTIO 1 00:24:40.711 #undef SPDK_CONFIG_VTUNE 00:24:40.711 #define SPDK_CONFIG_VTUNE_DIR 00:24:40.711 #define SPDK_CONFIG_WERROR 1 00:24:40.711 #define SPDK_CONFIG_WPDK_DIR 00:24:40.711 #undef SPDK_CONFIG_XNVME 00:24:40.711 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:24:40.711 21:47:00 -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:24:40.711 21:47:00 -- common/autotest_common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:24:40.711 21:47:00 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:40.711 21:47:00 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:40.711 21:47:00 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:40.711 21:47:00 -- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:24:40.711 21:47:00 -- paths/export.sh@3 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:24:40.711 21:47:00 -- paths/export.sh@4 -- # PATH=/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:24:40.711 21:47:00 -- paths/export.sh@5 -- # PATH=/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:24:40.711 21:47:00 -- paths/export.sh@6 -- # export PATH 00:24:40.711 21:47:00 -- paths/export.sh@7 -- # echo /opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:24:40.711 21:47:00 -- common/autotest_common.sh@50 -- # source /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:24:40.711 21:47:00 -- pm/common@6 -- # dirname /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:24:40.711 21:47:00 -- pm/common@6 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:24:40.711 21:47:00 -- pm/common@6 -- # _pmdir=/home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:24:40.711 21:47:00 -- pm/common@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm/../../../ 00:24:40.711 21:47:00 -- pm/common@7 -- # _pmrootdir=/home/vagrant/spdk_repo/spdk 00:24:40.711 21:47:00 -- pm/common@16 -- # TEST_TAG=N/A 00:24:40.711 21:47:00 -- pm/common@17 -- # TEST_TAG_FILE=/home/vagrant/spdk_repo/spdk/.run_test_name 00:24:40.711 21:47:00 -- common/autotest_common.sh@52 -- # : 1 00:24:40.711 21:47:00 -- common/autotest_common.sh@53 -- # export RUN_NIGHTLY 00:24:40.711 21:47:00 -- common/autotest_common.sh@56 -- # : 0 00:24:40.711 21:47:00 -- common/autotest_common.sh@57 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:24:40.711 21:47:00 -- common/autotest_common.sh@58 -- # : 0 00:24:40.711 21:47:00 -- common/autotest_common.sh@59 -- # export SPDK_RUN_VALGRIND 00:24:40.711 21:47:00 -- common/autotest_common.sh@60 -- # : 1 00:24:40.711 21:47:00 -- common/autotest_common.sh@61 -- # export 
SPDK_RUN_FUNCTIONAL_TEST 00:24:40.711 21:47:00 -- common/autotest_common.sh@62 -- # : 1 00:24:40.711 21:47:00 -- common/autotest_common.sh@63 -- # export SPDK_TEST_UNITTEST 00:24:40.711 21:47:00 -- common/autotest_common.sh@64 -- # : 00:24:40.711 21:47:00 -- common/autotest_common.sh@65 -- # export SPDK_TEST_AUTOBUILD 00:24:40.711 21:47:00 -- common/autotest_common.sh@66 -- # : 0 00:24:40.711 21:47:00 -- common/autotest_common.sh@67 -- # export SPDK_TEST_RELEASE_BUILD 00:24:40.711 21:47:00 -- common/autotest_common.sh@68 -- # : 0 00:24:40.711 21:47:00 -- common/autotest_common.sh@69 -- # export SPDK_TEST_ISAL 00:24:40.711 21:47:00 -- common/autotest_common.sh@70 -- # : 0 00:24:40.711 21:47:00 -- common/autotest_common.sh@71 -- # export SPDK_TEST_ISCSI 00:24:40.712 21:47:00 -- common/autotest_common.sh@72 -- # : 0 00:24:40.712 21:47:00 -- common/autotest_common.sh@73 -- # export SPDK_TEST_ISCSI_INITIATOR 00:24:40.712 21:47:00 -- common/autotest_common.sh@74 -- # : 1 00:24:40.712 21:47:00 -- common/autotest_common.sh@75 -- # export SPDK_TEST_NVME 00:24:40.712 21:47:00 -- common/autotest_common.sh@76 -- # : 0 00:24:40.712 21:47:00 -- common/autotest_common.sh@77 -- # export SPDK_TEST_NVME_PMR 00:24:40.712 21:47:00 -- common/autotest_common.sh@78 -- # : 0 00:24:40.712 21:47:00 -- common/autotest_common.sh@79 -- # export SPDK_TEST_NVME_BP 00:24:40.712 21:47:00 -- common/autotest_common.sh@80 -- # : 0 00:24:40.712 21:47:00 -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME_CLI 00:24:40.712 21:47:00 -- common/autotest_common.sh@82 -- # : 0 00:24:40.712 21:47:00 -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_CUSE 00:24:40.712 21:47:00 -- common/autotest_common.sh@84 -- # : 0 00:24:40.712 21:47:00 -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_FDP 00:24:40.712 21:47:00 -- common/autotest_common.sh@86 -- # : 0 00:24:40.712 21:47:00 -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVMF 00:24:40.712 21:47:00 -- common/autotest_common.sh@88 -- # : 0 00:24:40.712 21:47:00 -- common/autotest_common.sh@89 -- # export SPDK_TEST_VFIOUSER 00:24:40.712 21:47:00 -- common/autotest_common.sh@90 -- # : 0 00:24:40.712 21:47:00 -- common/autotest_common.sh@91 -- # export SPDK_TEST_VFIOUSER_QEMU 00:24:40.712 21:47:00 -- common/autotest_common.sh@92 -- # : 0 00:24:40.712 21:47:00 -- common/autotest_common.sh@93 -- # export SPDK_TEST_FUZZER 00:24:40.712 21:47:00 -- common/autotest_common.sh@94 -- # : 0 00:24:40.712 21:47:00 -- common/autotest_common.sh@95 -- # export SPDK_TEST_FUZZER_SHORT 00:24:40.712 21:47:00 -- common/autotest_common.sh@96 -- # : rdma 00:24:40.712 21:47:00 -- common/autotest_common.sh@97 -- # export SPDK_TEST_NVMF_TRANSPORT 00:24:40.712 21:47:00 -- common/autotest_common.sh@98 -- # : 0 00:24:40.712 21:47:00 -- common/autotest_common.sh@99 -- # export SPDK_TEST_RBD 00:24:40.712 21:47:00 -- common/autotest_common.sh@100 -- # : 0 00:24:40.712 21:47:00 -- common/autotest_common.sh@101 -- # export SPDK_TEST_VHOST 00:24:40.712 21:47:00 -- common/autotest_common.sh@102 -- # : 1 00:24:40.712 21:47:00 -- common/autotest_common.sh@103 -- # export SPDK_TEST_BLOCKDEV 00:24:40.712 21:47:00 -- common/autotest_common.sh@104 -- # : 0 00:24:40.712 21:47:00 -- common/autotest_common.sh@105 -- # export SPDK_TEST_IOAT 00:24:40.712 21:47:00 -- common/autotest_common.sh@106 -- # : 0 00:24:40.712 21:47:00 -- common/autotest_common.sh@107 -- # export SPDK_TEST_BLOBFS 00:24:40.712 21:47:00 -- common/autotest_common.sh@108 -- # : 0 00:24:40.712 21:47:00 -- common/autotest_common.sh@109 
-- # export SPDK_TEST_VHOST_INIT 00:24:40.712 21:47:00 -- common/autotest_common.sh@110 -- # : 0 00:24:40.712 21:47:00 -- common/autotest_common.sh@111 -- # export SPDK_TEST_LVOL 00:24:40.712 21:47:00 -- common/autotest_common.sh@112 -- # : 0 00:24:40.712 21:47:00 -- common/autotest_common.sh@113 -- # export SPDK_TEST_VBDEV_COMPRESS 00:24:40.712 21:47:00 -- common/autotest_common.sh@114 -- # : 1 00:24:40.712 21:47:00 -- common/autotest_common.sh@115 -- # export SPDK_RUN_ASAN 00:24:40.712 21:47:00 -- common/autotest_common.sh@116 -- # : 1 00:24:40.712 21:47:00 -- common/autotest_common.sh@117 -- # export SPDK_RUN_UBSAN 00:24:40.712 21:47:00 -- common/autotest_common.sh@118 -- # : 00:24:40.712 21:47:00 -- common/autotest_common.sh@119 -- # export SPDK_RUN_EXTERNAL_DPDK 00:24:40.712 21:47:00 -- common/autotest_common.sh@120 -- # : 0 00:24:40.712 21:47:00 -- common/autotest_common.sh@121 -- # export SPDK_RUN_NON_ROOT 00:24:40.712 21:47:00 -- common/autotest_common.sh@122 -- # : 0 00:24:40.712 21:47:00 -- common/autotest_common.sh@123 -- # export SPDK_TEST_CRYPTO 00:24:40.712 21:47:00 -- common/autotest_common.sh@124 -- # : 0 00:24:40.712 21:47:00 -- common/autotest_common.sh@125 -- # export SPDK_TEST_FTL 00:24:40.712 21:47:00 -- common/autotest_common.sh@126 -- # : 0 00:24:40.712 21:47:00 -- common/autotest_common.sh@127 -- # export SPDK_TEST_OCF 00:24:40.712 21:47:00 -- common/autotest_common.sh@128 -- # : 0 00:24:40.712 21:47:00 -- common/autotest_common.sh@129 -- # export SPDK_TEST_VMD 00:24:40.712 21:47:00 -- common/autotest_common.sh@130 -- # : 0 00:24:40.712 21:47:00 -- common/autotest_common.sh@131 -- # export SPDK_TEST_OPAL 00:24:40.712 21:47:00 -- common/autotest_common.sh@132 -- # : 00:24:40.712 21:47:00 -- common/autotest_common.sh@133 -- # export SPDK_TEST_NATIVE_DPDK 00:24:40.712 21:47:00 -- common/autotest_common.sh@134 -- # : true 00:24:40.712 21:47:00 -- common/autotest_common.sh@135 -- # export SPDK_AUTOTEST_X 00:24:40.712 21:47:00 -- common/autotest_common.sh@136 -- # : 1 00:24:40.712 21:47:00 -- common/autotest_common.sh@137 -- # export SPDK_TEST_RAID5 00:24:40.712 21:47:00 -- common/autotest_common.sh@138 -- # : 0 00:24:40.712 21:47:00 -- common/autotest_common.sh@139 -- # export SPDK_TEST_URING 00:24:40.712 21:47:00 -- common/autotest_common.sh@140 -- # : 0 00:24:40.712 21:47:00 -- common/autotest_common.sh@141 -- # export SPDK_TEST_USDT 00:24:40.712 21:47:00 -- common/autotest_common.sh@142 -- # : 0 00:24:40.712 21:47:00 -- common/autotest_common.sh@143 -- # export SPDK_TEST_USE_IGB_UIO 00:24:40.712 21:47:00 -- common/autotest_common.sh@144 -- # : 0 00:24:40.712 21:47:00 -- common/autotest_common.sh@145 -- # export SPDK_TEST_SCHEDULER 00:24:40.712 21:47:00 -- common/autotest_common.sh@146 -- # : 0 00:24:40.712 21:47:00 -- common/autotest_common.sh@147 -- # export SPDK_TEST_SCANBUILD 00:24:40.712 21:47:00 -- common/autotest_common.sh@148 -- # : 00:24:40.712 21:47:00 -- common/autotest_common.sh@149 -- # export SPDK_TEST_NVMF_NICS 00:24:40.712 21:47:00 -- common/autotest_common.sh@150 -- # : 0 00:24:40.712 21:47:00 -- common/autotest_common.sh@151 -- # export SPDK_TEST_SMA 00:24:40.712 21:47:00 -- common/autotest_common.sh@152 -- # : 0 00:24:40.712 21:47:00 -- common/autotest_common.sh@153 -- # export SPDK_TEST_DAOS 00:24:40.712 21:47:00 -- common/autotest_common.sh@154 -- # : 0 00:24:40.712 21:47:00 -- common/autotest_common.sh@155 -- # export SPDK_TEST_XNVME 00:24:40.712 21:47:00 -- common/autotest_common.sh@156 -- # : 0 00:24:40.712 21:47:00 -- 
common/autotest_common.sh@157 -- # export SPDK_TEST_ACCEL_DSA 00:24:40.712 21:47:00 -- common/autotest_common.sh@158 -- # : 0 00:24:40.712 21:47:00 -- common/autotest_common.sh@159 -- # export SPDK_TEST_ACCEL_IAA 00:24:40.712 21:47:00 -- common/autotest_common.sh@160 -- # : 0 00:24:40.712 21:47:00 -- common/autotest_common.sh@161 -- # export SPDK_TEST_ACCEL_IOAT 00:24:40.712 21:47:00 -- common/autotest_common.sh@163 -- # : 00:24:40.712 21:47:00 -- common/autotest_common.sh@164 -- # export SPDK_TEST_FUZZER_TARGET 00:24:40.712 21:47:00 -- common/autotest_common.sh@165 -- # : 0 00:24:40.712 21:47:00 -- common/autotest_common.sh@166 -- # export SPDK_TEST_NVMF_MDNS 00:24:40.712 21:47:00 -- common/autotest_common.sh@167 -- # : 0 00:24:40.712 21:47:00 -- common/autotest_common.sh@168 -- # export SPDK_JSONRPC_GO_CLIENT 00:24:40.712 21:47:00 -- common/autotest_common.sh@171 -- # export SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:24:40.712 21:47:00 -- common/autotest_common.sh@171 -- # SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:24:40.712 21:47:00 -- common/autotest_common.sh@172 -- # export DPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build/lib 00:24:40.712 21:47:00 -- common/autotest_common.sh@172 -- # DPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build/lib 00:24:40.712 21:47:00 -- common/autotest_common.sh@173 -- # export VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:24:40.712 21:47:00 -- common/autotest_common.sh@173 -- # VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:24:40.712 21:47:00 -- common/autotest_common.sh@174 -- # export LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:24:40.712 21:47:00 -- common/autotest_common.sh@174 -- # LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:24:40.712 21:47:00 -- common/autotest_common.sh@177 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:24:40.712 21:47:00 -- common/autotest_common.sh@177 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:24:40.712 21:47:00 -- common/autotest_common.sh@181 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:24:40.712 21:47:00 -- common/autotest_common.sh@181 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:24:40.712 21:47:00 -- common/autotest_common.sh@185 -- # export PYTHONDONTWRITEBYTECODE=1 00:24:40.712 21:47:00 -- common/autotest_common.sh@185 -- # PYTHONDONTWRITEBYTECODE=1 00:24:40.712 21:47:00 -- 
common/autotest_common.sh@189 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:24:40.712 21:47:00 -- common/autotest_common.sh@189 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:24:40.712 21:47:00 -- common/autotest_common.sh@190 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:24:40.712 21:47:00 -- common/autotest_common.sh@190 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:24:40.712 21:47:00 -- common/autotest_common.sh@194 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:24:40.712 21:47:00 -- common/autotest_common.sh@195 -- # rm -rf /var/tmp/asan_suppression_file 00:24:40.712 21:47:00 -- common/autotest_common.sh@196 -- # cat 00:24:40.712 21:47:00 -- common/autotest_common.sh@222 -- # echo leak:libfuse3.so 00:24:40.712 21:47:00 -- common/autotest_common.sh@224 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:24:40.712 21:47:00 -- common/autotest_common.sh@224 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:24:40.712 21:47:00 -- common/autotest_common.sh@226 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:24:40.712 21:47:00 -- common/autotest_common.sh@226 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:24:40.712 21:47:00 -- common/autotest_common.sh@228 -- # '[' -z /var/spdk/dependencies ']' 00:24:40.712 21:47:00 -- common/autotest_common.sh@231 -- # export DEPENDENCY_DIR 00:24:40.712 21:47:00 -- common/autotest_common.sh@235 -- # export SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:24:40.712 21:47:00 -- common/autotest_common.sh@235 -- # SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:24:40.712 21:47:00 -- common/autotest_common.sh@236 -- # export SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:24:40.712 21:47:00 -- common/autotest_common.sh@236 -- # SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:24:40.712 21:47:00 -- common/autotest_common.sh@239 -- # export QEMU_BIN= 00:24:40.712 21:47:00 -- common/autotest_common.sh@239 -- # QEMU_BIN= 00:24:40.712 21:47:00 -- common/autotest_common.sh@240 -- # export 'VFIO_QEMU_BIN=/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64' 00:24:40.713 21:47:00 -- common/autotest_common.sh@240 -- # VFIO_QEMU_BIN='/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64' 00:24:40.713 21:47:00 -- common/autotest_common.sh@242 -- # export AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:24:40.713 21:47:00 -- common/autotest_common.sh@242 -- # AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:24:40.713 21:47:00 -- common/autotest_common.sh@245 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:24:40.713 21:47:00 -- common/autotest_common.sh@245 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:24:40.713 21:47:00 -- common/autotest_common.sh@247 -- # _LCOV_MAIN=0 00:24:40.713 21:47:00 -- common/autotest_common.sh@248 -- # _LCOV_LLVM=1 00:24:40.713 21:47:00 -- common/autotest_common.sh@249 -- # _LCOV= 00:24:40.713 21:47:00 -- common/autotest_common.sh@250 -- # [[ '' == *clang* ]] 00:24:40.713 21:47:00 -- common/autotest_common.sh@250 -- # [[ 0 -eq 1 ]] 00:24:40.713 21:47:00 -- common/autotest_common.sh@252 -- # _lcov_opt[_LCOV_LLVM]='--gcov-tool /home/vagrant/spdk_repo/spdk/test/fuzz/llvm/llvm-gcov.sh' 00:24:40.713 21:47:00 -- common/autotest_common.sh@253 -- # _lcov_opt[_LCOV_MAIN]= 00:24:40.713 21:47:00 -- 
common/autotest_common.sh@255 -- # lcov_opt= 00:24:40.713 21:47:00 -- common/autotest_common.sh@258 -- # '[' 0 -eq 0 ']' 00:24:40.713 21:47:00 -- common/autotest_common.sh@259 -- # export valgrind= 00:24:40.713 21:47:00 -- common/autotest_common.sh@259 -- # valgrind= 00:24:40.713 21:47:00 -- common/autotest_common.sh@265 -- # uname -s 00:24:40.713 21:47:00 -- common/autotest_common.sh@265 -- # '[' Linux = Linux ']' 00:24:40.713 21:47:00 -- common/autotest_common.sh@266 -- # HUGEMEM=4096 00:24:40.713 21:47:00 -- common/autotest_common.sh@267 -- # export CLEAR_HUGE=yes 00:24:40.713 21:47:00 -- common/autotest_common.sh@267 -- # CLEAR_HUGE=yes 00:24:40.713 21:47:00 -- common/autotest_common.sh@268 -- # [[ 0 -eq 1 ]] 00:24:40.713 21:47:00 -- common/autotest_common.sh@268 -- # [[ 0 -eq 1 ]] 00:24:40.713 21:47:00 -- common/autotest_common.sh@275 -- # MAKE=make 00:24:40.713 21:47:00 -- common/autotest_common.sh@276 -- # MAKEFLAGS=-j10 00:24:40.713 21:47:00 -- common/autotest_common.sh@292 -- # export HUGEMEM=4096 00:24:40.713 21:47:00 -- common/autotest_common.sh@292 -- # HUGEMEM=4096 00:24:40.713 21:47:00 -- common/autotest_common.sh@294 -- # '[' -z /home/vagrant/spdk_repo/spdk/../output ']' 00:24:40.713 21:47:00 -- common/autotest_common.sh@299 -- # NO_HUGE=() 00:24:40.713 21:47:00 -- common/autotest_common.sh@300 -- # TEST_MODE= 00:24:40.713 21:47:00 -- common/autotest_common.sh@319 -- # [[ -z 87240 ]] 00:24:40.713 21:47:00 -- common/autotest_common.sh@319 -- # kill -0 87240 00:24:40.713 21:47:00 -- common/autotest_common.sh@1675 -- # set_test_storage 2147483648 00:24:40.713 21:47:00 -- common/autotest_common.sh@329 -- # [[ -v testdir ]] 00:24:40.713 21:47:00 -- common/autotest_common.sh@331 -- # local requested_size=2147483648 00:24:40.713 21:47:00 -- common/autotest_common.sh@332 -- # local mount target_dir 00:24:40.713 21:47:00 -- common/autotest_common.sh@334 -- # local -A mounts fss sizes avails uses 00:24:40.713 21:47:00 -- common/autotest_common.sh@335 -- # local source fs size avail mount use 00:24:40.713 21:47:00 -- common/autotest_common.sh@337 -- # local storage_fallback storage_candidates 00:24:40.713 21:47:00 -- common/autotest_common.sh@339 -- # mktemp -udt spdk.XXXXXX 00:24:40.713 21:47:00 -- common/autotest_common.sh@339 -- # storage_fallback=/tmp/spdk.0H3Fpd 00:24:40.713 21:47:00 -- common/autotest_common.sh@344 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:24:40.713 21:47:00 -- common/autotest_common.sh@346 -- # [[ -n '' ]] 00:24:40.713 21:47:00 -- common/autotest_common.sh@351 -- # [[ -n '' ]] 00:24:40.713 21:47:00 -- common/autotest_common.sh@356 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/interrupt /tmp/spdk.0H3Fpd/tests/interrupt /tmp/spdk.0H3Fpd 00:24:40.713 21:47:00 -- common/autotest_common.sh@359 -- # requested_size=2214592512 00:24:40.713 21:47:00 -- common/autotest_common.sh@361 -- # read -r source fs size use avail _ mount 00:24:40.713 21:47:00 -- common/autotest_common.sh@328 -- # df -T 00:24:40.713 21:47:00 -- common/autotest_common.sh@328 -- # grep -v Filesystem 00:24:40.713 21:47:00 -- common/autotest_common.sh@362 -- # mounts["$mount"]=tmpfs 00:24:40.713 21:47:00 -- common/autotest_common.sh@362 -- # fss["$mount"]=tmpfs 00:24:40.713 21:47:00 -- common/autotest_common.sh@363 -- # avails["$mount"]=1249312768 00:24:40.713 21:47:00 -- common/autotest_common.sh@363 -- # sizes["$mount"]=1254027264 00:24:40.713 21:47:00 -- common/autotest_common.sh@364 -- # uses["$mount"]=4714496 00:24:40.713 21:47:00 -- 
common/autotest_common.sh@361 -- # read -r source fs size use avail _ mount 00:24:40.713 21:47:00 -- common/autotest_common.sh@362 -- # mounts["$mount"]=/dev/vda1 00:24:40.713 21:47:00 -- common/autotest_common.sh@362 -- # fss["$mount"]=ext4 00:24:40.713 21:47:00 -- common/autotest_common.sh@363 -- # avails["$mount"]=10281889792 00:24:40.713 21:47:00 -- common/autotest_common.sh@363 -- # sizes["$mount"]=19681529856 00:24:40.713 21:47:00 -- common/autotest_common.sh@364 -- # uses["$mount"]=9382862848 00:24:40.713 21:47:00 -- common/autotest_common.sh@361 -- # read -r source fs size use avail _ mount 00:24:40.713 21:47:00 -- common/autotest_common.sh@362 -- # mounts["$mount"]=tmpfs 00:24:40.713 21:47:00 -- common/autotest_common.sh@362 -- # fss["$mount"]=tmpfs 00:24:40.713 21:47:00 -- common/autotest_common.sh@363 -- # avails["$mount"]=6267523072 00:24:40.713 21:47:00 -- common/autotest_common.sh@363 -- # sizes["$mount"]=6270115840 00:24:40.713 21:47:00 -- common/autotest_common.sh@364 -- # uses["$mount"]=2592768 00:24:40.713 21:47:00 -- common/autotest_common.sh@361 -- # read -r source fs size use avail _ mount 00:24:40.713 21:47:00 -- common/autotest_common.sh@362 -- # mounts["$mount"]=tmpfs 00:24:40.713 21:47:00 -- common/autotest_common.sh@362 -- # fss["$mount"]=tmpfs 00:24:40.713 21:47:00 -- common/autotest_common.sh@363 -- # avails["$mount"]=5242880 00:24:40.713 21:47:00 -- common/autotest_common.sh@363 -- # sizes["$mount"]=5242880 00:24:40.713 21:47:00 -- common/autotest_common.sh@364 -- # uses["$mount"]=0 00:24:40.713 21:47:00 -- common/autotest_common.sh@361 -- # read -r source fs size use avail _ mount 00:24:40.713 21:47:00 -- common/autotest_common.sh@362 -- # mounts["$mount"]=/dev/vda16 00:24:40.713 21:47:00 -- common/autotest_common.sh@362 -- # fss["$mount"]=ext4 00:24:40.713 21:47:00 -- common/autotest_common.sh@363 -- # avails["$mount"]=777306112 00:24:40.713 21:47:00 -- common/autotest_common.sh@363 -- # sizes["$mount"]=923156480 00:24:40.713 21:47:00 -- common/autotest_common.sh@364 -- # uses["$mount"]=81207296 00:24:40.713 21:47:00 -- common/autotest_common.sh@361 -- # read -r source fs size use avail _ mount 00:24:40.713 21:47:00 -- common/autotest_common.sh@362 -- # mounts["$mount"]=/dev/vda15 00:24:40.713 21:47:00 -- common/autotest_common.sh@362 -- # fss["$mount"]=vfat 00:24:40.713 21:47:00 -- common/autotest_common.sh@363 -- # avails["$mount"]=103000064 00:24:40.713 21:47:00 -- common/autotest_common.sh@363 -- # sizes["$mount"]=109395968 00:24:40.713 21:47:00 -- common/autotest_common.sh@364 -- # uses["$mount"]=6395904 00:24:40.713 21:47:00 -- common/autotest_common.sh@361 -- # read -r source fs size use avail _ mount 00:24:40.713 21:47:00 -- common/autotest_common.sh@362 -- # mounts["$mount"]=tmpfs 00:24:40.713 21:47:00 -- common/autotest_common.sh@362 -- # fss["$mount"]=tmpfs 00:24:40.713 21:47:00 -- common/autotest_common.sh@363 -- # avails["$mount"]=1254010880 00:24:40.713 21:47:00 -- common/autotest_common.sh@363 -- # sizes["$mount"]=1254023168 00:24:40.713 21:47:00 -- common/autotest_common.sh@364 -- # uses["$mount"]=12288 00:24:40.713 21:47:00 -- common/autotest_common.sh@361 -- # read -r source fs size use avail _ mount 00:24:40.713 21:47:00 -- common/autotest_common.sh@362 -- # mounts["$mount"]=:/mnt/jenkins_nvme/jenkins/workspace/ubuntu24-vg-autotest/ubuntu2404-libvirt/output 00:24:40.713 21:47:00 -- common/autotest_common.sh@362 -- # fss["$mount"]=fuse.sshfs 00:24:40.713 21:47:00 -- common/autotest_common.sh@363 -- # avails["$mount"]=98692534272 00:24:40.713 
21:47:00 -- common/autotest_common.sh@363 -- # sizes["$mount"]=105088212992 00:24:40.713 21:47:00 -- common/autotest_common.sh@364 -- # uses["$mount"]=1010245632 00:24:40.713 21:47:00 -- common/autotest_common.sh@361 -- # read -r source fs size use avail _ mount 00:24:40.713 21:47:00 -- common/autotest_common.sh@367 -- # printf '* Looking for test storage...\n' 00:24:40.713 * Looking for test storage... 00:24:40.713 21:47:00 -- common/autotest_common.sh@369 -- # local target_space new_size 00:24:40.713 21:47:00 -- common/autotest_common.sh@370 -- # for target_dir in "${storage_candidates[@]}" 00:24:40.713 21:47:00 -- common/autotest_common.sh@373 -- # awk '$1 !~ /Filesystem/{print $6}' 00:24:40.713 21:47:00 -- common/autotest_common.sh@373 -- # df /home/vagrant/spdk_repo/spdk/test/interrupt 00:24:40.713 21:47:00 -- common/autotest_common.sh@373 -- # mount=/ 00:24:40.713 21:47:00 -- common/autotest_common.sh@375 -- # target_space=10281889792 00:24:40.713 21:47:00 -- common/autotest_common.sh@376 -- # (( target_space == 0 || target_space < requested_size )) 00:24:40.713 21:47:00 -- common/autotest_common.sh@379 -- # (( target_space >= requested_size )) 00:24:40.713 21:47:00 -- common/autotest_common.sh@381 -- # [[ ext4 == tmpfs ]] 00:24:40.713 21:47:00 -- common/autotest_common.sh@381 -- # [[ ext4 == ramfs ]] 00:24:40.713 21:47:00 -- common/autotest_common.sh@381 -- # [[ / == / ]] 00:24:40.713 21:47:00 -- common/autotest_common.sh@382 -- # new_size=11597455360 00:24:40.713 21:47:00 -- common/autotest_common.sh@383 -- # (( new_size * 100 / sizes[/] > 95 )) 00:24:40.713 21:47:00 -- common/autotest_common.sh@388 -- # export SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/interrupt 00:24:40.713 21:47:00 -- common/autotest_common.sh@388 -- # SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/interrupt 00:24:40.713 21:47:00 -- common/autotest_common.sh@389 -- # printf '* Found test storage at %s\n' /home/vagrant/spdk_repo/spdk/test/interrupt 00:24:40.714 * Found test storage at /home/vagrant/spdk_repo/spdk/test/interrupt 00:24:40.714 21:47:00 -- common/autotest_common.sh@390 -- # return 0 00:24:40.714 21:47:00 -- common/autotest_common.sh@1677 -- # set -o errtrace 00:24:40.714 21:47:00 -- common/autotest_common.sh@1678 -- # shopt -s extdebug 00:24:40.714 21:47:00 -- common/autotest_common.sh@1679 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:24:40.714 21:47:00 -- common/autotest_common.sh@1681 -- # PS4=' \t -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:24:40.714 21:47:00 -- common/autotest_common.sh@1682 -- # true 00:24:40.714 21:47:00 -- common/autotest_common.sh@1684 -- # xtrace_fd 00:24:40.714 21:47:00 -- common/autotest_common.sh@25 -- # [[ -n 13 ]] 00:24:40.714 21:47:00 -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/13 ]] 00:24:40.714 21:47:00 -- common/autotest_common.sh@27 -- # exec 00:24:40.714 21:47:00 -- common/autotest_common.sh@29 -- # exec 00:24:40.714 21:47:00 -- common/autotest_common.sh@31 -- # xtrace_restore 00:24:40.714 21:47:00 -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 
0 : 0 - 1]' 00:24:40.714 21:47:00 -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:24:40.714 21:47:00 -- common/autotest_common.sh@18 -- # set -x 00:24:40.714 21:47:00 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:24:40.714 21:47:00 -- common/autotest_common.sh@1690 -- # lcov --version 00:24:40.714 21:47:00 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:24:40.714 21:47:00 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:24:40.714 21:47:00 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:24:40.714 21:47:00 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:24:40.714 21:47:00 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:24:40.714 21:47:00 -- scripts/common.sh@335 -- # IFS=.-: 00:24:40.714 21:47:00 -- scripts/common.sh@335 -- # read -ra ver1 00:24:40.714 21:47:00 -- scripts/common.sh@336 -- # IFS=.-: 00:24:40.714 21:47:00 -- scripts/common.sh@336 -- # read -ra ver2 00:24:40.714 21:47:00 -- scripts/common.sh@337 -- # local 'op=<' 00:24:40.714 21:47:00 -- scripts/common.sh@339 -- # ver1_l=2 00:24:40.714 21:47:00 -- scripts/common.sh@340 -- # ver2_l=1 00:24:40.714 21:47:00 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:24:40.714 21:47:00 -- scripts/common.sh@343 -- # case "$op" in 00:24:40.714 21:47:00 -- scripts/common.sh@344 -- # : 1 00:24:40.714 21:47:00 -- scripts/common.sh@363 -- # (( v = 0 )) 00:24:40.714 21:47:00 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:24:40.714 21:47:00 -- scripts/common.sh@364 -- # decimal 1 00:24:40.714 21:47:00 -- scripts/common.sh@352 -- # local d=1 00:24:40.714 21:47:00 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:40.714 21:47:00 -- scripts/common.sh@354 -- # echo 1 00:24:40.714 21:47:00 -- scripts/common.sh@364 -- # ver1[v]=1 00:24:40.714 21:47:00 -- scripts/common.sh@365 -- # decimal 2 00:24:40.714 21:47:00 -- scripts/common.sh@352 -- # local d=2 00:24:40.714 21:47:00 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:40.714 21:47:00 -- scripts/common.sh@354 -- # echo 2 00:24:40.714 21:47:00 -- scripts/common.sh@365 -- # ver2[v]=2 00:24:40.714 21:47:00 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:24:40.714 21:47:00 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:24:40.714 21:47:00 -- scripts/common.sh@367 -- # return 0 00:24:40.714 21:47:00 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:40.714 21:47:00 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:24:40.714 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:40.714 --rc genhtml_branch_coverage=1 00:24:40.714 --rc genhtml_function_coverage=1 00:24:40.714 --rc genhtml_legend=1 00:24:40.714 --rc geninfo_all_blocks=1 00:24:40.714 --rc geninfo_unexecuted_blocks=1 00:24:40.714 00:24:40.714 ' 00:24:40.714 21:47:00 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:24:40.714 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:40.714 --rc genhtml_branch_coverage=1 00:24:40.714 --rc genhtml_function_coverage=1 00:24:40.714 --rc genhtml_legend=1 00:24:40.714 --rc geninfo_all_blocks=1 00:24:40.714 --rc geninfo_unexecuted_blocks=1 00:24:40.714 00:24:40.714 ' 00:24:40.714 21:47:00 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:24:40.714 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:40.714 --rc genhtml_branch_coverage=1 00:24:40.714 --rc genhtml_function_coverage=1 00:24:40.714 --rc genhtml_legend=1 00:24:40.714 --rc geninfo_all_blocks=1 00:24:40.714 --rc 
geninfo_unexecuted_blocks=1 00:24:40.714 00:24:40.714 ' 00:24:40.714 21:47:00 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:24:40.714 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:40.714 --rc genhtml_branch_coverage=1 00:24:40.714 --rc genhtml_function_coverage=1 00:24:40.714 --rc genhtml_legend=1 00:24:40.714 --rc geninfo_all_blocks=1 00:24:40.714 --rc geninfo_unexecuted_blocks=1 00:24:40.714 00:24:40.714 ' 00:24:40.714 21:47:00 -- interrupt/interrupt_common.sh@9 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:24:40.714 21:47:00 -- interrupt/interrupt_common.sh@11 -- # r0_mask=0x1 00:24:40.714 21:47:00 -- interrupt/interrupt_common.sh@12 -- # r1_mask=0x2 00:24:40.714 21:47:00 -- interrupt/interrupt_common.sh@13 -- # r2_mask=0x4 00:24:40.714 21:47:00 -- interrupt/interrupt_common.sh@15 -- # cpu_server_mask=0x07 00:24:40.714 21:47:00 -- interrupt/interrupt_common.sh@16 -- # rpc_server_addr=/var/tmp/spdk.sock 00:24:40.714 21:47:00 -- interrupt/reactor_set_interrupt.sh@11 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/examples/interrupt_tgt 00:24:40.714 21:47:00 -- interrupt/reactor_set_interrupt.sh@11 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/examples/interrupt_tgt 00:24:40.714 21:47:00 -- interrupt/reactor_set_interrupt.sh@86 -- # start_intr_tgt 00:24:40.714 21:47:00 -- interrupt/interrupt_common.sh@23 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:40.714 21:47:00 -- interrupt/interrupt_common.sh@24 -- # local cpu_mask=0x07 00:24:40.714 21:47:00 -- interrupt/interrupt_common.sh@27 -- # intr_tgt_pid=87295 00:24:40.714 21:47:00 -- interrupt/interrupt_common.sh@26 -- # /home/vagrant/spdk_repo/spdk/build/examples/interrupt_tgt -m 0x07 -r /var/tmp/spdk.sock -E -g 00:24:40.714 21:47:00 -- interrupt/interrupt_common.sh@28 -- # trap 'killprocess "$intr_tgt_pid"; cleanup; exit 1' SIGINT SIGTERM EXIT 00:24:40.714 21:47:00 -- interrupt/interrupt_common.sh@29 -- # waitforlisten 87295 /var/tmp/spdk.sock 00:24:40.714 21:47:00 -- common/autotest_common.sh@829 -- # '[' -z 87295 ']' 00:24:40.714 21:47:00 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:40.714 21:47:00 -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:40.714 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:40.714 21:47:00 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:40.714 21:47:00 -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:40.714 21:47:00 -- common/autotest_common.sh@10 -- # set +x 00:24:40.714 [2024-12-06 21:47:00.911337] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:24:40.714 [2024-12-06 21:47:00.911535] [ DPDK EAL parameters: interrupt_tgt --no-shconf -c 0x07 --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87295 ] 00:24:40.714 [2024-12-06 21:47:01.078940] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:24:40.973 [2024-12-06 21:47:01.235228] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:24:40.973 [2024-12-06 21:47:01.235332] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:40.973 [2024-12-06 21:47:01.235354] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:24:40.973 [2024-12-06 21:47:01.445743] thread.c:2087:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:24:41.540 21:47:01 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:41.540 21:47:01 -- common/autotest_common.sh@862 -- # return 0 00:24:41.540 21:47:01 -- interrupt/reactor_set_interrupt.sh@87 -- # setup_bdev_mem 00:24:41.540 21:47:01 -- interrupt/interrupt_common.sh@90 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:24:41.800 Malloc0 00:24:41.800 Malloc1 00:24:41.800 Malloc2 00:24:41.800 21:47:02 -- interrupt/reactor_set_interrupt.sh@88 -- # setup_bdev_aio 00:24:41.800 21:47:02 -- interrupt/interrupt_common.sh@98 -- # uname -s 00:24:41.800 21:47:02 -- interrupt/interrupt_common.sh@98 -- # [[ Linux != \F\r\e\e\B\S\D ]] 00:24:41.800 21:47:02 -- interrupt/interrupt_common.sh@99 -- # dd if=/dev/zero of=/home/vagrant/spdk_repo/spdk/test/interrupt/aiofile bs=2048 count=5000 00:24:41.800 5000+0 records in 00:24:41.800 5000+0 records out 00:24:41.800 10240000 bytes (10 MB, 9.8 MiB) copied, 0.0210558 s, 486 MB/s 00:24:41.800 21:47:02 -- interrupt/interrupt_common.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/interrupt/aiofile AIO0 2048 00:24:42.059 AIO0 00:24:42.059 21:47:02 -- interrupt/reactor_set_interrupt.sh@90 -- # reactor_set_mode_without_threads 87295 00:24:42.059 21:47:02 -- interrupt/reactor_set_interrupt.sh@76 -- # reactor_set_intr_mode 87295 without_thd 00:24:42.059 21:47:02 -- interrupt/reactor_set_interrupt.sh@14 -- # local spdk_pid=87295 00:24:42.059 21:47:02 -- interrupt/reactor_set_interrupt.sh@15 -- # local without_thd=without_thd 00:24:42.059 21:47:02 -- interrupt/reactor_set_interrupt.sh@17 -- # thd0_ids=($(reactor_get_thread_ids $r0_mask)) 00:24:42.059 21:47:02 -- interrupt/reactor_set_interrupt.sh@17 -- # reactor_get_thread_ids 0x1 00:24:42.059 21:47:02 -- interrupt/interrupt_common.sh@78 -- # local reactor_cpumask=0x1 00:24:42.059 21:47:02 -- interrupt/interrupt_common.sh@79 -- # local grep_str 00:24:42.059 21:47:02 -- interrupt/interrupt_common.sh@81 -- # reactor_cpumask=1 00:24:42.059 21:47:02 -- interrupt/interrupt_common.sh@82 -- # jq_str='.threads|.[]|select(.cpumask == $reactor_cpumask)|.id' 00:24:42.059 21:47:02 -- interrupt/interrupt_common.sh@85 -- # jq --arg reactor_cpumask 1 '.threads|.[]|select(.cpumask == $reactor_cpumask)|.id' 00:24:42.059 21:47:02 -- interrupt/interrupt_common.sh@85 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py thread_get_stats 00:24:42.318 21:47:02 -- interrupt/interrupt_common.sh@85 -- # echo 1 00:24:42.318 21:47:02 -- interrupt/reactor_set_interrupt.sh@18 -- # thd2_ids=($(reactor_get_thread_ids $r2_mask)) 00:24:42.318 21:47:02 -- interrupt/reactor_set_interrupt.sh@18 -- # reactor_get_thread_ids 0x4 
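The reactor_get_thread_ids helper expanding in the trace above and below resolves a reactor cpumask to SPDK thread ids by filtering the thread_get_stats RPC output with jq. A minimal sketch, reconstructed from this xtrace alone rather than copied from interrupt_common.sh (the rpc.py path and the jq filter are the ones the trace itself shows; a running target such as the interrupt_tgt started above is assumed):

    reactor_get_thread_ids() {
        local reactor_cpumask=$1
        local jq_str='.threads|.[]|select(.cpumask == $reactor_cpumask)|.id'
        # Normalize the hex mask (0x1 -> 1, 0x4 -> 4), matching the
        # reactor_cpumask=1 / reactor_cpumask=4 assignments in the trace.
        reactor_cpumask=$((reactor_cpumask))
        # List the target's threads and print the ids whose cpumask
        # matches this reactor's mask.
        /home/vagrant/spdk_repo/spdk/scripts/rpc.py thread_get_stats \
            | jq --arg reactor_cpumask "$reactor_cpumask" "$jq_str"
    }

For reactor 0 (mask 0x1) the lookup yields thread id 1, which is why the log prints 'spdk_thread ids are 1 on reactor0.' just below; for reactor 2 (mask 0x4) it yields nothing at this point, since no thread has been pinned there yet.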
00:24:42.318 21:47:02 -- interrupt/interrupt_common.sh@78 -- # local reactor_cpumask=0x4 00:24:42.318 21:47:02 -- interrupt/interrupt_common.sh@79 -- # local grep_str 00:24:42.318 21:47:02 -- interrupt/interrupt_common.sh@81 -- # reactor_cpumask=4 00:24:42.318 21:47:02 -- interrupt/interrupt_common.sh@82 -- # jq_str='.threads|.[]|select(.cpumask == $reactor_cpumask)|.id' 00:24:42.318 21:47:02 -- interrupt/interrupt_common.sh@85 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py thread_get_stats 00:24:42.318 21:47:02 -- interrupt/interrupt_common.sh@85 -- # jq --arg reactor_cpumask 4 '.threads|.[]|select(.cpumask == $reactor_cpumask)|.id' 00:24:42.577 21:47:02 -- interrupt/interrupt_common.sh@85 -- # echo '' 00:24:42.577 spdk_thread ids are 1 on reactor0. 00:24:42.577 21:47:02 -- interrupt/reactor_set_interrupt.sh@21 -- # [[ 1 -eq 0 ]] 00:24:42.577 21:47:02 -- interrupt/reactor_set_interrupt.sh@25 -- # echo 'spdk_thread ids are 1 on reactor0.' 00:24:42.577 21:47:02 -- interrupt/reactor_set_interrupt.sh@29 -- # for i in {0..2} 00:24:42.577 21:47:02 -- interrupt/reactor_set_interrupt.sh@30 -- # reactor_is_idle 87295 0 00:24:42.577 21:47:02 -- interrupt/interrupt_common.sh@74 -- # reactor_is_busy_or_idle 87295 0 idle 00:24:42.577 21:47:02 -- interrupt/interrupt_common.sh@33 -- # local pid=87295 00:24:42.577 21:47:02 -- interrupt/interrupt_common.sh@34 -- # local idx=0 00:24:42.577 21:47:02 -- interrupt/interrupt_common.sh@35 -- # local state=idle 00:24:42.577 21:47:02 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \b\u\s\y ]] 00:24:42.577 21:47:02 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \i\d\l\e ]] 00:24:42.577 21:47:02 -- interrupt/interrupt_common.sh@41 -- # hash top 00:24:42.577 21:47:02 -- interrupt/interrupt_common.sh@46 -- # (( j = 10 )) 00:24:42.577 21:47:02 -- interrupt/interrupt_common.sh@46 -- # (( j != 0 )) 00:24:42.577 21:47:02 -- interrupt/interrupt_common.sh@47 -- # top -bHn 1 -p 87295 -w 256 00:24:42.577 21:47:02 -- interrupt/interrupt_common.sh@47 -- # grep reactor_0 00:24:42.837 21:47:03 -- interrupt/interrupt_common.sh@47 -- # top_reactor=' 87295 root 20 0 20.1t 147584 29824 S 10.0 1.2 0:00.60 reactor_0' 00:24:42.837 21:47:03 -- interrupt/interrupt_common.sh@48 -- # echo 87295 root 20 0 20.1t 147584 29824 S 10.0 1.2 0:00.60 reactor_0 00:24:42.837 21:47:03 -- interrupt/interrupt_common.sh@48 -- # sed -e 's/^\s*//g' 00:24:42.837 21:47:03 -- interrupt/interrupt_common.sh@48 -- # awk '{print $9}' 00:24:42.837 21:47:03 -- interrupt/interrupt_common.sh@48 -- # cpu_rate=10.0 00:24:42.837 21:47:03 -- interrupt/interrupt_common.sh@49 -- # cpu_rate=10 00:24:42.837 21:47:03 -- interrupt/interrupt_common.sh@51 -- # [[ idle = \b\u\s\y ]] 00:24:42.837 21:47:03 -- interrupt/interrupt_common.sh@53 -- # [[ idle = \i\d\l\e ]] 00:24:42.837 21:47:03 -- interrupt/interrupt_common.sh@53 -- # [[ 10 -gt 30 ]] 00:24:42.837 21:47:03 -- interrupt/interrupt_common.sh@56 -- # return 0 00:24:42.837 21:47:03 -- interrupt/reactor_set_interrupt.sh@29 -- # for i in {0..2} 00:24:42.837 21:47:03 -- interrupt/reactor_set_interrupt.sh@30 -- # reactor_is_idle 87295 1 00:24:42.837 21:47:03 -- interrupt/interrupt_common.sh@74 -- # reactor_is_busy_or_idle 87295 1 idle 00:24:42.837 21:47:03 -- interrupt/interrupt_common.sh@33 -- # local pid=87295 00:24:42.837 21:47:03 -- interrupt/interrupt_common.sh@34 -- # local idx=1 00:24:42.837 21:47:03 -- interrupt/interrupt_common.sh@35 -- # local state=idle 00:24:42.837 21:47:03 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \b\u\s\y ]] 00:24:42.837 
21:47:03 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \i\d\l\e ]] 00:24:42.837 21:47:03 -- interrupt/interrupt_common.sh@41 -- # hash top 00:24:42.837 21:47:03 -- interrupt/interrupt_common.sh@46 -- # (( j = 10 )) 00:24:42.837 21:47:03 -- interrupt/interrupt_common.sh@46 -- # (( j != 0 )) 00:24:42.837 21:47:03 -- interrupt/interrupt_common.sh@47 -- # top -bHn 1 -p 87295 -w 256 00:24:42.837 21:47:03 -- interrupt/interrupt_common.sh@47 -- # grep reactor_1 00:24:42.837 21:47:03 -- interrupt/interrupt_common.sh@47 -- # top_reactor=' 87303 root 20 0 20.1t 147584 29824 S 0.0 1.2 0:00.00 reactor_1' 00:24:42.837 21:47:03 -- interrupt/interrupt_common.sh@48 -- # echo 87303 root 20 0 20.1t 147584 29824 S 0.0 1.2 0:00.00 reactor_1 00:24:42.837 21:47:03 -- interrupt/interrupt_common.sh@48 -- # sed -e 's/^\s*//g' 00:24:42.837 21:47:03 -- interrupt/interrupt_common.sh@48 -- # awk '{print $9}' 00:24:42.837 21:47:03 -- interrupt/interrupt_common.sh@48 -- # cpu_rate=0.0 00:24:42.837 21:47:03 -- interrupt/interrupt_common.sh@49 -- # cpu_rate=0 00:24:42.837 21:47:03 -- interrupt/interrupt_common.sh@51 -- # [[ idle = \b\u\s\y ]] 00:24:42.837 21:47:03 -- interrupt/interrupt_common.sh@53 -- # [[ idle = \i\d\l\e ]] 00:24:42.837 21:47:03 -- interrupt/interrupt_common.sh@53 -- # [[ 0 -gt 30 ]] 00:24:42.837 21:47:03 -- interrupt/interrupt_common.sh@56 -- # return 0 00:24:42.837 21:47:03 -- interrupt/reactor_set_interrupt.sh@29 -- # for i in {0..2} 00:24:42.837 21:47:03 -- interrupt/reactor_set_interrupt.sh@30 -- # reactor_is_idle 87295 2 00:24:42.837 21:47:03 -- interrupt/interrupt_common.sh@74 -- # reactor_is_busy_or_idle 87295 2 idle 00:24:42.837 21:47:03 -- interrupt/interrupt_common.sh@33 -- # local pid=87295 00:24:42.837 21:47:03 -- interrupt/interrupt_common.sh@34 -- # local idx=2 00:24:42.837 21:47:03 -- interrupt/interrupt_common.sh@35 -- # local state=idle 00:24:43.096 21:47:03 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \b\u\s\y ]] 00:24:43.096 21:47:03 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \i\d\l\e ]] 00:24:43.096 21:47:03 -- interrupt/interrupt_common.sh@41 -- # hash top 00:24:43.096 21:47:03 -- interrupt/interrupt_common.sh@46 -- # (( j = 10 )) 00:24:43.096 21:47:03 -- interrupt/interrupt_common.sh@46 -- # (( j != 0 )) 00:24:43.096 21:47:03 -- interrupt/interrupt_common.sh@47 -- # top -bHn 1 -p 87295 -w 256 00:24:43.096 21:47:03 -- interrupt/interrupt_common.sh@47 -- # grep reactor_2 00:24:43.096 21:47:03 -- interrupt/interrupt_common.sh@47 -- # top_reactor=' 87304 root 20 0 20.1t 147584 29824 S 0.0 1.2 0:00.00 reactor_2' 00:24:43.096 21:47:03 -- interrupt/interrupt_common.sh@48 -- # echo 87304 root 20 0 20.1t 147584 29824 S 0.0 1.2 0:00.00 reactor_2 00:24:43.096 21:47:03 -- interrupt/interrupt_common.sh@48 -- # sed -e 's/^\s*//g' 00:24:43.096 21:47:03 -- interrupt/interrupt_common.sh@48 -- # awk '{print $9}' 00:24:43.096 21:47:03 -- interrupt/interrupt_common.sh@48 -- # cpu_rate=0.0 00:24:43.096 21:47:03 -- interrupt/interrupt_common.sh@49 -- # cpu_rate=0 00:24:43.096 21:47:03 -- interrupt/interrupt_common.sh@51 -- # [[ idle = \b\u\s\y ]] 00:24:43.096 21:47:03 -- interrupt/interrupt_common.sh@53 -- # [[ idle = \i\d\l\e ]] 00:24:43.096 21:47:03 -- interrupt/interrupt_common.sh@53 -- # [[ 0 -gt 30 ]] 00:24:43.096 21:47:03 -- interrupt/interrupt_common.sh@56 -- # return 0 00:24:43.096 21:47:03 -- interrupt/reactor_set_interrupt.sh@33 -- # '[' without_thdx '!=' x ']' 00:24:43.096 21:47:03 -- interrupt/reactor_set_interrupt.sh@35 -- # for i in "${thd0_ids[@]}" 00:24:43.096 
21:47:03 -- interrupt/reactor_set_interrupt.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py thread_set_cpumask -i 1 -m 0x2 00:24:43.355 [2024-12-06 21:47:03.794402] thread.c:2087:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:24:43.355 21:47:03 -- interrupt/reactor_set_interrupt.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 0 -d 00:24:43.614 [2024-12-06 21:47:04.042119] interrupt_tgt.c: 61:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to disable interrupt mode on reactor 0. 00:24:43.614 [2024-12-06 21:47:04.042994] interrupt_tgt.c: 32:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch 00:24:43.614 21:47:04 -- interrupt/reactor_set_interrupt.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 2 -d 00:24:43.888 [2024-12-06 21:47:04.286002] interrupt_tgt.c: 61:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to disable interrupt mode on reactor 2. 00:24:43.888 [2024-12-06 21:47:04.286732] interrupt_tgt.c: 32:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch 00:24:43.888 21:47:04 -- interrupt/reactor_set_interrupt.sh@46 -- # for i in 0 2 00:24:43.888 21:47:04 -- interrupt/reactor_set_interrupt.sh@47 -- # reactor_is_busy 87295 0 00:24:43.888 21:47:04 -- interrupt/interrupt_common.sh@70 -- # reactor_is_busy_or_idle 87295 0 busy 00:24:43.888 21:47:04 -- interrupt/interrupt_common.sh@33 -- # local pid=87295 00:24:43.888 21:47:04 -- interrupt/interrupt_common.sh@34 -- # local idx=0 00:24:43.888 21:47:04 -- interrupt/interrupt_common.sh@35 -- # local state=busy 00:24:43.888 21:47:04 -- interrupt/interrupt_common.sh@37 -- # [[ busy != \b\u\s\y ]] 00:24:43.888 21:47:04 -- interrupt/interrupt_common.sh@41 -- # hash top 00:24:43.888 21:47:04 -- interrupt/interrupt_common.sh@46 -- # (( j = 10 )) 00:24:43.888 21:47:04 -- interrupt/interrupt_common.sh@46 -- # (( j != 0 )) 00:24:43.888 21:47:04 -- interrupt/interrupt_common.sh@47 -- # top -bHn 1 -p 87295 -w 256 00:24:43.888 21:47:04 -- interrupt/interrupt_common.sh@47 -- # grep reactor_0 00:24:44.165 21:47:04 -- interrupt/interrupt_common.sh@47 -- # top_reactor=' 87295 root 20 0 20.1t 150784 29824 R 99.9 1.2 0:01.08 reactor_0' 00:24:44.165 21:47:04 -- interrupt/interrupt_common.sh@48 -- # echo 87295 root 20 0 20.1t 150784 29824 R 99.9 1.2 0:01.08 reactor_0 00:24:44.165 21:47:04 -- interrupt/interrupt_common.sh@48 -- # sed -e 's/^\s*//g' 00:24:44.166 21:47:04 -- interrupt/interrupt_common.sh@48 -- # awk '{print $9}' 00:24:44.166 21:47:04 -- interrupt/interrupt_common.sh@48 -- # cpu_rate=99.9 00:24:44.166 21:47:04 -- interrupt/interrupt_common.sh@49 -- # cpu_rate=99 00:24:44.166 21:47:04 -- interrupt/interrupt_common.sh@51 -- # [[ busy = \b\u\s\y ]] 00:24:44.166 21:47:04 -- interrupt/interrupt_common.sh@51 -- # [[ 99 -lt 70 ]] 00:24:44.166 21:47:04 -- interrupt/interrupt_common.sh@53 -- # [[ busy = \i\d\l\e ]] 00:24:44.166 21:47:04 -- interrupt/interrupt_common.sh@56 -- # return 0 00:24:44.166 21:47:04 -- interrupt/reactor_set_interrupt.sh@46 -- # for i in 0 2 00:24:44.166 21:47:04 -- interrupt/reactor_set_interrupt.sh@47 -- # reactor_is_busy 87295 2 00:24:44.166 21:47:04 -- interrupt/interrupt_common.sh@70 -- # reactor_is_busy_or_idle 87295 2 busy 00:24:44.166 21:47:04 -- interrupt/interrupt_common.sh@33 -- # local pid=87295 00:24:44.166 21:47:04 -- interrupt/interrupt_common.sh@34 -- # local idx=2 00:24:44.166 21:47:04 -- 
interrupt/interrupt_common.sh@35 -- # local state=busy 00:24:44.166 21:47:04 -- interrupt/interrupt_common.sh@37 -- # [[ busy != \b\u\s\y ]] 00:24:44.166 21:47:04 -- interrupt/interrupt_common.sh@41 -- # hash top 00:24:44.166 21:47:04 -- interrupt/interrupt_common.sh@46 -- # (( j = 10 )) 00:24:44.166 21:47:04 -- interrupt/interrupt_common.sh@46 -- # (( j != 0 )) 00:24:44.166 21:47:04 -- interrupt/interrupt_common.sh@47 -- # top -bHn 1 -p 87295 -w 256 00:24:44.166 21:47:04 -- interrupt/interrupt_common.sh@47 -- # grep reactor_2 00:24:44.432 21:47:04 -- interrupt/interrupt_common.sh@47 -- # top_reactor=' 87304 root 20 0 20.1t 150784 29824 R 90.9 1.2 0:00.44 reactor_2' 00:24:44.432 21:47:04 -- interrupt/interrupt_common.sh@48 -- # echo 87304 root 20 0 20.1t 150784 29824 R 90.9 1.2 0:00.44 reactor_2 00:24:44.432 21:47:04 -- interrupt/interrupt_common.sh@48 -- # sed -e 's/^\s*//g' 00:24:44.432 21:47:04 -- interrupt/interrupt_common.sh@48 -- # awk '{print $9}' 00:24:44.432 21:47:04 -- interrupt/interrupt_common.sh@48 -- # cpu_rate=90.9 00:24:44.432 21:47:04 -- interrupt/interrupt_common.sh@49 -- # cpu_rate=90 00:24:44.432 21:47:04 -- interrupt/interrupt_common.sh@51 -- # [[ busy = \b\u\s\y ]] 00:24:44.432 21:47:04 -- interrupt/interrupt_common.sh@51 -- # [[ 90 -lt 70 ]] 00:24:44.432 21:47:04 -- interrupt/interrupt_common.sh@53 -- # [[ busy = \i\d\l\e ]] 00:24:44.432 21:47:04 -- interrupt/interrupt_common.sh@56 -- # return 0 00:24:44.432 21:47:04 -- interrupt/reactor_set_interrupt.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 2 00:24:44.691 [2024-12-06 21:47:04.934053] interrupt_tgt.c: 61:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to enable interrupt mode on reactor 2. 00:24:44.691 [2024-12-06 21:47:04.934741] interrupt_tgt.c: 32:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch 00:24:44.691 21:47:04 -- interrupt/reactor_set_interrupt.sh@52 -- # '[' without_thdx '!=' x ']' 00:24:44.691 21:47:04 -- interrupt/reactor_set_interrupt.sh@59 -- # reactor_is_idle 87295 2 00:24:44.691 21:47:04 -- interrupt/interrupt_common.sh@74 -- # reactor_is_busy_or_idle 87295 2 idle 00:24:44.691 21:47:04 -- interrupt/interrupt_common.sh@33 -- # local pid=87295 00:24:44.691 21:47:04 -- interrupt/interrupt_common.sh@34 -- # local idx=2 00:24:44.691 21:47:04 -- interrupt/interrupt_common.sh@35 -- # local state=idle 00:24:44.691 21:47:04 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \b\u\s\y ]] 00:24:44.691 21:47:04 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \i\d\l\e ]] 00:24:44.691 21:47:04 -- interrupt/interrupt_common.sh@41 -- # hash top 00:24:44.691 21:47:04 -- interrupt/interrupt_common.sh@46 -- # (( j = 10 )) 00:24:44.691 21:47:04 -- interrupt/interrupt_common.sh@46 -- # (( j != 0 )) 00:24:44.691 21:47:04 -- interrupt/interrupt_common.sh@47 -- # top -bHn 1 -p 87295 -w 256 00:24:44.691 21:47:04 -- interrupt/interrupt_common.sh@47 -- # grep reactor_2 00:24:44.691 21:47:05 -- interrupt/interrupt_common.sh@47 -- # top_reactor=' 87304 root 20 0 20.1t 150912 29824 S 0.0 1.2 0:00.64 reactor_2' 00:24:44.691 21:47:05 -- interrupt/interrupt_common.sh@48 -- # sed -e 's/^\s*//g' 00:24:44.691 21:47:05 -- interrupt/interrupt_common.sh@48 -- # echo 87304 root 20 0 20.1t 150912 29824 S 0.0 1.2 0:00.64 reactor_2 00:24:44.691 21:47:05 -- interrupt/interrupt_common.sh@48 -- # awk '{print $9}' 00:24:44.691 21:47:05 -- interrupt/interrupt_common.sh@48 -- # cpu_rate=0.0 00:24:44.691 21:47:05 -- interrupt/interrupt_common.sh@49 -- # 
cpu_rate=0 00:24:44.691 21:47:05 -- interrupt/interrupt_common.sh@51 -- # [[ idle = \b\u\s\y ]] 00:24:44.691 21:47:05 -- interrupt/interrupt_common.sh@53 -- # [[ idle = \i\d\l\e ]] 00:24:44.691 21:47:05 -- interrupt/interrupt_common.sh@53 -- # [[ 0 -gt 30 ]] 00:24:44.691 21:47:05 -- interrupt/interrupt_common.sh@56 -- # return 0 00:24:44.691 21:47:05 -- interrupt/reactor_set_interrupt.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 0 00:24:44.949 [2024-12-06 21:47:05.342004] interrupt_tgt.c: 61:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to enable interrupt mode on reactor 0. 00:24:44.949 [2024-12-06 21:47:05.342535] interrupt_tgt.c: 32:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch 00:24:44.949 21:47:05 -- interrupt/reactor_set_interrupt.sh@63 -- # '[' without_thdx '!=' x ']' 00:24:44.949 21:47:05 -- interrupt/reactor_set_interrupt.sh@65 -- # for i in "${thd0_ids[@]}" 00:24:44.949 21:47:05 -- interrupt/reactor_set_interrupt.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py thread_set_cpumask -i 1 -m 0x1 00:24:45.208 [2024-12-06 21:47:05.518368] thread.c:2087:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:24:45.208 21:47:05 -- interrupt/reactor_set_interrupt.sh@70 -- # reactor_is_idle 87295 0 00:24:45.208 21:47:05 -- interrupt/interrupt_common.sh@74 -- # reactor_is_busy_or_idle 87295 0 idle 00:24:45.208 21:47:05 -- interrupt/interrupt_common.sh@33 -- # local pid=87295 00:24:45.208 21:47:05 -- interrupt/interrupt_common.sh@34 -- # local idx=0 00:24:45.208 21:47:05 -- interrupt/interrupt_common.sh@35 -- # local state=idle 00:24:45.208 21:47:05 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \b\u\s\y ]] 00:24:45.208 21:47:05 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \i\d\l\e ]] 00:24:45.208 21:47:05 -- interrupt/interrupt_common.sh@41 -- # hash top 00:24:45.208 21:47:05 -- interrupt/interrupt_common.sh@46 -- # (( j = 10 )) 00:24:45.208 21:47:05 -- interrupt/interrupt_common.sh@46 -- # (( j != 0 )) 00:24:45.208 21:47:05 -- interrupt/interrupt_common.sh@47 -- # grep reactor_0 00:24:45.209 21:47:05 -- interrupt/interrupt_common.sh@47 -- # top -bHn 1 -p 87295 -w 256 00:24:45.468 21:47:05 -- interrupt/interrupt_common.sh@47 -- # top_reactor=' 87295 root 20 0 20.1t 151040 29824 S 10.0 1.2 0:01.92 reactor_0' 00:24:45.468 21:47:05 -- interrupt/interrupt_common.sh@48 -- # echo 87295 root 20 0 20.1t 151040 29824 S 10.0 1.2 0:01.92 reactor_0 00:24:45.468 21:47:05 -- interrupt/interrupt_common.sh@48 -- # sed -e 's/^\s*//g' 00:24:45.468 21:47:05 -- interrupt/interrupt_common.sh@48 -- # awk '{print $9}' 00:24:45.468 21:47:05 -- interrupt/interrupt_common.sh@48 -- # cpu_rate=10.0 00:24:45.468 21:47:05 -- interrupt/interrupt_common.sh@49 -- # cpu_rate=10 00:24:45.468 21:47:05 -- interrupt/interrupt_common.sh@51 -- # [[ idle = \b\u\s\y ]] 00:24:45.468 21:47:05 -- interrupt/interrupt_common.sh@53 -- # [[ idle = \i\d\l\e ]] 00:24:45.468 21:47:05 -- interrupt/interrupt_common.sh@53 -- # [[ 10 -gt 30 ]] 00:24:45.468 21:47:05 -- interrupt/interrupt_common.sh@56 -- # return 0 00:24:45.468 21:47:05 -- interrupt/reactor_set_interrupt.sh@72 -- # return 0 00:24:45.468 21:47:05 -- interrupt/reactor_set_interrupt.sh@77 -- # return 0 00:24:45.468 21:47:05 -- interrupt/reactor_set_interrupt.sh@92 -- # trap - SIGINT SIGTERM EXIT 00:24:45.468 21:47:05 -- interrupt/reactor_set_interrupt.sh@93 -- # killprocess 87295 00:24:45.468 21:47:05 -- 
common/autotest_common.sh@936 -- # '[' -z 87295 ']' 00:24:45.468 21:47:05 -- common/autotest_common.sh@940 -- # kill -0 87295 00:24:45.468 21:47:05 -- common/autotest_common.sh@941 -- # uname 00:24:45.468 21:47:05 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:24:45.468 21:47:05 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 87295 00:24:45.468 killing process with pid 87295 00:24:45.468 21:47:05 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:24:45.468 21:47:05 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:24:45.468 21:47:05 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 87295' 00:24:45.468 21:47:05 -- common/autotest_common.sh@955 -- # kill 87295 00:24:45.468 21:47:05 -- common/autotest_common.sh@960 -- # wait 87295 00:24:46.405 21:47:06 -- interrupt/reactor_set_interrupt.sh@94 -- # cleanup 00:24:46.405 21:47:06 -- interrupt/interrupt_common.sh@19 -- # rm -f /home/vagrant/spdk_repo/spdk/test/interrupt/aiofile 00:24:46.405 21:47:06 -- interrupt/reactor_set_interrupt.sh@97 -- # start_intr_tgt 00:24:46.405 21:47:06 -- interrupt/interrupt_common.sh@23 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:46.405 21:47:06 -- interrupt/interrupt_common.sh@24 -- # local cpu_mask=0x07 00:24:46.405 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:46.405 21:47:06 -- interrupt/interrupt_common.sh@27 -- # intr_tgt_pid=87438 00:24:46.405 21:47:06 -- interrupt/interrupt_common.sh@26 -- # /home/vagrant/spdk_repo/spdk/build/examples/interrupt_tgt -m 0x07 -r /var/tmp/spdk.sock -E -g 00:24:46.406 21:47:06 -- interrupt/interrupt_common.sh@28 -- # trap 'killprocess "$intr_tgt_pid"; cleanup; exit 1' SIGINT SIGTERM EXIT 00:24:46.406 21:47:06 -- interrupt/interrupt_common.sh@29 -- # waitforlisten 87438 /var/tmp/spdk.sock 00:24:46.406 21:47:06 -- common/autotest_common.sh@829 -- # '[' -z 87438 ']' 00:24:46.406 21:47:06 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:46.406 21:47:06 -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:46.406 21:47:06 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:46.406 21:47:06 -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:46.406 21:47:06 -- common/autotest_common.sh@10 -- # set +x 00:24:46.665 [2024-12-06 21:47:06.943087] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:24:46.665 [2024-12-06 21:47:06.943488] [ DPDK EAL parameters: interrupt_tgt --no-shconf -c 0x07 --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87438 ] 00:24:46.665 [2024-12-06 21:47:07.111218] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:24:46.924 [2024-12-06 21:47:07.275768] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:24:46.924 [2024-12-06 21:47:07.275870] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:46.924 [2024-12-06 21:47:07.275892] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:24:47.183 [2024-12-06 21:47:07.488546] thread.c:2087:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 
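The repeated reactor_is_busy_or_idle checks above all follow one pattern: take a single batch sample of the target's threads with top, isolate the reactor_N row, read its %CPU column, and compare it against the harness's bands (idle must not exceed 30%, busy must not fall under 70%). Condensed into a standalone sketch using the same commands, with the two bands collapsed to a single threshold for brevity (the pid is the one from the first run above):

pid=87295
row=$(top -bHn 1 -p "$pid" -w 256 | grep reactor_0 | sed -e 's/^\s*//g')
cpu_rate=$(awk '{print $9}' <<< "$row")   # %CPU column of the thread row
cpu_rate=${cpu_rate%.*}                   # truncate 10.0 -> 10, as the helper does
[[ $cpu_rate -gt 30 ]] && echo "reactor_0 busy" || echo "reactor_0 idle"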
00:24:47.441 21:47:07 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:47.441 21:47:07 -- common/autotest_common.sh@862 -- # return 0 00:24:47.441 21:47:07 -- interrupt/reactor_set_interrupt.sh@98 -- # setup_bdev_mem 00:24:47.441 21:47:07 -- interrupt/interrupt_common.sh@90 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:24:47.700 Malloc0 00:24:47.700 Malloc1 00:24:47.700 Malloc2 00:24:47.700 21:47:08 -- interrupt/reactor_set_interrupt.sh@99 -- # setup_bdev_aio 00:24:47.700 21:47:08 -- interrupt/interrupt_common.sh@98 -- # uname -s 00:24:47.700 21:47:08 -- interrupt/interrupt_common.sh@98 -- # [[ Linux != \F\r\e\e\B\S\D ]] 00:24:47.700 21:47:08 -- interrupt/interrupt_common.sh@99 -- # dd if=/dev/zero of=/home/vagrant/spdk_repo/spdk/test/interrupt/aiofile bs=2048 count=5000 00:24:47.959 5000+0 records in 00:24:47.959 5000+0 records out 00:24:47.959 10240000 bytes (10 MB, 9.8 MiB) copied, 0.0218471 s, 469 MB/s 00:24:47.959 21:47:08 -- interrupt/interrupt_common.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/interrupt/aiofile AIO0 2048 00:24:47.959 AIO0 00:24:48.218 21:47:08 -- interrupt/reactor_set_interrupt.sh@101 -- # reactor_set_mode_with_threads 87438 00:24:48.218 21:47:08 -- interrupt/reactor_set_interrupt.sh@81 -- # reactor_set_intr_mode 87438 00:24:48.219 21:47:08 -- interrupt/reactor_set_interrupt.sh@14 -- # local spdk_pid=87438 00:24:48.219 21:47:08 -- interrupt/reactor_set_interrupt.sh@15 -- # local without_thd= 00:24:48.219 21:47:08 -- interrupt/reactor_set_interrupt.sh@17 -- # thd0_ids=($(reactor_get_thread_ids $r0_mask)) 00:24:48.219 21:47:08 -- interrupt/reactor_set_interrupt.sh@17 -- # reactor_get_thread_ids 0x1 00:24:48.219 21:47:08 -- interrupt/interrupt_common.sh@78 -- # local reactor_cpumask=0x1 00:24:48.219 21:47:08 -- interrupt/interrupt_common.sh@79 -- # local grep_str 00:24:48.219 21:47:08 -- interrupt/interrupt_common.sh@81 -- # reactor_cpumask=1 00:24:48.219 21:47:08 -- interrupt/interrupt_common.sh@82 -- # jq_str='.threads|.[]|select(.cpumask == $reactor_cpumask)|.id' 00:24:48.219 21:47:08 -- interrupt/interrupt_common.sh@85 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py thread_get_stats 00:24:48.219 21:47:08 -- interrupt/interrupt_common.sh@85 -- # jq --arg reactor_cpumask 1 '.threads|.[]|select(.cpumask == $reactor_cpumask)|.id' 00:24:48.219 21:47:08 -- interrupt/interrupt_common.sh@85 -- # echo 1 00:24:48.478 21:47:08 -- interrupt/reactor_set_interrupt.sh@18 -- # thd2_ids=($(reactor_get_thread_ids $r2_mask)) 00:24:48.478 21:47:08 -- interrupt/reactor_set_interrupt.sh@18 -- # reactor_get_thread_ids 0x4 00:24:48.478 21:47:08 -- interrupt/interrupt_common.sh@78 -- # local reactor_cpumask=0x4 00:24:48.478 21:47:08 -- interrupt/interrupt_common.sh@79 -- # local grep_str 00:24:48.478 21:47:08 -- interrupt/interrupt_common.sh@81 -- # reactor_cpumask=4 00:24:48.478 21:47:08 -- interrupt/interrupt_common.sh@82 -- # jq_str='.threads|.[]|select(.cpumask == $reactor_cpumask)|.id' 00:24:48.478 21:47:08 -- interrupt/interrupt_common.sh@85 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py thread_get_stats 00:24:48.478 21:47:08 -- interrupt/interrupt_common.sh@85 -- # jq --arg reactor_cpumask 4 '.threads|.[]|select(.cpumask == $reactor_cpumask)|.id' 00:24:48.478 21:47:08 -- interrupt/interrupt_common.sh@85 -- # echo '' 00:24:48.478 spdk_thread ids are 1 on reactor0. 
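setup_bdev_aio above backs a bdev with a plain file: dd writes 5000 blocks of 2048 bytes (10 MB of zeroes), and bdev_aio_create registers that file as AIO0 with a 2048-byte block size. The same two steps in isolation, with the paths from this run (the preceding uname -s guard exists because this step is skipped on FreeBSD):

aiofile=/home/vagrant/spdk_repo/spdk/test/interrupt/aiofile
dd if=/dev/zero of="$aiofile" bs=2048 count=5000
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create "$aiofile" AIO0 2048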
00:24:48.478 21:47:08 -- interrupt/reactor_set_interrupt.sh@21 -- # [[ 1 -eq 0 ]] 00:24:48.478 21:47:08 -- interrupt/reactor_set_interrupt.sh@25 -- # echo 'spdk_thread ids are 1 on reactor0.' 00:24:48.478 21:47:08 -- interrupt/reactor_set_interrupt.sh@29 -- # for i in {0..2} 00:24:48.478 21:47:08 -- interrupt/reactor_set_interrupt.sh@30 -- # reactor_is_idle 87438 0 00:24:48.478 21:47:08 -- interrupt/interrupt_common.sh@74 -- # reactor_is_busy_or_idle 87438 0 idle 00:24:48.478 21:47:08 -- interrupt/interrupt_common.sh@33 -- # local pid=87438 00:24:48.478 21:47:08 -- interrupt/interrupt_common.sh@34 -- # local idx=0 00:24:48.478 21:47:08 -- interrupt/interrupt_common.sh@35 -- # local state=idle 00:24:48.478 21:47:08 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \b\u\s\y ]] 00:24:48.478 21:47:08 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \i\d\l\e ]] 00:24:48.478 21:47:08 -- interrupt/interrupt_common.sh@41 -- # hash top 00:24:48.478 21:47:08 -- interrupt/interrupt_common.sh@46 -- # (( j = 10 )) 00:24:48.478 21:47:08 -- interrupt/interrupt_common.sh@46 -- # (( j != 0 )) 00:24:48.478 21:47:08 -- interrupt/interrupt_common.sh@47 -- # grep reactor_0 00:24:48.478 21:47:08 -- interrupt/interrupt_common.sh@47 -- # top -bHn 1 -p 87438 -w 256 00:24:48.737 21:47:09 -- interrupt/interrupt_common.sh@47 -- # top_reactor=' 87438 root 20 0 20.1t 148992 30080 S 0.0 1.2 0:00.59 reactor_0' 00:24:48.737 21:47:09 -- interrupt/interrupt_common.sh@48 -- # echo 87438 root 20 0 20.1t 148992 30080 S 0.0 1.2 0:00.59 reactor_0 00:24:48.737 21:47:09 -- interrupt/interrupt_common.sh@48 -- # sed -e 's/^\s*//g' 00:24:48.737 21:47:09 -- interrupt/interrupt_common.sh@48 -- # awk '{print $9}' 00:24:48.737 21:47:09 -- interrupt/interrupt_common.sh@48 -- # cpu_rate=0.0 00:24:48.737 21:47:09 -- interrupt/interrupt_common.sh@49 -- # cpu_rate=0 00:24:48.737 21:47:09 -- interrupt/interrupt_common.sh@51 -- # [[ idle = \b\u\s\y ]] 00:24:48.737 21:47:09 -- interrupt/interrupt_common.sh@53 -- # [[ idle = \i\d\l\e ]] 00:24:48.737 21:47:09 -- interrupt/interrupt_common.sh@53 -- # [[ 0 -gt 30 ]] 00:24:48.737 21:47:09 -- interrupt/interrupt_common.sh@56 -- # return 0 00:24:48.737 21:47:09 -- interrupt/reactor_set_interrupt.sh@29 -- # for i in {0..2} 00:24:48.737 21:47:09 -- interrupt/reactor_set_interrupt.sh@30 -- # reactor_is_idle 87438 1 00:24:48.737 21:47:09 -- interrupt/interrupt_common.sh@74 -- # reactor_is_busy_or_idle 87438 1 idle 00:24:48.737 21:47:09 -- interrupt/interrupt_common.sh@33 -- # local pid=87438 00:24:48.737 21:47:09 -- interrupt/interrupt_common.sh@34 -- # local idx=1 00:24:48.737 21:47:09 -- interrupt/interrupt_common.sh@35 -- # local state=idle 00:24:48.737 21:47:09 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \b\u\s\y ]] 00:24:48.737 21:47:09 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \i\d\l\e ]] 00:24:48.737 21:47:09 -- interrupt/interrupt_common.sh@41 -- # hash top 00:24:48.737 21:47:09 -- interrupt/interrupt_common.sh@46 -- # (( j = 10 )) 00:24:48.737 21:47:09 -- interrupt/interrupt_common.sh@46 -- # (( j != 0 )) 00:24:48.737 21:47:09 -- interrupt/interrupt_common.sh@47 -- # top -bHn 1 -p 87438 -w 256 00:24:48.737 21:47:09 -- interrupt/interrupt_common.sh@47 -- # grep reactor_1 00:24:48.997 21:47:09 -- interrupt/interrupt_common.sh@47 -- # top_reactor=' 87441 root 20 0 20.1t 148992 30080 S 0.0 1.2 0:00.00 reactor_1' 00:24:48.997 21:47:09 -- interrupt/interrupt_common.sh@48 -- # echo 87441 root 20 0 20.1t 148992 30080 S 0.0 1.2 0:00.00 reactor_1 00:24:48.997 21:47:09 -- 
interrupt/interrupt_common.sh@48 -- # sed -e 's/^\s*//g' 00:24:48.997 21:47:09 -- interrupt/interrupt_common.sh@48 -- # awk '{print $9}' 00:24:48.997 21:47:09 -- interrupt/interrupt_common.sh@48 -- # cpu_rate=0.0 00:24:48.997 21:47:09 -- interrupt/interrupt_common.sh@49 -- # cpu_rate=0 00:24:48.997 21:47:09 -- interrupt/interrupt_common.sh@51 -- # [[ idle = \b\u\s\y ]] 00:24:48.997 21:47:09 -- interrupt/interrupt_common.sh@53 -- # [[ idle = \i\d\l\e ]] 00:24:48.997 21:47:09 -- interrupt/interrupt_common.sh@53 -- # [[ 0 -gt 30 ]] 00:24:48.997 21:47:09 -- interrupt/interrupt_common.sh@56 -- # return 0 00:24:48.997 21:47:09 -- interrupt/reactor_set_interrupt.sh@29 -- # for i in {0..2} 00:24:48.997 21:47:09 -- interrupt/reactor_set_interrupt.sh@30 -- # reactor_is_idle 87438 2 00:24:48.997 21:47:09 -- interrupt/interrupt_common.sh@74 -- # reactor_is_busy_or_idle 87438 2 idle 00:24:48.997 21:47:09 -- interrupt/interrupt_common.sh@33 -- # local pid=87438 00:24:48.997 21:47:09 -- interrupt/interrupt_common.sh@34 -- # local idx=2 00:24:48.997 21:47:09 -- interrupt/interrupt_common.sh@35 -- # local state=idle 00:24:48.997 21:47:09 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \b\u\s\y ]] 00:24:48.997 21:47:09 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \i\d\l\e ]] 00:24:48.997 21:47:09 -- interrupt/interrupt_common.sh@41 -- # hash top 00:24:48.997 21:47:09 -- interrupt/interrupt_common.sh@46 -- # (( j = 10 )) 00:24:48.997 21:47:09 -- interrupt/interrupt_common.sh@46 -- # (( j != 0 )) 00:24:48.997 21:47:09 -- interrupt/interrupt_common.sh@47 -- # top -bHn 1 -p 87438 -w 256 00:24:48.997 21:47:09 -- interrupt/interrupt_common.sh@47 -- # grep reactor_2 00:24:49.257 21:47:09 -- interrupt/interrupt_common.sh@47 -- # top_reactor=' 87442 root 20 0 20.1t 148992 30080 S 0.0 1.2 0:00.00 reactor_2' 00:24:49.257 21:47:09 -- interrupt/interrupt_common.sh@48 -- # echo 87442 root 20 0 20.1t 148992 30080 S 0.0 1.2 0:00.00 reactor_2 00:24:49.257 21:47:09 -- interrupt/interrupt_common.sh@48 -- # sed -e 's/^\s*//g' 00:24:49.257 21:47:09 -- interrupt/interrupt_common.sh@48 -- # awk '{print $9}' 00:24:49.257 21:47:09 -- interrupt/interrupt_common.sh@48 -- # cpu_rate=0.0 00:24:49.257 21:47:09 -- interrupt/interrupt_common.sh@49 -- # cpu_rate=0 00:24:49.257 21:47:09 -- interrupt/interrupt_common.sh@51 -- # [[ idle = \b\u\s\y ]] 00:24:49.257 21:47:09 -- interrupt/interrupt_common.sh@53 -- # [[ idle = \i\d\l\e ]] 00:24:49.257 21:47:09 -- interrupt/interrupt_common.sh@53 -- # [[ 0 -gt 30 ]] 00:24:49.257 21:47:09 -- interrupt/interrupt_common.sh@56 -- # return 0 00:24:49.257 21:47:09 -- interrupt/reactor_set_interrupt.sh@33 -- # '[' x '!=' x ']' 00:24:49.257 21:47:09 -- interrupt/reactor_set_interrupt.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 0 -d 00:24:49.516 [2024-12-06 21:47:09.816843] interrupt_tgt.c: 61:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to disable interrupt mode on reactor 0. 00:24:49.516 [2024-12-06 21:47:09.817103] thread.c:2087:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to poll mode from intr mode. 
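reactor_set_interrupt_mode is the RPC that flips a reactor between modes: -d disables interrupt handling (the reactor drops to poll/busy mode) and omitting it switches back, with the target logging the spdk_thread transition as seen above. Issued standalone, exactly as this test does (rpc.py must be able to import the interrupt_plugin module, which the harness arranges):

rpc() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py --plugin interrupt_plugin "$@"; }
rpc reactor_set_interrupt_mode 0 -d   # reactor 0: interrupt -> poll (busy)
rpc reactor_set_interrupt_mode 0      # reactor 0: back to interrupt mode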
00:24:49.516 [2024-12-06 21:47:09.817721] interrupt_tgt.c: 32:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch 00:24:49.516 21:47:09 -- interrupt/reactor_set_interrupt.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 2 -d 00:24:49.774 [2024-12-06 21:47:10.072742] interrupt_tgt.c: 61:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to disable interrupt mode on reactor 2. 00:24:49.774 [2024-12-06 21:47:10.073214] interrupt_tgt.c: 32:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch 00:24:49.774 21:47:10 -- interrupt/reactor_set_interrupt.sh@46 -- # for i in 0 2 00:24:49.774 21:47:10 -- interrupt/reactor_set_interrupt.sh@47 -- # reactor_is_busy 87438 0 00:24:49.774 21:47:10 -- interrupt/interrupt_common.sh@70 -- # reactor_is_busy_or_idle 87438 0 busy 00:24:49.774 21:47:10 -- interrupt/interrupt_common.sh@33 -- # local pid=87438 00:24:49.774 21:47:10 -- interrupt/interrupt_common.sh@34 -- # local idx=0 00:24:49.774 21:47:10 -- interrupt/interrupt_common.sh@35 -- # local state=busy 00:24:49.774 21:47:10 -- interrupt/interrupt_common.sh@37 -- # [[ busy != \b\u\s\y ]] 00:24:49.774 21:47:10 -- interrupt/interrupt_common.sh@41 -- # hash top 00:24:49.774 21:47:10 -- interrupt/interrupt_common.sh@46 -- # (( j = 10 )) 00:24:49.774 21:47:10 -- interrupt/interrupt_common.sh@46 -- # (( j != 0 )) 00:24:49.774 21:47:10 -- interrupt/interrupt_common.sh@47 -- # top -bHn 1 -p 87438 -w 256 00:24:49.774 21:47:10 -- interrupt/interrupt_common.sh@47 -- # grep reactor_0 00:24:50.033 21:47:10 -- interrupt/interrupt_common.sh@47 -- # top_reactor=' 87438 root 20 0 20.1t 152320 30080 R 99.9 1.2 0:01.10 reactor_0' 00:24:50.033 21:47:10 -- interrupt/interrupt_common.sh@48 -- # echo 87438 root 20 0 20.1t 152320 30080 R 99.9 1.2 0:01.10 reactor_0 00:24:50.033 21:47:10 -- interrupt/interrupt_common.sh@48 -- # sed -e 's/^\s*//g' 00:24:50.033 21:47:10 -- interrupt/interrupt_common.sh@48 -- # awk '{print $9}' 00:24:50.033 21:47:10 -- interrupt/interrupt_common.sh@48 -- # cpu_rate=99.9 00:24:50.033 21:47:10 -- interrupt/interrupt_common.sh@49 -- # cpu_rate=99 00:24:50.033 21:47:10 -- interrupt/interrupt_common.sh@51 -- # [[ busy = \b\u\s\y ]] 00:24:50.033 21:47:10 -- interrupt/interrupt_common.sh@51 -- # [[ 99 -lt 70 ]] 00:24:50.033 21:47:10 -- interrupt/interrupt_common.sh@53 -- # [[ busy = \i\d\l\e ]] 00:24:50.033 21:47:10 -- interrupt/interrupt_common.sh@56 -- # return 0 00:24:50.033 21:47:10 -- interrupt/reactor_set_interrupt.sh@46 -- # for i in 0 2 00:24:50.033 21:47:10 -- interrupt/reactor_set_interrupt.sh@47 -- # reactor_is_busy 87438 2 00:24:50.033 21:47:10 -- interrupt/interrupt_common.sh@70 -- # reactor_is_busy_or_idle 87438 2 busy 00:24:50.033 21:47:10 -- interrupt/interrupt_common.sh@33 -- # local pid=87438 00:24:50.033 21:47:10 -- interrupt/interrupt_common.sh@34 -- # local idx=2 00:24:50.033 21:47:10 -- interrupt/interrupt_common.sh@35 -- # local state=busy 00:24:50.033 21:47:10 -- interrupt/interrupt_common.sh@37 -- # [[ busy != \b\u\s\y ]] 00:24:50.033 21:47:10 -- interrupt/interrupt_common.sh@41 -- # hash top 00:24:50.033 21:47:10 -- interrupt/interrupt_common.sh@46 -- # (( j = 10 )) 00:24:50.033 21:47:10 -- interrupt/interrupt_common.sh@46 -- # (( j != 0 )) 00:24:50.033 21:47:10 -- interrupt/interrupt_common.sh@47 -- # top -bHn 1 -p 87438 -w 256 00:24:50.033 21:47:10 -- interrupt/interrupt_common.sh@47 -- # grep reactor_2 00:24:50.033 21:47:10 -- interrupt/interrupt_common.sh@47 -- # top_reactor=' 87442 root 
20 0 20.1t 152320 30080 R 99.9 1.2 0:00.44 reactor_2' 00:24:50.292 21:47:10 -- interrupt/interrupt_common.sh@48 -- # echo 87442 root 20 0 20.1t 152320 30080 R 99.9 1.2 0:00.44 reactor_2 00:24:50.292 21:47:10 -- interrupt/interrupt_common.sh@48 -- # sed -e 's/^\s*//g' 00:24:50.292 21:47:10 -- interrupt/interrupt_common.sh@48 -- # awk '{print $9}' 00:24:50.292 21:47:10 -- interrupt/interrupt_common.sh@48 -- # cpu_rate=99.9 00:24:50.292 21:47:10 -- interrupt/interrupt_common.sh@49 -- # cpu_rate=99 00:24:50.292 21:47:10 -- interrupt/interrupt_common.sh@51 -- # [[ busy = \b\u\s\y ]] 00:24:50.292 21:47:10 -- interrupt/interrupt_common.sh@51 -- # [[ 99 -lt 70 ]] 00:24:50.292 21:47:10 -- interrupt/interrupt_common.sh@53 -- # [[ busy = \i\d\l\e ]] 00:24:50.292 21:47:10 -- interrupt/interrupt_common.sh@56 -- # return 0 00:24:50.292 21:47:10 -- interrupt/reactor_set_interrupt.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 2 00:24:50.292 [2024-12-06 21:47:10.769008] interrupt_tgt.c: 61:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to enable interrupt mode on reactor 2. 00:24:50.292 [2024-12-06 21:47:10.769437] interrupt_tgt.c: 32:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch 00:24:50.292 21:47:10 -- interrupt/reactor_set_interrupt.sh@52 -- # '[' x '!=' x ']' 00:24:50.292 21:47:10 -- interrupt/reactor_set_interrupt.sh@59 -- # reactor_is_idle 87438 2 00:24:50.292 21:47:10 -- interrupt/interrupt_common.sh@74 -- # reactor_is_busy_or_idle 87438 2 idle 00:24:50.292 21:47:10 -- interrupt/interrupt_common.sh@33 -- # local pid=87438 00:24:50.292 21:47:10 -- interrupt/interrupt_common.sh@34 -- # local idx=2 00:24:50.292 21:47:10 -- interrupt/interrupt_common.sh@35 -- # local state=idle 00:24:50.292 21:47:10 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \b\u\s\y ]] 00:24:50.292 21:47:10 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \i\d\l\e ]] 00:24:50.292 21:47:10 -- interrupt/interrupt_common.sh@41 -- # hash top 00:24:50.292 21:47:10 -- interrupt/interrupt_common.sh@46 -- # (( j = 10 )) 00:24:50.292 21:47:10 -- interrupt/interrupt_common.sh@46 -- # (( j != 0 )) 00:24:50.550 21:47:10 -- interrupt/interrupt_common.sh@47 -- # top -bHn 1 -p 87438 -w 256 00:24:50.550 21:47:10 -- interrupt/interrupt_common.sh@47 -- # grep reactor_2 00:24:50.550 21:47:10 -- interrupt/interrupt_common.sh@47 -- # top_reactor=' 87442 root 20 0 20.1t 152320 30080 S 0.0 1.2 0:00.68 reactor_2' 00:24:50.550 21:47:11 -- interrupt/interrupt_common.sh@48 -- # echo 87442 root 20 0 20.1t 152320 30080 S 0.0 1.2 0:00.68 reactor_2 00:24:50.550 21:47:11 -- interrupt/interrupt_common.sh@48 -- # sed -e 's/^\s*//g' 00:24:50.550 21:47:11 -- interrupt/interrupt_common.sh@48 -- # awk '{print $9}' 00:24:50.550 21:47:11 -- interrupt/interrupt_common.sh@48 -- # cpu_rate=0.0 00:24:50.550 21:47:11 -- interrupt/interrupt_common.sh@49 -- # cpu_rate=0 00:24:50.550 21:47:11 -- interrupt/interrupt_common.sh@51 -- # [[ idle = \b\u\s\y ]] 00:24:50.550 21:47:11 -- interrupt/interrupt_common.sh@53 -- # [[ idle = \i\d\l\e ]] 00:24:50.550 21:47:11 -- interrupt/interrupt_common.sh@53 -- # [[ 0 -gt 30 ]] 00:24:50.550 21:47:11 -- interrupt/interrupt_common.sh@56 -- # return 0 00:24:50.550 21:47:11 -- interrupt/reactor_set_interrupt.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 0 00:24:50.809 [2024-12-06 21:47:11.177086] interrupt_tgt.c: 61:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to enable interrupt 
mode on reactor 0. 00:24:50.809 [2024-12-06 21:47:11.177667] thread.c:2087:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from poll mode. 00:24:50.809 [2024-12-06 21:47:11.177704] interrupt_tgt.c: 32:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch 00:24:50.809 21:47:11 -- interrupt/reactor_set_interrupt.sh@63 -- # '[' x '!=' x ']' 00:24:50.809 21:47:11 -- interrupt/reactor_set_interrupt.sh@70 -- # reactor_is_idle 87438 0 00:24:50.809 21:47:11 -- interrupt/interrupt_common.sh@74 -- # reactor_is_busy_or_idle 87438 0 idle 00:24:50.809 21:47:11 -- interrupt/interrupt_common.sh@33 -- # local pid=87438 00:24:50.809 21:47:11 -- interrupt/interrupt_common.sh@34 -- # local idx=0 00:24:50.809 21:47:11 -- interrupt/interrupt_common.sh@35 -- # local state=idle 00:24:50.809 21:47:11 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \b\u\s\y ]] 00:24:50.809 21:47:11 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \i\d\l\e ]] 00:24:50.809 21:47:11 -- interrupt/interrupt_common.sh@41 -- # hash top 00:24:50.809 21:47:11 -- interrupt/interrupt_common.sh@46 -- # (( j = 10 )) 00:24:50.809 21:47:11 -- interrupt/interrupt_common.sh@46 -- # (( j != 0 )) 00:24:50.809 21:47:11 -- interrupt/interrupt_common.sh@47 -- # top -bHn 1 -p 87438 -w 256 00:24:50.809 21:47:11 -- interrupt/interrupt_common.sh@47 -- # grep reactor_0 00:24:51.069 21:47:11 -- interrupt/interrupt_common.sh@47 -- # top_reactor=' 87438 root 20 0 20.1t 152320 30080 S 10.0 1.2 0:01.98 reactor_0' 00:24:51.069 21:47:11 -- interrupt/interrupt_common.sh@48 -- # echo 87438 root 20 0 20.1t 152320 30080 S 10.0 1.2 0:01.98 reactor_0 00:24:51.069 21:47:11 -- interrupt/interrupt_common.sh@48 -- # sed -e 's/^\s*//g' 00:24:51.069 21:47:11 -- interrupt/interrupt_common.sh@48 -- # awk '{print $9}' 00:24:51.069 21:47:11 -- interrupt/interrupt_common.sh@48 -- # cpu_rate=10.0 00:24:51.069 21:47:11 -- interrupt/interrupt_common.sh@49 -- # cpu_rate=10 00:24:51.069 21:47:11 -- interrupt/interrupt_common.sh@51 -- # [[ idle = \b\u\s\y ]] 00:24:51.069 21:47:11 -- interrupt/interrupt_common.sh@53 -- # [[ idle = \i\d\l\e ]] 00:24:51.069 21:47:11 -- interrupt/interrupt_common.sh@53 -- # [[ 10 -gt 30 ]] 00:24:51.069 21:47:11 -- interrupt/interrupt_common.sh@56 -- # return 0 00:24:51.069 21:47:11 -- interrupt/reactor_set_interrupt.sh@72 -- # return 0 00:24:51.069 21:47:11 -- interrupt/reactor_set_interrupt.sh@82 -- # return 0 00:24:51.069 21:47:11 -- interrupt/reactor_set_interrupt.sh@103 -- # trap - SIGINT SIGTERM EXIT 00:24:51.069 21:47:11 -- interrupt/reactor_set_interrupt.sh@104 -- # killprocess 87438 00:24:51.069 21:47:11 -- common/autotest_common.sh@936 -- # '[' -z 87438 ']' 00:24:51.069 21:47:11 -- common/autotest_common.sh@940 -- # kill -0 87438 00:24:51.069 21:47:11 -- common/autotest_common.sh@941 -- # uname 00:24:51.069 21:47:11 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:24:51.069 21:47:11 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 87438 00:24:51.069 21:47:11 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:24:51.069 21:47:11 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:24:51.069 killing process with pid 87438 00:24:51.069 21:47:11 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 87438' 00:24:51.069 21:47:11 -- common/autotest_common.sh@955 -- # kill 87438 00:24:51.069 21:47:11 -- common/autotest_common.sh@960 -- # wait 87438 00:24:52.447 21:47:12 -- interrupt/reactor_set_interrupt.sh@105 -- # cleanup 
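killprocess, traced above for pid 87438 (and for 87295 earlier), verifies a pid before tearing it down: kill -0 confirms it is alive, ps -o comm= records what it is (reactor_0 here), a guard refuses to kill sudo, then kill followed by wait reaps the target. A condensed sketch of that helper (wait can only reap children of the calling shell, which holds in this harness because the shell launched the target):

killprocess() {
    local pid=$1
    kill -0 "$pid" || return 1                    # still alive?
    local name
    name=$(ps --no-headers -o comm= "$pid")       # e.g. reactor_0
    [ "$name" = sudo ] && return 1                # never kill sudo itself
    echo "killing process with pid $pid"
    kill "$pid" && wait "$pid"
}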
00:24:52.447 21:47:12 -- interrupt/interrupt_common.sh@19 -- # rm -f /home/vagrant/spdk_repo/spdk/test/interrupt/aiofile 00:24:52.447 ************************************ 00:24:52.447 END TEST reactor_set_interrupt 00:24:52.447 ************************************ 00:24:52.447 00:24:52.447 real 0m12.109s 00:24:52.447 user 0m11.802s 00:24:52.447 sys 0m1.668s 00:24:52.447 21:47:12 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:24:52.447 21:47:12 -- common/autotest_common.sh@10 -- # set +x 00:24:52.447 21:47:12 -- spdk/autotest.sh@187 -- # run_test reap_unregistered_poller /home/vagrant/spdk_repo/spdk/test/interrupt/reap_unregistered_poller.sh 00:24:52.447 21:47:12 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:24:52.447 21:47:12 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:24:52.447 21:47:12 -- common/autotest_common.sh@10 -- # set +x 00:24:52.447 ************************************ 00:24:52.447 START TEST reap_unregistered_poller 00:24:52.447 ************************************ 00:24:52.447 21:47:12 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/interrupt/reap_unregistered_poller.sh 00:24:52.447 * Looking for test storage... 00:24:52.447 * Found test storage at /home/vagrant/spdk_repo/spdk/test/interrupt 00:24:52.447 21:47:12 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:24:52.447 21:47:12 -- common/autotest_common.sh@1690 -- # lcov --version 00:24:52.447 21:47:12 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:24:52.447 21:47:12 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:24:52.447 21:47:12 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:24:52.447 21:47:12 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:24:52.447 21:47:12 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:24:52.447 21:47:12 -- scripts/common.sh@335 -- # IFS=.-: 00:24:52.447 21:47:12 -- scripts/common.sh@335 -- # read -ra ver1 00:24:52.447 21:47:12 -- scripts/common.sh@336 -- # IFS=.-: 00:24:52.447 21:47:12 -- scripts/common.sh@336 -- # read -ra ver2 00:24:52.447 21:47:12 -- scripts/common.sh@337 -- # local 'op=<' 00:24:52.447 21:47:12 -- scripts/common.sh@339 -- # ver1_l=2 00:24:52.447 21:47:12 -- scripts/common.sh@340 -- # ver2_l=1 00:24:52.447 21:47:12 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:24:52.447 21:47:12 -- scripts/common.sh@343 -- # case "$op" in 00:24:52.447 21:47:12 -- scripts/common.sh@344 -- # : 1 00:24:52.447 21:47:12 -- scripts/common.sh@363 -- # (( v = 0 )) 00:24:52.447 21:47:12 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:52.447 21:47:12 -- scripts/common.sh@364 -- # decimal 1 00:24:52.447 21:47:12 -- scripts/common.sh@352 -- # local d=1 00:24:52.447 21:47:12 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:52.447 21:47:12 -- scripts/common.sh@354 -- # echo 1 00:24:52.447 21:47:12 -- scripts/common.sh@364 -- # ver1[v]=1 00:24:52.447 21:47:12 -- scripts/common.sh@365 -- # decimal 2 00:24:52.447 21:47:12 -- scripts/common.sh@352 -- # local d=2 00:24:52.447 21:47:12 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:52.447 21:47:12 -- scripts/common.sh@354 -- # echo 2 00:24:52.447 21:47:12 -- scripts/common.sh@365 -- # ver2[v]=2 00:24:52.447 21:47:12 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:24:52.447 21:47:12 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:24:52.447 21:47:12 -- scripts/common.sh@367 -- # return 0 00:24:52.447 21:47:12 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:52.447 21:47:12 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:24:52.447 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:52.447 --rc genhtml_branch_coverage=1 00:24:52.447 --rc genhtml_function_coverage=1 00:24:52.447 --rc genhtml_legend=1 00:24:52.447 --rc geninfo_all_blocks=1 00:24:52.447 --rc geninfo_unexecuted_blocks=1 00:24:52.447 00:24:52.447 ' 00:24:52.447 21:47:12 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:24:52.447 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:52.447 --rc genhtml_branch_coverage=1 00:24:52.447 --rc genhtml_function_coverage=1 00:24:52.447 --rc genhtml_legend=1 00:24:52.447 --rc geninfo_all_blocks=1 00:24:52.447 --rc geninfo_unexecuted_blocks=1 00:24:52.447 00:24:52.447 ' 00:24:52.447 21:47:12 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:24:52.447 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:52.447 --rc genhtml_branch_coverage=1 00:24:52.447 --rc genhtml_function_coverage=1 00:24:52.447 --rc genhtml_legend=1 00:24:52.447 --rc geninfo_all_blocks=1 00:24:52.447 --rc geninfo_unexecuted_blocks=1 00:24:52.447 00:24:52.447 ' 00:24:52.447 21:47:12 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:24:52.447 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:52.447 --rc genhtml_branch_coverage=1 00:24:52.447 --rc genhtml_function_coverage=1 00:24:52.447 --rc genhtml_legend=1 00:24:52.447 --rc geninfo_all_blocks=1 00:24:52.447 --rc geninfo_unexecuted_blocks=1 00:24:52.447 00:24:52.447 ' 00:24:52.447 21:47:12 -- interrupt/reap_unregistered_poller.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/interrupt/interrupt_common.sh 00:24:52.447 21:47:12 -- interrupt/interrupt_common.sh@5 -- # dirname /home/vagrant/spdk_repo/spdk/test/interrupt/reap_unregistered_poller.sh 00:24:52.447 21:47:12 -- interrupt/interrupt_common.sh@5 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/interrupt 00:24:52.447 21:47:12 -- interrupt/interrupt_common.sh@5 -- # testdir=/home/vagrant/spdk_repo/spdk/test/interrupt 00:24:52.447 21:47:12 -- interrupt/interrupt_common.sh@6 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/interrupt/../.. 
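The lcov version probe traced just above (scripts/common.sh, lt 1.15 2) compares versions by splitting each string on dots and walking the fields numerically, treating a missing field as zero. Reduced to a self-contained sketch (the real cmp_versions additionally normalizes each field through its decimal helper):

version_lt() {   # usage: version_lt A B -> true iff version A < version B
    local IFS=.
    local -a a=($1) b=($2)
    local i x y
    for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
        x=${a[i]:-0} y=${b[i]:-0}
        ((x < y)) && return 0
        ((x > y)) && return 1
    done
    return 1   # equal is not less-than
}
version_lt 1.15 2 && echo "1.15 < 2"   # matches the comparison traced above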
00:24:52.447 21:47:12 -- interrupt/interrupt_common.sh@6 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:24:52.447 21:47:12 -- interrupt/interrupt_common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh 00:24:52.447 21:47:12 -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:24:52.447 21:47:12 -- common/autotest_common.sh@34 -- # set -e 00:24:52.447 21:47:12 -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:24:52.447 21:47:12 -- common/autotest_common.sh@36 -- # shopt -s extglob 00:24:52.447 21:47:12 -- common/autotest_common.sh@38 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]] 00:24:52.447 21:47:12 -- common/autotest_common.sh@39 -- # source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh 00:24:52.447 21:47:12 -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:24:52.447 21:47:12 -- common/build_config.sh@2 -- # CONFIG_ASAN=y 00:24:52.447 21:47:12 -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:24:52.447 21:47:12 -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:24:52.447 21:47:12 -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:24:52.447 21:47:12 -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:24:52.447 21:47:12 -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:24:52.447 21:47:12 -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:24:52.447 21:47:12 -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:24:52.447 21:47:12 -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:24:52.447 21:47:12 -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:24:52.447 21:47:12 -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:24:52.447 21:47:12 -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:24:52.447 21:47:12 -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:24:52.447 21:47:12 -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:24:52.447 21:47:12 -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:24:52.447 21:47:12 -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:24:52.447 21:47:12 -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:24:52.447 21:47:12 -- common/build_config.sh@19 -- # CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:24:52.447 21:47:12 -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:24:52.447 21:47:12 -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:24:52.447 21:47:12 -- common/build_config.sh@22 -- # CONFIG_CET=n 00:24:52.447 21:47:12 -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:24:52.447 21:47:12 -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:24:52.447 21:47:12 -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:24:52.447 21:47:12 -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:24:52.447 21:47:12 -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:24:52.447 21:47:12 -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:24:52.447 21:47:12 -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:24:52.447 21:47:12 -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:24:52.447 21:47:12 -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:24:52.447 21:47:12 -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:24:52.447 21:47:12 -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:24:52.447 21:47:12 -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:24:52.447 21:47:12 -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:24:52.447 21:47:12 -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build 00:24:52.447 
21:47:12 -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:24:52.447 21:47:12 -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:24:52.447 21:47:12 -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:24:52.447 21:47:12 -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:24:52.447 21:47:12 -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR= 00:24:52.447 21:47:12 -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:24:52.447 21:47:12 -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=y 00:24:52.447 21:47:12 -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:24:52.447 21:47:12 -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:24:52.447 21:47:12 -- common/build_config.sh@46 -- # CONFIG_COVERAGE=y 00:24:52.447 21:47:12 -- common/build_config.sh@47 -- # CONFIG_RDMA=y 00:24:52.447 21:47:12 -- common/build_config.sh@48 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:24:52.447 21:47:12 -- common/build_config.sh@49 -- # CONFIG_URING_PATH= 00:24:52.447 21:47:12 -- common/build_config.sh@50 -- # CONFIG_XNVME=n 00:24:52.447 21:47:12 -- common/build_config.sh@51 -- # CONFIG_VFIO_USER=n 00:24:52.447 21:47:12 -- common/build_config.sh@52 -- # CONFIG_ARCH=native 00:24:52.447 21:47:12 -- common/build_config.sh@53 -- # CONFIG_URING_ZNS=n 00:24:52.447 21:47:12 -- common/build_config.sh@54 -- # CONFIG_WERROR=y 00:24:52.447 21:47:12 -- common/build_config.sh@55 -- # CONFIG_HAVE_LIBBSD=n 00:24:52.447 21:47:12 -- common/build_config.sh@56 -- # CONFIG_UBSAN=y 00:24:52.448 21:47:12 -- common/build_config.sh@57 -- # CONFIG_IPSEC_MB_DIR= 00:24:52.448 21:47:12 -- common/build_config.sh@58 -- # CONFIG_GOLANG=n 00:24:52.448 21:47:12 -- common/build_config.sh@59 -- # CONFIG_ISAL=y 00:24:52.448 21:47:12 -- common/build_config.sh@60 -- # CONFIG_IDXD_KERNEL=y 00:24:52.448 21:47:12 -- common/build_config.sh@61 -- # CONFIG_DPDK_LIB_DIR= 00:24:52.448 21:47:12 -- common/build_config.sh@62 -- # CONFIG_RDMA_PROV=verbs 00:24:52.448 21:47:12 -- common/build_config.sh@63 -- # CONFIG_APPS=y 00:24:52.448 21:47:12 -- common/build_config.sh@64 -- # CONFIG_SHARED=n 00:24:52.448 21:47:12 -- common/build_config.sh@65 -- # CONFIG_FC_PATH= 00:24:52.448 21:47:12 -- common/build_config.sh@66 -- # CONFIG_DPDK_PKG_CONFIG=n 00:24:52.448 21:47:12 -- common/build_config.sh@67 -- # CONFIG_FC=n 00:24:52.448 21:47:12 -- common/build_config.sh@68 -- # CONFIG_AVAHI=n 00:24:52.448 21:47:12 -- common/build_config.sh@69 -- # CONFIG_FIO_PLUGIN=y 00:24:52.448 21:47:12 -- common/build_config.sh@70 -- # CONFIG_RAID5F=y 00:24:52.448 21:47:12 -- common/build_config.sh@71 -- # CONFIG_EXAMPLES=y 00:24:52.448 21:47:12 -- common/build_config.sh@72 -- # CONFIG_TESTS=y 00:24:52.448 21:47:12 -- common/build_config.sh@73 -- # CONFIG_CRYPTO_MLX5=n 00:24:52.448 21:47:12 -- common/build_config.sh@74 -- # CONFIG_MAX_LCORES= 00:24:52.448 21:47:12 -- common/build_config.sh@75 -- # CONFIG_IPSEC_MB=n 00:24:52.448 21:47:12 -- common/build_config.sh@76 -- # CONFIG_DEBUG=y 00:24:52.448 21:47:12 -- common/build_config.sh@77 -- # CONFIG_DPDK_COMPRESSDEV=n 00:24:52.448 21:47:12 -- common/build_config.sh@78 -- # CONFIG_CROSS_PREFIX= 00:24:52.448 21:47:12 -- common/build_config.sh@79 -- # CONFIG_URING=n 00:24:52.448 21:47:12 -- common/autotest_common.sh@48 -- # source /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:24:52.448 21:47:12 -- common/applications.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:24:52.448 21:47:12 -- common/applications.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common 00:24:52.448 
21:47:12 -- common/applications.sh@8 -- # _root=/home/vagrant/spdk_repo/spdk/test/common 00:24:52.448 21:47:12 -- common/applications.sh@9 -- # _root=/home/vagrant/spdk_repo/spdk 00:24:52.448 21:47:12 -- common/applications.sh@10 -- # _app_dir=/home/vagrant/spdk_repo/spdk/build/bin 00:24:52.448 21:47:12 -- common/applications.sh@11 -- # _test_app_dir=/home/vagrant/spdk_repo/spdk/test/app 00:24:52.448 21:47:12 -- common/applications.sh@12 -- # _examples_dir=/home/vagrant/spdk_repo/spdk/build/examples 00:24:52.448 21:47:12 -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:24:52.448 21:47:12 -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:24:52.448 21:47:12 -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:24:52.448 21:47:12 -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:24:52.448 21:47:12 -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:24:52.448 21:47:12 -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:24:52.448 21:47:12 -- common/applications.sh@22 -- # [[ -e /home/vagrant/spdk_repo/spdk/include/spdk/config.h ]] 00:24:52.448 21:47:12 -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:24:52.448 #define SPDK_CONFIG_H 00:24:52.448 #define SPDK_CONFIG_APPS 1 00:24:52.448 #define SPDK_CONFIG_ARCH native 00:24:52.448 #define SPDK_CONFIG_ASAN 1 00:24:52.448 #undef SPDK_CONFIG_AVAHI 00:24:52.448 #undef SPDK_CONFIG_CET 00:24:52.448 #define SPDK_CONFIG_COVERAGE 1 00:24:52.448 #define SPDK_CONFIG_CROSS_PREFIX 00:24:52.448 #undef SPDK_CONFIG_CRYPTO 00:24:52.448 #undef SPDK_CONFIG_CRYPTO_MLX5 00:24:52.448 #undef SPDK_CONFIG_CUSTOMOCF 00:24:52.448 #undef SPDK_CONFIG_DAOS 00:24:52.448 #define SPDK_CONFIG_DAOS_DIR 00:24:52.448 #define SPDK_CONFIG_DEBUG 1 00:24:52.448 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:24:52.448 #define SPDK_CONFIG_DPDK_DIR /home/vagrant/spdk_repo/spdk/dpdk/build 00:24:52.448 #define SPDK_CONFIG_DPDK_INC_DIR 00:24:52.448 #define SPDK_CONFIG_DPDK_LIB_DIR 00:24:52.448 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:24:52.448 #define SPDK_CONFIG_ENV /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:24:52.448 #define SPDK_CONFIG_EXAMPLES 1 00:24:52.448 #undef SPDK_CONFIG_FC 00:24:52.448 #define SPDK_CONFIG_FC_PATH 00:24:52.448 #define SPDK_CONFIG_FIO_PLUGIN 1 00:24:52.448 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:24:52.448 #undef SPDK_CONFIG_FUSE 00:24:52.448 #undef SPDK_CONFIG_FUZZER 00:24:52.448 #define SPDK_CONFIG_FUZZER_LIB 00:24:52.448 #undef SPDK_CONFIG_GOLANG 00:24:52.448 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:24:52.448 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:24:52.448 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:24:52.448 #undef SPDK_CONFIG_HAVE_LIBBSD 00:24:52.448 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:24:52.448 #define SPDK_CONFIG_IDXD 1 00:24:52.448 #define SPDK_CONFIG_IDXD_KERNEL 1 00:24:52.448 #undef SPDK_CONFIG_IPSEC_MB 00:24:52.448 #define SPDK_CONFIG_IPSEC_MB_DIR 00:24:52.448 #define SPDK_CONFIG_ISAL 1 00:24:52.448 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:24:52.448 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:24:52.448 #define SPDK_CONFIG_LIBDIR 00:24:52.448 #undef SPDK_CONFIG_LTO 00:24:52.448 #define SPDK_CONFIG_MAX_LCORES 00:24:52.448 #define SPDK_CONFIG_NVME_CUSE 1 00:24:52.448 #undef SPDK_CONFIG_OCF 00:24:52.448 #define SPDK_CONFIG_OCF_PATH 00:24:52.448 #define SPDK_CONFIG_OPENSSL_PATH 00:24:52.448 #undef SPDK_CONFIG_PGO_CAPTURE 00:24:52.448 #undef SPDK_CONFIG_PGO_USE 00:24:52.448 #define SPDK_CONFIG_PREFIX /usr/local 
00:24:52.448 #define SPDK_CONFIG_RAID5F 1 00:24:52.448 #undef SPDK_CONFIG_RBD 00:24:52.448 #define SPDK_CONFIG_RDMA 1 00:24:52.448 #define SPDK_CONFIG_RDMA_PROV verbs 00:24:52.448 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:24:52.448 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:24:52.448 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:24:52.448 #undef SPDK_CONFIG_SHARED 00:24:52.448 #undef SPDK_CONFIG_SMA 00:24:52.448 #define SPDK_CONFIG_TESTS 1 00:24:52.448 #undef SPDK_CONFIG_TSAN 00:24:52.448 #define SPDK_CONFIG_UBLK 1 00:24:52.448 #define SPDK_CONFIG_UBSAN 1 00:24:52.448 #define SPDK_CONFIG_UNIT_TESTS 1 00:24:52.448 #undef SPDK_CONFIG_URING 00:24:52.448 #define SPDK_CONFIG_URING_PATH 00:24:52.448 #undef SPDK_CONFIG_URING_ZNS 00:24:52.448 #undef SPDK_CONFIG_USDT 00:24:52.448 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:24:52.448 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:24:52.448 #undef SPDK_CONFIG_VFIO_USER 00:24:52.448 #define SPDK_CONFIG_VFIO_USER_DIR 00:24:52.448 #define SPDK_CONFIG_VHOST 1 00:24:52.448 #define SPDK_CONFIG_VIRTIO 1 00:24:52.448 #undef SPDK_CONFIG_VTUNE 00:24:52.448 #define SPDK_CONFIG_VTUNE_DIR 00:24:52.448 #define SPDK_CONFIG_WERROR 1 00:24:52.448 #define SPDK_CONFIG_WPDK_DIR 00:24:52.448 #undef SPDK_CONFIG_XNVME 00:24:52.448 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:24:52.448 21:47:12 -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:24:52.448 21:47:12 -- common/autotest_common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:24:52.448 21:47:12 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:52.448 21:47:12 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:52.448 21:47:12 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:52.448 21:47:12 -- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:24:52.448 21:47:12 -- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:24:52.448 21:47:12 -- paths/export.sh@4 -- # PATH=/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:24:52.448 21:47:12 -- paths/export.sh@5 -- # 
PATH=/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:24:52.448 21:47:12 -- paths/export.sh@6 -- # export PATH 00:24:52.448 21:47:12 -- paths/export.sh@7 -- # echo /opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:24:52.448 21:47:12 -- common/autotest_common.sh@50 -- # source /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:24:52.448 21:47:12 -- pm/common@6 -- # dirname /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:24:52.448 21:47:12 -- pm/common@6 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:24:52.448 21:47:12 -- pm/common@6 -- # _pmdir=/home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:24:52.448 21:47:12 -- pm/common@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm/../../../ 00:24:52.448 21:47:12 -- pm/common@7 -- # _pmrootdir=/home/vagrant/spdk_repo/spdk 00:24:52.448 21:47:12 -- pm/common@16 -- # TEST_TAG=N/A 00:24:52.448 21:47:12 -- pm/common@17 -- # TEST_TAG_FILE=/home/vagrant/spdk_repo/spdk/.run_test_name 00:24:52.448 21:47:12 -- common/autotest_common.sh@52 -- # : 1 00:24:52.448 21:47:12 -- common/autotest_common.sh@53 -- # export RUN_NIGHTLY 00:24:52.448 21:47:12 -- common/autotest_common.sh@56 -- # : 0 00:24:52.448 21:47:12 -- common/autotest_common.sh@57 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:24:52.448 21:47:12 -- common/autotest_common.sh@58 -- # : 0 00:24:52.448 21:47:12 -- common/autotest_common.sh@59 -- # export SPDK_RUN_VALGRIND 00:24:52.448 21:47:12 -- common/autotest_common.sh@60 -- # : 1 00:24:52.448 21:47:12 -- common/autotest_common.sh@61 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:24:52.448 21:47:12 -- common/autotest_common.sh@62 -- # : 1 00:24:52.448 21:47:12 -- common/autotest_common.sh@63 -- # export SPDK_TEST_UNITTEST 00:24:52.448 21:47:12 -- common/autotest_common.sh@64 -- # : 00:24:52.448 21:47:12 -- common/autotest_common.sh@65 -- # export SPDK_TEST_AUTOBUILD 00:24:52.448 21:47:12 -- common/autotest_common.sh@66 -- # : 0 00:24:52.449 21:47:12 -- common/autotest_common.sh@67 -- # export SPDK_TEST_RELEASE_BUILD 00:24:52.449 21:47:12 -- common/autotest_common.sh@68 -- # : 0 00:24:52.449 21:47:12 -- common/autotest_common.sh@69 -- # export SPDK_TEST_ISAL 00:24:52.449 21:47:12 -- common/autotest_common.sh@70 -- # : 0 00:24:52.449 21:47:12 -- common/autotest_common.sh@71 -- # export SPDK_TEST_ISCSI 00:24:52.449 21:47:12 -- common/autotest_common.sh@72 -- # : 0 00:24:52.449 21:47:12 -- common/autotest_common.sh@73 -- # export SPDK_TEST_ISCSI_INITIATOR 00:24:52.449 21:47:12 -- common/autotest_common.sh@74 -- # : 1 00:24:52.449 21:47:12 -- common/autotest_common.sh@75 -- # export SPDK_TEST_NVME 
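The `-- # : 0` / `-- # export VAR` pairs running through this stretch of the trace (above and below) are bash's default-and-export idiom from autotest_common.sh: `:` is a no-op, but its argument expansion `${VAR:=default}` assigns when the variable is unset, so each pair of trace lines comes from two statements. A minimal sketch, using two flag names visible in the trace (the exact helper layout in autotest_common.sh may differ):

#!/usr/bin/env bash
# Sketch of the default-and-export pairs traced above: ':' evaluates its
# arguments and does nothing else, while ${VAR:=default} assigns when VAR
# is unset or empty, so "-- # : 0" then "-- # export VAR" is one such pair.
: "${SPDK_TEST_NVME_PMR:=0}"          # keep the caller's value, else default to 0
export SPDK_TEST_NVME_PMR

: "${SPDK_TEST_NVMF_TRANSPORT:=rdma}" # non-boolean flags default the same way
export SPDK_TEST_NVMF_TRANSPORT

echo "SPDK_TEST_NVME_PMR=$SPDK_TEST_NVME_PMR SPDK_TEST_NVMF_TRANSPORT=$SPDK_TEST_NVMF_TRANSPORT"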
00:24:52.449 21:47:12 -- common/autotest_common.sh@76 -- # : 0 00:24:52.449 21:47:12 -- common/autotest_common.sh@77 -- # export SPDK_TEST_NVME_PMR 00:24:52.449 21:47:12 -- common/autotest_common.sh@78 -- # : 0 00:24:52.449 21:47:12 -- common/autotest_common.sh@79 -- # export SPDK_TEST_NVME_BP 00:24:52.449 21:47:12 -- common/autotest_common.sh@80 -- # : 0 00:24:52.449 21:47:12 -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME_CLI 00:24:52.449 21:47:12 -- common/autotest_common.sh@82 -- # : 0 00:24:52.449 21:47:12 -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_CUSE 00:24:52.449 21:47:12 -- common/autotest_common.sh@84 -- # : 0 00:24:52.449 21:47:12 -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_FDP 00:24:52.449 21:47:12 -- common/autotest_common.sh@86 -- # : 0 00:24:52.449 21:47:12 -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVMF 00:24:52.449 21:47:12 -- common/autotest_common.sh@88 -- # : 0 00:24:52.449 21:47:12 -- common/autotest_common.sh@89 -- # export SPDK_TEST_VFIOUSER 00:24:52.449 21:47:12 -- common/autotest_common.sh@90 -- # : 0 00:24:52.449 21:47:12 -- common/autotest_common.sh@91 -- # export SPDK_TEST_VFIOUSER_QEMU 00:24:52.449 21:47:12 -- common/autotest_common.sh@92 -- # : 0 00:24:52.449 21:47:12 -- common/autotest_common.sh@93 -- # export SPDK_TEST_FUZZER 00:24:52.449 21:47:12 -- common/autotest_common.sh@94 -- # : 0 00:24:52.449 21:47:12 -- common/autotest_common.sh@95 -- # export SPDK_TEST_FUZZER_SHORT 00:24:52.449 21:47:12 -- common/autotest_common.sh@96 -- # : rdma 00:24:52.449 21:47:12 -- common/autotest_common.sh@97 -- # export SPDK_TEST_NVMF_TRANSPORT 00:24:52.449 21:47:12 -- common/autotest_common.sh@98 -- # : 0 00:24:52.449 21:47:12 -- common/autotest_common.sh@99 -- # export SPDK_TEST_RBD 00:24:52.449 21:47:12 -- common/autotest_common.sh@100 -- # : 0 00:24:52.449 21:47:12 -- common/autotest_common.sh@101 -- # export SPDK_TEST_VHOST 00:24:52.449 21:47:12 -- common/autotest_common.sh@102 -- # : 1 00:24:52.449 21:47:12 -- common/autotest_common.sh@103 -- # export SPDK_TEST_BLOCKDEV 00:24:52.449 21:47:12 -- common/autotest_common.sh@104 -- # : 0 00:24:52.449 21:47:12 -- common/autotest_common.sh@105 -- # export SPDK_TEST_IOAT 00:24:52.449 21:47:12 -- common/autotest_common.sh@106 -- # : 0 00:24:52.449 21:47:12 -- common/autotest_common.sh@107 -- # export SPDK_TEST_BLOBFS 00:24:52.449 21:47:12 -- common/autotest_common.sh@108 -- # : 0 00:24:52.449 21:47:12 -- common/autotest_common.sh@109 -- # export SPDK_TEST_VHOST_INIT 00:24:52.449 21:47:12 -- common/autotest_common.sh@110 -- # : 0 00:24:52.449 21:47:12 -- common/autotest_common.sh@111 -- # export SPDK_TEST_LVOL 00:24:52.449 21:47:12 -- common/autotest_common.sh@112 -- # : 0 00:24:52.449 21:47:12 -- common/autotest_common.sh@113 -- # export SPDK_TEST_VBDEV_COMPRESS 00:24:52.449 21:47:12 -- common/autotest_common.sh@114 -- # : 1 00:24:52.449 21:47:12 -- common/autotest_common.sh@115 -- # export SPDK_RUN_ASAN 00:24:52.449 21:47:12 -- common/autotest_common.sh@116 -- # : 1 00:24:52.449 21:47:12 -- common/autotest_common.sh@117 -- # export SPDK_RUN_UBSAN 00:24:52.449 21:47:12 -- common/autotest_common.sh@118 -- # : 00:24:52.449 21:47:12 -- common/autotest_common.sh@119 -- # export SPDK_RUN_EXTERNAL_DPDK 00:24:52.449 21:47:12 -- common/autotest_common.sh@120 -- # : 0 00:24:52.449 21:47:12 -- common/autotest_common.sh@121 -- # export SPDK_RUN_NON_ROOT 00:24:52.449 21:47:12 -- common/autotest_common.sh@122 -- # : 0 00:24:52.449 21:47:12 -- common/autotest_common.sh@123 -- # export 
SPDK_TEST_CRYPTO 00:24:52.449 21:47:12 -- common/autotest_common.sh@124 -- # : 0 00:24:52.449 21:47:12 -- common/autotest_common.sh@125 -- # export SPDK_TEST_FTL 00:24:52.449 21:47:12 -- common/autotest_common.sh@126 -- # : 0 00:24:52.449 21:47:12 -- common/autotest_common.sh@127 -- # export SPDK_TEST_OCF 00:24:52.449 21:47:12 -- common/autotest_common.sh@128 -- # : 0 00:24:52.449 21:47:12 -- common/autotest_common.sh@129 -- # export SPDK_TEST_VMD 00:24:52.449 21:47:12 -- common/autotest_common.sh@130 -- # : 0 00:24:52.449 21:47:12 -- common/autotest_common.sh@131 -- # export SPDK_TEST_OPAL 00:24:52.449 21:47:12 -- common/autotest_common.sh@132 -- # : 00:24:52.449 21:47:12 -- common/autotest_common.sh@133 -- # export SPDK_TEST_NATIVE_DPDK 00:24:52.449 21:47:12 -- common/autotest_common.sh@134 -- # : true 00:24:52.449 21:47:12 -- common/autotest_common.sh@135 -- # export SPDK_AUTOTEST_X 00:24:52.449 21:47:12 -- common/autotest_common.sh@136 -- # : 1 00:24:52.449 21:47:12 -- common/autotest_common.sh@137 -- # export SPDK_TEST_RAID5 00:24:52.449 21:47:12 -- common/autotest_common.sh@138 -- # : 0 00:24:52.449 21:47:12 -- common/autotest_common.sh@139 -- # export SPDK_TEST_URING 00:24:52.449 21:47:12 -- common/autotest_common.sh@140 -- # : 0 00:24:52.449 21:47:12 -- common/autotest_common.sh@141 -- # export SPDK_TEST_USDT 00:24:52.449 21:47:12 -- common/autotest_common.sh@142 -- # : 0 00:24:52.449 21:47:12 -- common/autotest_common.sh@143 -- # export SPDK_TEST_USE_IGB_UIO 00:24:52.449 21:47:12 -- common/autotest_common.sh@144 -- # : 0 00:24:52.449 21:47:12 -- common/autotest_common.sh@145 -- # export SPDK_TEST_SCHEDULER 00:24:52.449 21:47:12 -- common/autotest_common.sh@146 -- # : 0 00:24:52.449 21:47:12 -- common/autotest_common.sh@147 -- # export SPDK_TEST_SCANBUILD 00:24:52.449 21:47:12 -- common/autotest_common.sh@148 -- # : 00:24:52.449 21:47:12 -- common/autotest_common.sh@149 -- # export SPDK_TEST_NVMF_NICS 00:24:52.449 21:47:12 -- common/autotest_common.sh@150 -- # : 0 00:24:52.449 21:47:12 -- common/autotest_common.sh@151 -- # export SPDK_TEST_SMA 00:24:52.449 21:47:12 -- common/autotest_common.sh@152 -- # : 0 00:24:52.449 21:47:12 -- common/autotest_common.sh@153 -- # export SPDK_TEST_DAOS 00:24:52.449 21:47:12 -- common/autotest_common.sh@154 -- # : 0 00:24:52.449 21:47:12 -- common/autotest_common.sh@155 -- # export SPDK_TEST_XNVME 00:24:52.449 21:47:12 -- common/autotest_common.sh@156 -- # : 0 00:24:52.449 21:47:12 -- common/autotest_common.sh@157 -- # export SPDK_TEST_ACCEL_DSA 00:24:52.449 21:47:12 -- common/autotest_common.sh@158 -- # : 0 00:24:52.449 21:47:12 -- common/autotest_common.sh@159 -- # export SPDK_TEST_ACCEL_IAA 00:24:52.449 21:47:12 -- common/autotest_common.sh@160 -- # : 0 00:24:52.449 21:47:12 -- common/autotest_common.sh@161 -- # export SPDK_TEST_ACCEL_IOAT 00:24:52.449 21:47:12 -- common/autotest_common.sh@163 -- # : 00:24:52.449 21:47:12 -- common/autotest_common.sh@164 -- # export SPDK_TEST_FUZZER_TARGET 00:24:52.449 21:47:12 -- common/autotest_common.sh@165 -- # : 0 00:24:52.449 21:47:12 -- common/autotest_common.sh@166 -- # export SPDK_TEST_NVMF_MDNS 00:24:52.449 21:47:12 -- common/autotest_common.sh@167 -- # : 0 00:24:52.449 21:47:12 -- common/autotest_common.sh@168 -- # export SPDK_JSONRPC_GO_CLIENT 00:24:52.449 21:47:12 -- common/autotest_common.sh@171 -- # export SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:24:52.449 21:47:12 -- common/autotest_common.sh@171 -- # SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:24:52.449 21:47:12 -- 
common/autotest_common.sh@172 -- # export DPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build/lib 00:24:52.449 21:47:12 -- common/autotest_common.sh@172 -- # DPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build/lib 00:24:52.449 21:47:12 -- common/autotest_common.sh@173 -- # export VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:24:52.449 21:47:12 -- common/autotest_common.sh@173 -- # VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:24:52.449 21:47:12 -- common/autotest_common.sh@174 -- # export LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:24:52.449 21:47:12 -- common/autotest_common.sh@174 -- # LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:24:52.449 21:47:12 -- common/autotest_common.sh@177 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:24:52.449 21:47:12 -- common/autotest_common.sh@177 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:24:52.449 21:47:12 -- common/autotest_common.sh@181 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:24:52.449 21:47:12 -- common/autotest_common.sh@181 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:24:52.449 21:47:12 -- common/autotest_common.sh@185 -- # export PYTHONDONTWRITEBYTECODE=1 00:24:52.449 21:47:12 -- common/autotest_common.sh@185 -- # PYTHONDONTWRITEBYTECODE=1 00:24:52.449 21:47:12 -- common/autotest_common.sh@189 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:24:52.449 21:47:12 -- common/autotest_common.sh@189 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:24:52.449 21:47:12 -- common/autotest_common.sh@190 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:24:52.449 21:47:12 -- common/autotest_common.sh@190 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:24:52.449 21:47:12 -- common/autotest_common.sh@194 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:24:52.449 21:47:12 -- common/autotest_common.sh@195 -- # rm -rf /var/tmp/asan_suppression_file 00:24:52.449 21:47:12 -- common/autotest_common.sh@196 -- # cat 00:24:52.449 21:47:12 -- common/autotest_common.sh@222 -- # echo leak:libfuse3.so 00:24:52.449 21:47:12 -- common/autotest_common.sh@224 -- # export 
LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:24:52.449 21:47:12 -- common/autotest_common.sh@224 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:24:52.449 21:47:12 -- common/autotest_common.sh@226 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:24:52.449 21:47:12 -- common/autotest_common.sh@226 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:24:52.449 21:47:12 -- common/autotest_common.sh@228 -- # '[' -z /var/spdk/dependencies ']' 00:24:52.449 21:47:12 -- common/autotest_common.sh@231 -- # export DEPENDENCY_DIR 00:24:52.449 21:47:12 -- common/autotest_common.sh@235 -- # export SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:24:52.449 21:47:12 -- common/autotest_common.sh@235 -- # SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:24:52.449 21:47:12 -- common/autotest_common.sh@236 -- # export SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:24:52.449 21:47:12 -- common/autotest_common.sh@236 -- # SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:24:52.450 21:47:12 -- common/autotest_common.sh@239 -- # export QEMU_BIN= 00:24:52.450 21:47:12 -- common/autotest_common.sh@239 -- # QEMU_BIN= 00:24:52.450 21:47:12 -- common/autotest_common.sh@240 -- # export 'VFIO_QEMU_BIN=/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64' 00:24:52.450 21:47:12 -- common/autotest_common.sh@240 -- # VFIO_QEMU_BIN='/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64' 00:24:52.450 21:47:12 -- common/autotest_common.sh@242 -- # export AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:24:52.450 21:47:12 -- common/autotest_common.sh@242 -- # AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:24:52.450 21:47:12 -- common/autotest_common.sh@245 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:24:52.450 21:47:12 -- common/autotest_common.sh@245 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:24:52.450 21:47:12 -- common/autotest_common.sh@247 -- # _LCOV_MAIN=0 00:24:52.450 21:47:12 -- common/autotest_common.sh@248 -- # _LCOV_LLVM=1 00:24:52.450 21:47:12 -- common/autotest_common.sh@249 -- # _LCOV= 00:24:52.450 21:47:12 -- common/autotest_common.sh@250 -- # [[ '' == *clang* ]] 00:24:52.450 21:47:12 -- common/autotest_common.sh@250 -- # [[ 0 -eq 1 ]] 00:24:52.450 21:47:12 -- common/autotest_common.sh@252 -- # _lcov_opt[_LCOV_LLVM]='--gcov-tool /home/vagrant/spdk_repo/spdk/test/fuzz/llvm/llvm-gcov.sh' 00:24:52.450 21:47:12 -- common/autotest_common.sh@253 -- # _lcov_opt[_LCOV_MAIN]= 00:24:52.450 21:47:12 -- common/autotest_common.sh@255 -- # lcov_opt= 00:24:52.450 21:47:12 -- common/autotest_common.sh@258 -- # '[' 0 -eq 0 ']' 00:24:52.450 21:47:12 -- common/autotest_common.sh@259 -- # export valgrind= 00:24:52.450 21:47:12 -- common/autotest_common.sh@259 -- # valgrind= 00:24:52.450 21:47:12 -- common/autotest_common.sh@265 -- # uname -s 00:24:52.450 21:47:12 -- common/autotest_common.sh@265 -- # '[' Linux = Linux ']' 00:24:52.450 21:47:12 -- common/autotest_common.sh@266 -- # HUGEMEM=4096 00:24:52.450 21:47:12 -- common/autotest_common.sh@267 -- # export CLEAR_HUGE=yes 00:24:52.450 21:47:12 -- common/autotest_common.sh@267 -- # CLEAR_HUGE=yes 00:24:52.450 21:47:12 -- common/autotest_common.sh@268 -- # [[ 0 -eq 1 ]] 00:24:52.450 21:47:12 -- common/autotest_common.sh@268 -- # [[ 0 -eq 1 ]] 00:24:52.450 21:47:12 -- common/autotest_common.sh@275 -- # MAKE=make 00:24:52.450 21:47:12 -- common/autotest_common.sh@276 -- # MAKEFLAGS=-j10 00:24:52.450 21:47:12 -- common/autotest_common.sh@292 -- # export HUGEMEM=4096 00:24:52.450 21:47:12 -- 
common/autotest_common.sh@292 -- # HUGEMEM=4096 00:24:52.450 21:47:12 -- common/autotest_common.sh@294 -- # '[' -z /home/vagrant/spdk_repo/spdk/../output ']' 00:24:52.450 21:47:12 -- common/autotest_common.sh@299 -- # NO_HUGE=() 00:24:52.450 21:47:12 -- common/autotest_common.sh@300 -- # TEST_MODE= 00:24:52.450 21:47:12 -- common/autotest_common.sh@319 -- # [[ -z 87609 ]] 00:24:52.450 21:47:12 -- common/autotest_common.sh@319 -- # kill -0 87609 00:24:52.450 21:47:12 -- common/autotest_common.sh@1675 -- # set_test_storage 2147483648 00:24:52.450 21:47:12 -- common/autotest_common.sh@329 -- # [[ -v testdir ]] 00:24:52.450 21:47:12 -- common/autotest_common.sh@331 -- # local requested_size=2147483648 00:24:52.450 21:47:12 -- common/autotest_common.sh@332 -- # local mount target_dir 00:24:52.450 21:47:12 -- common/autotest_common.sh@334 -- # local -A mounts fss sizes avails uses 00:24:52.450 21:47:12 -- common/autotest_common.sh@335 -- # local source fs size avail mount use 00:24:52.450 21:47:12 -- common/autotest_common.sh@337 -- # local storage_fallback storage_candidates 00:24:52.450 21:47:12 -- common/autotest_common.sh@339 -- # mktemp -udt spdk.XXXXXX 00:24:52.450 21:47:12 -- common/autotest_common.sh@339 -- # storage_fallback=/tmp/spdk.SWQb3K 00:24:52.450 21:47:12 -- common/autotest_common.sh@344 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:24:52.450 21:47:12 -- common/autotest_common.sh@346 -- # [[ -n '' ]] 00:24:52.450 21:47:12 -- common/autotest_common.sh@351 -- # [[ -n '' ]] 00:24:52.450 21:47:12 -- common/autotest_common.sh@356 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/interrupt /tmp/spdk.SWQb3K/tests/interrupt /tmp/spdk.SWQb3K 00:24:52.450 21:47:12 -- common/autotest_common.sh@359 -- # requested_size=2214592512 00:24:52.450 21:47:12 -- common/autotest_common.sh@361 -- # read -r source fs size use avail _ mount 00:24:52.450 21:47:12 -- common/autotest_common.sh@328 -- # df -T 00:24:52.450 21:47:12 -- common/autotest_common.sh@328 -- # grep -v Filesystem 00:24:52.450 21:47:12 -- common/autotest_common.sh@362 -- # mounts["$mount"]=tmpfs 00:24:52.450 21:47:12 -- common/autotest_common.sh@362 -- # fss["$mount"]=tmpfs 00:24:52.450 21:47:12 -- common/autotest_common.sh@363 -- # avails["$mount"]=1249312768 00:24:52.450 21:47:12 -- common/autotest_common.sh@363 -- # sizes["$mount"]=1254027264 00:24:52.450 21:47:12 -- common/autotest_common.sh@364 -- # uses["$mount"]=4714496 00:24:52.450 21:47:12 -- common/autotest_common.sh@361 -- # read -r source fs size use avail _ mount 00:24:52.450 21:47:12 -- common/autotest_common.sh@362 -- # mounts["$mount"]=/dev/vda1 00:24:52.450 21:47:12 -- common/autotest_common.sh@362 -- # fss["$mount"]=ext4 00:24:52.450 21:47:12 -- common/autotest_common.sh@363 -- # avails["$mount"]=10281848832 00:24:52.450 21:47:12 -- common/autotest_common.sh@363 -- # sizes["$mount"]=19681529856 00:24:52.450 21:47:12 -- common/autotest_common.sh@364 -- # uses["$mount"]=9382903808 00:24:52.450 21:47:12 -- common/autotest_common.sh@361 -- # read -r source fs size use avail _ mount 00:24:52.450 21:47:12 -- common/autotest_common.sh@362 -- # mounts["$mount"]=tmpfs 00:24:52.450 21:47:12 -- common/autotest_common.sh@362 -- # fss["$mount"]=tmpfs 00:24:52.450 21:47:12 -- common/autotest_common.sh@363 -- # avails["$mount"]=6267523072 00:24:52.450 21:47:12 -- common/autotest_common.sh@363 -- # sizes["$mount"]=6270115840 00:24:52.450 21:47:12 -- common/autotest_common.sh@364 -- # uses["$mount"]=2592768 00:24:52.450 21:47:12 -- 
common/autotest_common.sh@361 -- # read -r source fs size use avail _ mount 00:24:52.450 21:47:12 -- common/autotest_common.sh@362 -- # mounts["$mount"]=tmpfs 00:24:52.450 21:47:12 -- common/autotest_common.sh@362 -- # fss["$mount"]=tmpfs 00:24:52.450 21:47:12 -- common/autotest_common.sh@363 -- # avails["$mount"]=5242880 00:24:52.450 21:47:12 -- common/autotest_common.sh@363 -- # sizes["$mount"]=5242880 00:24:52.450 21:47:12 -- common/autotest_common.sh@364 -- # uses["$mount"]=0 00:24:52.450 21:47:12 -- common/autotest_common.sh@361 -- # read -r source fs size use avail _ mount 00:24:52.450 21:47:12 -- common/autotest_common.sh@362 -- # mounts["$mount"]=/dev/vda16 00:24:52.450 21:47:12 -- common/autotest_common.sh@362 -- # fss["$mount"]=ext4 00:24:52.450 21:47:12 -- common/autotest_common.sh@363 -- # avails["$mount"]=777306112 00:24:52.450 21:47:12 -- common/autotest_common.sh@363 -- # sizes["$mount"]=923156480 00:24:52.450 21:47:12 -- common/autotest_common.sh@364 -- # uses["$mount"]=81207296 00:24:52.450 21:47:12 -- common/autotest_common.sh@361 -- # read -r source fs size use avail _ mount 00:24:52.450 21:47:12 -- common/autotest_common.sh@362 -- # mounts["$mount"]=/dev/vda15 00:24:52.450 21:47:12 -- common/autotest_common.sh@362 -- # fss["$mount"]=vfat 00:24:52.450 21:47:12 -- common/autotest_common.sh@363 -- # avails["$mount"]=103000064 00:24:52.450 21:47:12 -- common/autotest_common.sh@363 -- # sizes["$mount"]=109395968 00:24:52.450 21:47:12 -- common/autotest_common.sh@364 -- # uses["$mount"]=6395904 00:24:52.450 21:47:12 -- common/autotest_common.sh@361 -- # read -r source fs size use avail _ mount 00:24:52.450 21:47:12 -- common/autotest_common.sh@362 -- # mounts["$mount"]=tmpfs 00:24:52.450 21:47:12 -- common/autotest_common.sh@362 -- # fss["$mount"]=tmpfs 00:24:52.450 21:47:12 -- common/autotest_common.sh@363 -- # avails["$mount"]=1254010880 00:24:52.450 21:47:12 -- common/autotest_common.sh@363 -- # sizes["$mount"]=1254023168 00:24:52.450 21:47:12 -- common/autotest_common.sh@364 -- # uses["$mount"]=12288 00:24:52.450 21:47:12 -- common/autotest_common.sh@361 -- # read -r source fs size use avail _ mount 00:24:52.450 21:47:12 -- common/autotest_common.sh@362 -- # mounts["$mount"]=:/mnt/jenkins_nvme/jenkins/workspace/ubuntu24-vg-autotest/ubuntu2404-libvirt/output 00:24:52.450 21:47:12 -- common/autotest_common.sh@362 -- # fss["$mount"]=fuse.sshfs 00:24:52.450 21:47:12 -- common/autotest_common.sh@363 -- # avails["$mount"]=98692431872 00:24:52.450 21:47:12 -- common/autotest_common.sh@363 -- # sizes["$mount"]=105088212992 00:24:52.450 21:47:12 -- common/autotest_common.sh@364 -- # uses["$mount"]=1010348032 00:24:52.450 21:47:12 -- common/autotest_common.sh@361 -- # read -r source fs size use avail _ mount 00:24:52.450 21:47:12 -- common/autotest_common.sh@367 -- # printf '* Looking for test storage...\n' 00:24:52.450 * Looking for test storage... 
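The block above is set_test_storage scanning `df -T`: each mount's filesystem type, size, and free space land in arrays keyed by mount point, and candidate directories are then checked against the requested size. A condensed sketch of that selection loop, assuming bash 4+ associative arrays and a df that reports 1K blocks (hence the *1024); the candidate directories here are hypothetical stand-ins for the real storage_candidates list:

#!/usr/bin/env bash
# Sketch of the storage scan traced above: parse df -T once, then accept
# the first candidate directory whose mount has enough free bytes.
requested_size=$((2 * 1024 * 1024 * 1024 + 64 * 1024 * 1024))  # 2214592512, as in the trace

declare -A fss avails
while read -r source fs size used avail _ mount; do
    fss["$mount"]=$fs
    avails["$mount"]=$((avail * 1024))   # df's 1K blocks -> bytes
done < <(df -T | grep -v Filesystem)

for target_dir in "$HOME/tests" /tmp; do  # hypothetical candidates
    # Same awk as the trace: print the mount point holding this directory.
    mount=$(df "$target_dir" 2>/dev/null | awk '$1 !~ /Filesystem/{print $6}')
    [[ -n $mount ]] || continue
    if (( ${avails[$mount]:-0} >= requested_size )); then
        printf '* Found test storage at %s (%s, %s bytes free)\n' \
            "$target_dir" "${fss[$mount]}" "${avails[$mount]}"
        break
    fi
done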
00:24:52.450 21:47:12 -- common/autotest_common.sh@369 -- # local target_space new_size 00:24:52.450 21:47:12 -- common/autotest_common.sh@370 -- # for target_dir in "${storage_candidates[@]}" 00:24:52.450 21:47:12 -- common/autotest_common.sh@373 -- # df /home/vagrant/spdk_repo/spdk/test/interrupt 00:24:52.450 21:47:12 -- common/autotest_common.sh@373 -- # awk '$1 !~ /Filesystem/{print $6}' 00:24:52.450 21:47:12 -- common/autotest_common.sh@373 -- # mount=/ 00:24:52.450 21:47:12 -- common/autotest_common.sh@375 -- # target_space=10281848832 00:24:52.450 21:47:12 -- common/autotest_common.sh@376 -- # (( target_space == 0 || target_space < requested_size )) 00:24:52.450 21:47:12 -- common/autotest_common.sh@379 -- # (( target_space >= requested_size )) 00:24:52.450 21:47:12 -- common/autotest_common.sh@381 -- # [[ ext4 == tmpfs ]] 00:24:52.450 21:47:12 -- common/autotest_common.sh@381 -- # [[ ext4 == ramfs ]] 00:24:52.450 21:47:12 -- common/autotest_common.sh@381 -- # [[ / == / ]] 00:24:52.450 21:47:12 -- common/autotest_common.sh@382 -- # new_size=11597496320 00:24:52.450 21:47:12 -- common/autotest_common.sh@383 -- # (( new_size * 100 / sizes[/] > 95 )) 00:24:52.450 21:47:12 -- common/autotest_common.sh@388 -- # export SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/interrupt 00:24:52.450 21:47:12 -- common/autotest_common.sh@388 -- # SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/interrupt 00:24:52.450 21:47:12 -- common/autotest_common.sh@389 -- # printf '* Found test storage at %s\n' /home/vagrant/spdk_repo/spdk/test/interrupt 00:24:52.450 * Found test storage at /home/vagrant/spdk_repo/spdk/test/interrupt 00:24:52.450 21:47:12 -- common/autotest_common.sh@390 -- # return 0 00:24:52.450 21:47:12 -- common/autotest_common.sh@1677 -- # set -o errtrace 00:24:52.450 21:47:12 -- common/autotest_common.sh@1678 -- # shopt -s extdebug 00:24:52.450 21:47:12 -- common/autotest_common.sh@1679 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:24:52.450 21:47:12 -- common/autotest_common.sh@1681 -- # PS4=' \t -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:24:52.450 21:47:12 -- common/autotest_common.sh@1682 -- # true 00:24:52.450 21:47:12 -- common/autotest_common.sh@1684 -- # xtrace_fd 00:24:52.450 21:47:12 -- common/autotest_common.sh@25 -- # [[ -n 13 ]] 00:24:52.450 21:47:12 -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/13 ]] 00:24:52.450 21:47:12 -- common/autotest_common.sh@27 -- # exec 00:24:52.450 21:47:12 -- common/autotest_common.sh@29 -- # exec 00:24:52.450 21:47:12 -- common/autotest_common.sh@31 -- # xtrace_restore 00:24:52.450 21:47:12 -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 
0 : 0 - 1]' 00:24:52.450 21:47:12 -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:24:52.450 21:47:12 -- common/autotest_common.sh@18 -- # set -x 00:24:52.450 21:47:12 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:24:52.450 21:47:12 -- common/autotest_common.sh@1690 -- # lcov --version 00:24:52.450 21:47:12 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:24:52.710 21:47:12 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:24:52.710 21:47:12 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:24:52.710 21:47:12 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:24:52.710 21:47:13 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:24:52.710 21:47:13 -- scripts/common.sh@335 -- # IFS=.-: 00:24:52.710 21:47:13 -- scripts/common.sh@335 -- # read -ra ver1 00:24:52.710 21:47:13 -- scripts/common.sh@336 -- # IFS=.-: 00:24:52.710 21:47:13 -- scripts/common.sh@336 -- # read -ra ver2 00:24:52.710 21:47:13 -- scripts/common.sh@337 -- # local 'op=<' 00:24:52.710 21:47:13 -- scripts/common.sh@339 -- # ver1_l=2 00:24:52.710 21:47:13 -- scripts/common.sh@340 -- # ver2_l=1 00:24:52.710 21:47:13 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:24:52.710 21:47:13 -- scripts/common.sh@343 -- # case "$op" in 00:24:52.710 21:47:13 -- scripts/common.sh@344 -- # : 1 00:24:52.710 21:47:13 -- scripts/common.sh@363 -- # (( v = 0 )) 00:24:52.710 21:47:13 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:24:52.710 21:47:13 -- scripts/common.sh@364 -- # decimal 1 00:24:52.710 21:47:13 -- scripts/common.sh@352 -- # local d=1 00:24:52.710 21:47:13 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:52.710 21:47:13 -- scripts/common.sh@354 -- # echo 1 00:24:52.710 21:47:13 -- scripts/common.sh@364 -- # ver1[v]=1 00:24:52.710 21:47:13 -- scripts/common.sh@365 -- # decimal 2 00:24:52.710 21:47:13 -- scripts/common.sh@352 -- # local d=2 00:24:52.710 21:47:13 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:52.710 21:47:13 -- scripts/common.sh@354 -- # echo 2 00:24:52.710 21:47:13 -- scripts/common.sh@365 -- # ver2[v]=2 00:24:52.710 21:47:13 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:24:52.710 21:47:13 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:24:52.710 21:47:13 -- scripts/common.sh@367 -- # return 0 00:24:52.710 21:47:13 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:52.710 21:47:13 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:24:52.710 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:52.710 --rc genhtml_branch_coverage=1 00:24:52.710 --rc genhtml_function_coverage=1 00:24:52.710 --rc genhtml_legend=1 00:24:52.710 --rc geninfo_all_blocks=1 00:24:52.710 --rc geninfo_unexecuted_blocks=1 00:24:52.710 00:24:52.710 ' 00:24:52.710 21:47:13 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:24:52.710 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:52.710 --rc genhtml_branch_coverage=1 00:24:52.710 --rc genhtml_function_coverage=1 00:24:52.710 --rc genhtml_legend=1 00:24:52.710 --rc geninfo_all_blocks=1 00:24:52.710 --rc geninfo_unexecuted_blocks=1 00:24:52.710 00:24:52.710 ' 00:24:52.710 21:47:13 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:24:52.710 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:52.710 --rc genhtml_branch_coverage=1 00:24:52.710 --rc genhtml_function_coverage=1 00:24:52.710 --rc genhtml_legend=1 00:24:52.710 --rc geninfo_all_blocks=1 00:24:52.710 --rc 
geninfo_unexecuted_blocks=1 00:24:52.710 00:24:52.710 ' 00:24:52.710 21:47:13 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:24:52.710 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:52.710 --rc genhtml_branch_coverage=1 00:24:52.710 --rc genhtml_function_coverage=1 00:24:52.710 --rc genhtml_legend=1 00:24:52.710 --rc geninfo_all_blocks=1 00:24:52.710 --rc geninfo_unexecuted_blocks=1 00:24:52.710 00:24:52.710 ' 00:24:52.710 21:47:13 -- interrupt/interrupt_common.sh@9 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:24:52.710 21:47:13 -- interrupt/interrupt_common.sh@11 -- # r0_mask=0x1 00:24:52.710 21:47:13 -- interrupt/interrupt_common.sh@12 -- # r1_mask=0x2 00:24:52.710 21:47:13 -- interrupt/interrupt_common.sh@13 -- # r2_mask=0x4 00:24:52.710 21:47:13 -- interrupt/interrupt_common.sh@15 -- # cpu_server_mask=0x07 00:24:52.710 21:47:13 -- interrupt/interrupt_common.sh@16 -- # rpc_server_addr=/var/tmp/spdk.sock 00:24:52.710 21:47:13 -- interrupt/reap_unregistered_poller.sh@14 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/examples/interrupt_tgt 00:24:52.710 21:47:13 -- interrupt/reap_unregistered_poller.sh@14 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/examples/interrupt_tgt 00:24:52.710 21:47:13 -- interrupt/reap_unregistered_poller.sh@17 -- # start_intr_tgt 00:24:52.710 21:47:13 -- interrupt/interrupt_common.sh@23 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:52.710 21:47:13 -- interrupt/interrupt_common.sh@24 -- # local cpu_mask=0x07 00:24:52.710 21:47:13 -- interrupt/interrupt_common.sh@27 -- # intr_tgt_pid=87664 00:24:52.710 21:47:13 -- interrupt/interrupt_common.sh@26 -- # /home/vagrant/spdk_repo/spdk/build/examples/interrupt_tgt -m 0x07 -r /var/tmp/spdk.sock -E -g 00:24:52.710 21:47:13 -- interrupt/interrupt_common.sh@28 -- # trap 'killprocess "$intr_tgt_pid"; cleanup; exit 1' SIGINT SIGTERM EXIT 00:24:52.710 21:47:13 -- interrupt/interrupt_common.sh@29 -- # waitforlisten 87664 /var/tmp/spdk.sock 00:24:52.710 21:47:13 -- common/autotest_common.sh@829 -- # '[' -z 87664 ']' 00:24:52.710 21:47:13 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:52.710 21:47:13 -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:52.710 21:47:13 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:52.710 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:52.710 21:47:13 -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:52.710 21:47:13 -- common/autotest_common.sh@10 -- # set +x 00:24:52.710 [2024-12-06 21:47:13.058583] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
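start_intr_tgt, traced above, launches the interrupt_tgt example on cores 0-2 and then blocks in waitforlisten until the RPC socket comes up. A minimal sketch of that launch-and-poll pattern, reusing the binary path and flags from this log; the retry loop is an assumption modeled on waitforlisten, not a copy of it:

#!/usr/bin/env bash
# Sketch of the start-and-wait pattern traced above. The loop polls until
# the target's UNIX-domain RPC socket exists, bailing out if the process
# dies first (kill -0 only tests for existence; it sends no signal).
rpc_addr=/var/tmp/spdk.sock

/home/vagrant/spdk_repo/spdk/build/examples/interrupt_tgt \
    -m 0x07 -r "$rpc_addr" -E -g &
intr_tgt_pid=$!
trap 'kill "$intr_tgt_pid" 2>/dev/null; exit 1' SIGINT SIGTERM

echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
for ((i = 100; i > 0; i--)); do
    kill -0 "$intr_tgt_pid" 2>/dev/null || { echo "target exited early" >&2; exit 1; }
    [[ -S $rpc_addr ]] && break
    sleep 0.1
done
(( i > 0 )) || { echo "timed out waiting for $rpc_addr" >&2; exit 1; }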
00:24:52.710 [2024-12-06 21:47:13.058725] [ DPDK EAL parameters: interrupt_tgt --no-shconf -c 0x07 --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87664 ] 00:24:52.969 [2024-12-06 21:47:13.211571] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:24:52.969 [2024-12-06 21:47:13.378903] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:24:52.969 [2024-12-06 21:47:13.379009] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:52.969 [2024-12-06 21:47:13.379029] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:24:53.246 [2024-12-06 21:47:13.593210] thread.c:2087:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:24:53.812 21:47:14 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:53.812 21:47:14 -- common/autotest_common.sh@862 -- # return 0 00:24:53.812 21:47:14 -- interrupt/reap_unregistered_poller.sh@20 -- # rpc_cmd thread_get_pollers 00:24:53.812 21:47:14 -- interrupt/reap_unregistered_poller.sh@20 -- # jq -r '.threads[0]' 00:24:53.813 21:47:14 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:53.813 21:47:14 -- common/autotest_common.sh@10 -- # set +x 00:24:53.813 21:47:14 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:53.813 21:47:14 -- interrupt/reap_unregistered_poller.sh@20 -- # app_thread='{ 00:24:53.813 "name": "app_thread", 00:24:53.813 "id": 1, 00:24:53.813 "active_pollers": [], 00:24:53.813 "timed_pollers": [ 00:24:53.813 { 00:24:53.813 "name": "rpc_subsystem_poll", 00:24:53.813 "id": 1, 00:24:53.813 "state": "waiting", 00:24:53.813 "run_count": 0, 00:24:53.813 "busy_count": 0, 00:24:53.813 "period_ticks": 8800000 00:24:53.813 } 00:24:53.813 ], 00:24:53.813 "paused_pollers": [] 00:24:53.813 }' 00:24:53.813 21:47:14 -- interrupt/reap_unregistered_poller.sh@21 -- # jq -r '.active_pollers[].name' 00:24:53.813 21:47:14 -- interrupt/reap_unregistered_poller.sh@21 -- # native_pollers= 00:24:53.813 21:47:14 -- interrupt/reap_unregistered_poller.sh@22 -- # native_pollers+=' ' 00:24:53.813 21:47:14 -- interrupt/reap_unregistered_poller.sh@23 -- # jq -r '.timed_pollers[].name' 00:24:53.813 21:47:14 -- interrupt/reap_unregistered_poller.sh@23 -- # native_pollers+=rpc_subsystem_poll 00:24:53.813 21:47:14 -- interrupt/reap_unregistered_poller.sh@28 -- # setup_bdev_aio 00:24:53.813 21:47:14 -- interrupt/interrupt_common.sh@98 -- # uname -s 00:24:53.813 21:47:14 -- interrupt/interrupt_common.sh@98 -- # [[ Linux != \F\r\e\e\B\S\D ]] 00:24:53.813 21:47:14 -- interrupt/interrupt_common.sh@99 -- # dd if=/dev/zero of=/home/vagrant/spdk_repo/spdk/test/interrupt/aiofile bs=2048 count=5000 00:24:53.813 5000+0 records in 00:24:53.813 5000+0 records out 00:24:53.813 10240000 bytes (10 MB, 9.8 MiB) copied, 0.0200901 s, 510 MB/s 00:24:53.813 21:47:14 -- interrupt/interrupt_common.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/interrupt/aiofile AIO0 2048 00:24:54.071 AIO0 00:24:54.071 21:47:14 -- interrupt/reap_unregistered_poller.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:24:54.330 21:47:14 -- interrupt/reap_unregistered_poller.sh@34 -- # sleep 0.1 00:24:54.330 21:47:14 -- interrupt/reap_unregistered_poller.sh@37 -- # rpc_cmd thread_get_pollers 00:24:54.330 21:47:14 -- common/autotest_common.sh@561 -- # xtrace_disable 
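The app_thread JSON above is what the test interrogates: thread_get_pollers is issued over the socket, jq peels off `.threads[0]`, and the poller names are collected from the active and timed lists (here only rpc_subsystem_poll, before the AIO bdev is attached). A sketch of that query, assuming rpc.py's `-s` flag selects the socket where the trace's rpc_cmd wrapper does the same implicitly:

#!/usr/bin/env bash
# Sketch of the poller inspection traced above: one RPC call, then the
# same jq filters as the log pull the first thread object apart.
rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
rpc_addr=/var/tmp/spdk.sock

app_thread=$("$rpc_py" -s "$rpc_addr" thread_get_pollers | jq -r '.threads[0]')

native_pollers=$(jq -r '.active_pollers[].name' <<< "$app_thread")
native_pollers+=' '
native_pollers+=$(jq -r '.timed_pollers[].name' <<< "$app_thread")

echo "pollers: $native_pollers"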
00:24:54.330 21:47:14 -- common/autotest_common.sh@10 -- # set +x 00:24:54.330 21:47:14 -- interrupt/reap_unregistered_poller.sh@37 -- # jq -r '.threads[0]' 00:24:54.330 21:47:14 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:54.330 21:47:14 -- interrupt/reap_unregistered_poller.sh@37 -- # app_thread='{ 00:24:54.330 "name": "app_thread", 00:24:54.330 "id": 1, 00:24:54.330 "active_pollers": [], 00:24:54.330 "timed_pollers": [ 00:24:54.330 { 00:24:54.330 "name": "rpc_subsystem_poll", 00:24:54.330 "id": 1, 00:24:54.330 "state": "waiting", 00:24:54.330 "run_count": 0, 00:24:54.330 "busy_count": 0, 00:24:54.330 "period_ticks": 8800000 00:24:54.330 } 00:24:54.330 ], 00:24:54.330 "paused_pollers": [] 00:24:54.330 }' 00:24:54.330 21:47:14 -- interrupt/reap_unregistered_poller.sh@38 -- # jq -r '.active_pollers[].name' 00:24:54.330 21:47:14 -- interrupt/reap_unregistered_poller.sh@38 -- # remaining_pollers= 00:24:54.330 21:47:14 -- interrupt/reap_unregistered_poller.sh@39 -- # remaining_pollers+=' ' 00:24:54.330 21:47:14 -- interrupt/reap_unregistered_poller.sh@40 -- # jq -r '.timed_pollers[].name' 00:24:54.330 21:47:14 -- interrupt/reap_unregistered_poller.sh@40 -- # remaining_pollers+=rpc_subsystem_poll 00:24:54.330 21:47:14 -- interrupt/reap_unregistered_poller.sh@44 -- # [[ rpc_subsystem_poll == \ \r\p\c\_\s\u\b\s\y\s\t\e\m\_\p\o\l\l ]] 00:24:54.330 21:47:14 -- interrupt/reap_unregistered_poller.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:24:54.330 21:47:14 -- interrupt/reap_unregistered_poller.sh@47 -- # killprocess 87664 00:24:54.330 21:47:14 -- common/autotest_common.sh@936 -- # '[' -z 87664 ']' 00:24:54.330 21:47:14 -- common/autotest_common.sh@940 -- # kill -0 87664 00:24:54.330 21:47:14 -- common/autotest_common.sh@941 -- # uname 00:24:54.330 21:47:14 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:24:54.330 21:47:14 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 87664 00:24:54.330 21:47:14 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:24:54.330 21:47:14 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:24:54.330 killing process with pid 87664 00:24:54.330 21:47:14 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 87664' 00:24:54.330 21:47:14 -- common/autotest_common.sh@955 -- # kill 87664 00:24:54.330 21:47:14 -- common/autotest_common.sh@960 -- # wait 87664 00:24:55.746 21:47:15 -- interrupt/reap_unregistered_poller.sh@48 -- # cleanup 00:24:55.746 21:47:15 -- interrupt/interrupt_common.sh@19 -- # rm -f /home/vagrant/spdk_repo/spdk/test/interrupt/aiofile 00:24:55.746 ************************************ 00:24:55.746 END TEST reap_unregistered_poller 00:24:55.746 ************************************ 00:24:55.746 00:24:55.746 real 0m3.224s 00:24:55.746 user 0m2.593s 00:24:55.746 sys 0m0.516s 00:24:55.746 21:47:15 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:24:55.746 21:47:15 -- common/autotest_common.sh@10 -- # set +x 00:24:55.746 21:47:15 -- spdk/autotest.sh@191 -- # uname -s 00:24:55.746 21:47:15 -- spdk/autotest.sh@191 -- # [[ Linux == Linux ]] 00:24:55.746 21:47:15 -- spdk/autotest.sh@192 -- # [[ 1 -eq 1 ]] 00:24:55.746 21:47:15 -- spdk/autotest.sh@198 -- # [[ 0 -eq 0 ]] 00:24:55.746 21:47:15 -- spdk/autotest.sh@199 -- # run_test spdk_dd /home/vagrant/spdk_repo/spdk/test/dd/dd.sh 00:24:55.746 21:47:15 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:24:55.746 21:47:15 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:24:55.746 21:47:15 -- common/autotest_common.sh@10 
-- # set +x 00:24:55.746 ************************************ 00:24:55.746 START TEST spdk_dd 00:24:55.746 ************************************ 00:24:55.746 21:47:15 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/dd/dd.sh 00:24:55.746 * Looking for test storage... 00:24:55.746 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:24:55.746 21:47:15 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:24:55.746 21:47:15 -- common/autotest_common.sh@1690 -- # lcov --version 00:24:55.746 21:47:15 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:24:55.746 21:47:16 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:24:55.746 21:47:16 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:24:55.746 21:47:16 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:24:55.746 21:47:16 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:24:55.746 21:47:16 -- scripts/common.sh@335 -- # IFS=.-: 00:24:55.746 21:47:16 -- scripts/common.sh@335 -- # read -ra ver1 00:24:55.746 21:47:16 -- scripts/common.sh@336 -- # IFS=.-: 00:24:55.746 21:47:16 -- scripts/common.sh@336 -- # read -ra ver2 00:24:55.746 21:47:16 -- scripts/common.sh@337 -- # local 'op=<' 00:24:55.746 21:47:16 -- scripts/common.sh@339 -- # ver1_l=2 00:24:55.746 21:47:16 -- scripts/common.sh@340 -- # ver2_l=1 00:24:55.746 21:47:16 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:24:55.746 21:47:16 -- scripts/common.sh@343 -- # case "$op" in 00:24:55.746 21:47:16 -- scripts/common.sh@344 -- # : 1 00:24:55.746 21:47:16 -- scripts/common.sh@363 -- # (( v = 0 )) 00:24:55.746 21:47:16 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:24:55.746 21:47:16 -- scripts/common.sh@364 -- # decimal 1 00:24:55.746 21:47:16 -- scripts/common.sh@352 -- # local d=1 00:24:55.746 21:47:16 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:55.746 21:47:16 -- scripts/common.sh@354 -- # echo 1 00:24:55.746 21:47:16 -- scripts/common.sh@364 -- # ver1[v]=1 00:24:55.746 21:47:16 -- scripts/common.sh@365 -- # decimal 2 00:24:55.746 21:47:16 -- scripts/common.sh@352 -- # local d=2 00:24:55.746 21:47:16 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:55.746 21:47:16 -- scripts/common.sh@354 -- # echo 2 00:24:55.746 21:47:16 -- scripts/common.sh@365 -- # ver2[v]=2 00:24:55.746 21:47:16 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:24:55.746 21:47:16 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:24:55.746 21:47:16 -- scripts/common.sh@367 -- # return 0 00:24:55.746 21:47:16 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:55.746 21:47:16 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:24:55.746 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:55.746 --rc genhtml_branch_coverage=1 00:24:55.746 --rc genhtml_function_coverage=1 00:24:55.746 --rc genhtml_legend=1 00:24:55.746 --rc geninfo_all_blocks=1 00:24:55.746 --rc geninfo_unexecuted_blocks=1 00:24:55.746 00:24:55.746 ' 00:24:55.746 21:47:16 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:24:55.746 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:55.746 --rc genhtml_branch_coverage=1 00:24:55.746 --rc genhtml_function_coverage=1 00:24:55.746 --rc genhtml_legend=1 00:24:55.746 --rc geninfo_all_blocks=1 00:24:55.746 --rc geninfo_unexecuted_blocks=1 00:24:55.746 00:24:55.746 ' 00:24:55.746 21:47:16 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:24:55.746 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:24:55.746 --rc genhtml_branch_coverage=1 00:24:55.746 --rc genhtml_function_coverage=1 00:24:55.746 --rc genhtml_legend=1 00:24:55.746 --rc geninfo_all_blocks=1 00:24:55.746 --rc geninfo_unexecuted_blocks=1 00:24:55.746 00:24:55.746 ' 00:24:55.746 21:47:16 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:24:55.746 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:55.746 --rc genhtml_branch_coverage=1 00:24:55.746 --rc genhtml_function_coverage=1 00:24:55.746 --rc genhtml_legend=1 00:24:55.746 --rc geninfo_all_blocks=1 00:24:55.746 --rc geninfo_unexecuted_blocks=1 00:24:55.746 00:24:55.746 ' 00:24:55.746 21:47:16 -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:24:55.746 21:47:16 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:55.747 21:47:16 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:55.747 21:47:16 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:55.747 21:47:16 -- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:24:55.747 21:47:16 -- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:24:55.747 21:47:16 -- paths/export.sh@4 -- # PATH=/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:24:55.747 21:47:16 -- paths/export.sh@5 -- # PATH=/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:24:55.747 21:47:16 -- paths/export.sh@6 -- # export PATH 00:24:55.747 21:47:16 -- paths/export.sh@7 -- # echo 
/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:24:55.747 21:47:16 -- dd/dd.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:24:56.005 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15,mount@vda:vda16, so not binding PCI dev 00:24:56.005 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:24:56.572 21:47:16 -- dd/dd.sh@11 -- # nvmes=($(nvme_in_userspace)) 00:24:56.572 21:47:16 -- dd/dd.sh@11 -- # nvme_in_userspace 00:24:56.572 21:47:16 -- scripts/common.sh@311 -- # local bdf bdfs 00:24:56.572 21:47:16 -- scripts/common.sh@312 -- # local nvmes 00:24:56.572 21:47:16 -- scripts/common.sh@314 -- # [[ -n '' ]] 00:24:56.572 21:47:16 -- scripts/common.sh@317 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:24:56.572 21:47:16 -- scripts/common.sh@317 -- # iter_pci_class_code 01 08 02 00:24:56.572 21:47:16 -- scripts/common.sh@297 -- # local bdf= 00:24:56.572 21:47:16 -- scripts/common.sh@299 -- # iter_all_pci_class_code 01 08 02 00:24:56.572 21:47:16 -- scripts/common.sh@232 -- # local class 00:24:56.572 21:47:16 -- scripts/common.sh@233 -- # local subclass 00:24:56.572 21:47:16 -- scripts/common.sh@234 -- # local progif 00:24:56.572 21:47:16 -- scripts/common.sh@235 -- # printf %02x 1 00:24:56.572 21:47:16 -- scripts/common.sh@235 -- # class=01 00:24:56.572 21:47:16 -- scripts/common.sh@236 -- # printf %02x 8 00:24:56.572 21:47:16 -- scripts/common.sh@236 -- # subclass=08 00:24:56.572 21:47:16 -- scripts/common.sh@237 -- # printf %02x 2 00:24:56.572 21:47:16 -- scripts/common.sh@237 -- # progif=02 00:24:56.572 21:47:16 -- scripts/common.sh@239 -- # hash lspci 00:24:56.572 21:47:16 -- scripts/common.sh@240 -- # '[' 02 '!=' 00 ']' 00:24:56.572 21:47:16 -- scripts/common.sh@241 -- # lspci -mm -n -D 00:24:56.572 21:47:16 -- scripts/common.sh@242 -- # grep -i -- -p02 00:24:56.572 21:47:16 -- scripts/common.sh@243 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:24:56.572 21:47:16 -- scripts/common.sh@244 -- # tr -d '"' 00:24:56.572 21:47:16 -- scripts/common.sh@299 -- # for bdf in $(iter_all_pci_class_code "$@") 00:24:56.572 21:47:16 -- scripts/common.sh@300 -- # pci_can_use 0000:00:06.0 00:24:56.572 21:47:16 -- scripts/common.sh@15 -- # local i 00:24:56.572 21:47:16 -- scripts/common.sh@18 -- # [[ =~ 0000:00:06.0 ]] 00:24:56.572 21:47:16 -- scripts/common.sh@22 -- # [[ -z '' ]] 00:24:56.572 21:47:16 -- scripts/common.sh@24 -- # return 0 00:24:56.572 21:47:16 -- scripts/common.sh@301 -- # echo 0000:00:06.0 00:24:56.572 21:47:16 -- scripts/common.sh@320 -- # for bdf in "${nvmes[@]}" 00:24:56.572 21:47:16 -- scripts/common.sh@321 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:06.0 ]] 00:24:56.572 21:47:16 -- scripts/common.sh@322 -- # uname -s 00:24:56.572 21:47:16 -- scripts/common.sh@322 -- # [[ Linux == FreeBSD ]] 00:24:56.573 21:47:16 -- scripts/common.sh@325 -- # bdfs+=("$bdf") 00:24:56.573 21:47:16 -- scripts/common.sh@327 -- # (( 1 )) 00:24:56.573 21:47:16 -- scripts/common.sh@328 -- # printf '%s\n' 0000:00:06.0 00:24:56.573 21:47:16 -- dd/dd.sh@13 -- # check_liburing 
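nvme_in_userspace, traced just above, finds NVMe controllers by PCI class code: class 01 (mass storage), subclass 08 (non-volatile memory), prog-if 02 (NVM Express), which yields the 0000:00:06.0 it prints. A condensed sketch of that filter, reusing the exact lspci pipeline from the trace:

#!/usr/bin/env bash
# Sketch of the PCI scan traced above: lspci -mm gives machine-readable
# quoted fields, -n numeric class codes, -D full domain:bus:dev.func BDFs.
class=$(printf %02x 1)     # 01: mass storage controller
subclass=$(printf %02x 8)  # 08: non-volatile memory controller
progif=$(printf %02x 2)    # 02: NVM Express programming interface

lspci -mm -n -D \
    | grep -i -- "-p${progif}" \
    | awk -v cc="\"${class}${subclass}\"" -F ' ' '{if (cc ~ $2) print $1}' \
    | tr -d '"'

grep's `--` keeps the -p02 pattern from being parsed as an option, and the trailing tr mirrors the trace, dropping any quoting lspci applies to the printed field.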
00:24:56.573 21:47:16 -- dd/common.sh@139 -- # local lib so 00:24:56.573 21:47:16 -- dd/common.sh@140 -- # local -g liburing_in_use=0 00:24:56.573 21:47:16 -- dd/common.sh@142 -- # read -r lib _ so _ 00:24:56.573 21:47:16 -- dd/common.sh@137 -- # LD_TRACE_LOADED_OBJECTS=1 00:24:56.573 21:47:16 -- dd/common.sh@137 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:24:56.573 21:47:16 -- dd/common.sh@143 -- # [[ linux-vdso.so.1 == liburing.so.* ]] 00:24:56.573 21:47:16 -- dd/common.sh@142 -- # read -r lib _ so _ 00:24:56.573 21:47:16 -- dd/common.sh@143 -- # [[ libasan.so.8 == liburing.so.* ]] 00:24:56.573 21:47:16 -- dd/common.sh@142 -- # read -r lib _ so _ 00:24:56.573 21:47:16 -- dd/common.sh@143 -- # [[ libnuma.so.1 == liburing.so.* ]] 00:24:56.573 21:47:16 -- dd/common.sh@142 -- # read -r lib _ so _ 00:24:56.573 21:47:16 -- dd/common.sh@143 -- # [[ libibverbs.so.1 == liburing.so.* ]] 00:24:56.573 21:47:16 -- dd/common.sh@142 -- # read -r lib _ so _ 00:24:56.573 21:47:16 -- dd/common.sh@143 -- # [[ librdmacm.so.1 == liburing.so.* ]] 00:24:56.573 21:47:16 -- dd/common.sh@142 -- # read -r lib _ so _ 00:24:56.573 21:47:16 -- dd/common.sh@143 -- # [[ liburing.so.2 == liburing.so.* ]] 00:24:56.573 21:47:16 -- dd/common.sh@144 -- # printf '* spdk_dd linked to liburing\n' 00:24:56.573 * spdk_dd linked to liburing 00:24:56.573 21:47:16 -- dd/common.sh@146 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]] 00:24:56.573 21:47:16 -- dd/common.sh@147 -- # source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh 00:24:56.573 21:47:16 -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:24:56.573 21:47:16 -- common/build_config.sh@2 -- # CONFIG_ASAN=y 00:24:56.573 21:47:16 -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:24:56.573 21:47:16 -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:24:56.573 21:47:16 -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:24:56.573 21:47:16 -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:24:56.573 21:47:16 -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:24:56.573 21:47:16 -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:24:56.573 21:47:16 -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:24:56.573 21:47:16 -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:24:56.573 21:47:16 -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:24:56.573 21:47:16 -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:24:56.573 21:47:16 -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:24:56.573 21:47:16 -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:24:56.573 21:47:16 -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:24:56.573 21:47:16 -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:24:56.573 21:47:16 -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:24:56.573 21:47:16 -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:24:56.573 21:47:16 -- common/build_config.sh@19 -- # CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:24:56.573 21:47:16 -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:24:56.573 21:47:16 -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:24:56.573 21:47:16 -- common/build_config.sh@22 -- # CONFIG_CET=n 00:24:56.573 21:47:16 -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:24:56.573 21:47:16 -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:24:56.573 21:47:16 -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:24:56.573 21:47:16 -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 
00:24:56.573 21:47:16 -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:24:56.573 21:47:16 -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:24:56.573 21:47:16 -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:24:56.573 21:47:16 -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:24:56.573 21:47:16 -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:24:56.573 21:47:16 -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:24:56.573 21:47:16 -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:24:56.573 21:47:16 -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:24:56.573 21:47:16 -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:24:56.573 21:47:16 -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build 00:24:56.573 21:47:16 -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:24:56.573 21:47:16 -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:24:56.573 21:47:16 -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:24:56.573 21:47:16 -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:24:56.573 21:47:16 -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR= 00:24:56.573 21:47:16 -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:24:56.573 21:47:16 -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=y 00:24:56.573 21:47:16 -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:24:56.573 21:47:16 -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:24:56.573 21:47:16 -- common/build_config.sh@46 -- # CONFIG_COVERAGE=y 00:24:56.573 21:47:16 -- common/build_config.sh@47 -- # CONFIG_RDMA=y 00:24:56.573 21:47:16 -- common/build_config.sh@48 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:24:56.573 21:47:16 -- common/build_config.sh@49 -- # CONFIG_URING_PATH= 00:24:56.573 21:47:16 -- common/build_config.sh@50 -- # CONFIG_XNVME=n 00:24:56.573 21:47:16 -- common/build_config.sh@51 -- # CONFIG_VFIO_USER=n 00:24:56.573 21:47:16 -- common/build_config.sh@52 -- # CONFIG_ARCH=native 00:24:56.573 21:47:16 -- common/build_config.sh@53 -- # CONFIG_URING_ZNS=n 00:24:56.573 21:47:16 -- common/build_config.sh@54 -- # CONFIG_WERROR=y 00:24:56.573 21:47:16 -- common/build_config.sh@55 -- # CONFIG_HAVE_LIBBSD=n 00:24:56.573 21:47:16 -- common/build_config.sh@56 -- # CONFIG_UBSAN=y 00:24:56.573 21:47:16 -- common/build_config.sh@57 -- # CONFIG_IPSEC_MB_DIR= 00:24:56.573 21:47:16 -- common/build_config.sh@58 -- # CONFIG_GOLANG=n 00:24:56.573 21:47:16 -- common/build_config.sh@59 -- # CONFIG_ISAL=y 00:24:56.573 21:47:16 -- common/build_config.sh@60 -- # CONFIG_IDXD_KERNEL=y 00:24:56.573 21:47:16 -- common/build_config.sh@61 -- # CONFIG_DPDK_LIB_DIR= 00:24:56.573 21:47:16 -- common/build_config.sh@62 -- # CONFIG_RDMA_PROV=verbs 00:24:56.573 21:47:16 -- common/build_config.sh@63 -- # CONFIG_APPS=y 00:24:56.573 21:47:16 -- common/build_config.sh@64 -- # CONFIG_SHARED=n 00:24:56.573 21:47:16 -- common/build_config.sh@65 -- # CONFIG_FC_PATH= 00:24:56.573 21:47:16 -- common/build_config.sh@66 -- # CONFIG_DPDK_PKG_CONFIG=n 00:24:56.573 21:47:16 -- common/build_config.sh@67 -- # CONFIG_FC=n 00:24:56.573 21:47:16 -- common/build_config.sh@68 -- # CONFIG_AVAHI=n 00:24:56.573 21:47:16 -- common/build_config.sh@69 -- # CONFIG_FIO_PLUGIN=y 00:24:56.573 21:47:16 -- common/build_config.sh@70 -- # CONFIG_RAID5F=y 00:24:56.573 21:47:16 -- common/build_config.sh@71 -- # CONFIG_EXAMPLES=y 00:24:56.573 21:47:16 -- common/build_config.sh@72 -- # CONFIG_TESTS=y 00:24:56.573 21:47:16 -- common/build_config.sh@73 -- # CONFIG_CRYPTO_MLX5=n 00:24:56.573 21:47:16 -- 
common/build_config.sh@74 -- # CONFIG_MAX_LCORES= 00:24:56.573 21:47:16 -- common/build_config.sh@75 -- # CONFIG_IPSEC_MB=n 00:24:56.573 21:47:16 -- common/build_config.sh@76 -- # CONFIG_DEBUG=y 00:24:56.573 21:47:16 -- common/build_config.sh@77 -- # CONFIG_DPDK_COMPRESSDEV=n 00:24:56.573 21:47:16 -- common/build_config.sh@78 -- # CONFIG_CROSS_PREFIX= 00:24:56.573 21:47:16 -- common/build_config.sh@79 -- # CONFIG_URING=n 00:24:56.573 21:47:16 -- dd/common.sh@149 -- # [[ n != y ]] 00:24:56.573 21:47:16 -- dd/common.sh@150 -- # printf '* spdk_dd built with liburing, but no liburing support requested?\n' 00:24:56.573 * spdk_dd built with liburing, but no liburing support requested? 00:24:56.573 21:47:16 -- dd/common.sh@152 -- # [[ ! -e /usr/lib64/liburing.so.2 ]] 00:24:56.573 21:47:16 -- dd/common.sh@156 -- # export liburing_in_use=1 00:24:56.573 21:47:16 -- dd/common.sh@156 -- # liburing_in_use=1 00:24:56.573 21:47:16 -- dd/common.sh@157 -- # return 0 00:24:56.573 21:47:16 -- dd/dd.sh@15 -- # (( liburing_in_use == 0 && SPDK_TEST_URING == 1 )) 00:24:56.573 21:47:16 -- dd/dd.sh@20 -- # run_test spdk_dd_basic_rw /home/vagrant/spdk_repo/spdk/test/dd/basic_rw.sh 0000:00:06.0 00:24:56.573 21:47:16 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:24:56.573 21:47:16 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:24:56.573 21:47:16 -- common/autotest_common.sh@10 -- # set +x 00:24:56.573 ************************************ 00:24:56.573 START TEST spdk_dd_basic_rw 00:24:56.573 ************************************ 00:24:56.573 21:47:16 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/dd/basic_rw.sh 0000:00:06.0 00:24:56.573 * Looking for test storage... 00:24:56.573 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:24:56.573 21:47:17 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:24:56.573 21:47:17 -- common/autotest_common.sh@1690 -- # lcov --version 00:24:56.573 21:47:17 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:24:56.832 21:47:17 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:24:56.832 21:47:17 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:24:56.832 21:47:17 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:24:56.832 21:47:17 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:24:56.832 21:47:17 -- scripts/common.sh@335 -- # IFS=.-: 00:24:56.832 21:47:17 -- scripts/common.sh@335 -- # read -ra ver1 00:24:56.832 21:47:17 -- scripts/common.sh@336 -- # IFS=.-: 00:24:56.832 21:47:17 -- scripts/common.sh@336 -- # read -ra ver2 00:24:56.832 21:47:17 -- scripts/common.sh@337 -- # local 'op=<' 00:24:56.832 21:47:17 -- scripts/common.sh@339 -- # ver1_l=2 00:24:56.832 21:47:17 -- scripts/common.sh@340 -- # ver2_l=1 00:24:56.832 21:47:17 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:24:56.832 21:47:17 -- scripts/common.sh@343 -- # case "$op" in 00:24:56.832 21:47:17 -- scripts/common.sh@344 -- # : 1 00:24:56.832 21:47:17 -- scripts/common.sh@363 -- # (( v = 0 )) 00:24:56.832 21:47:17 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:56.832 21:47:17 -- scripts/common.sh@364 -- # decimal 1 00:24:56.832 21:47:17 -- scripts/common.sh@352 -- # local d=1 00:24:56.832 21:47:17 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:56.832 21:47:17 -- scripts/common.sh@354 -- # echo 1 00:24:56.832 21:47:17 -- scripts/common.sh@364 -- # ver1[v]=1 00:24:56.832 21:47:17 -- scripts/common.sh@365 -- # decimal 2 00:24:56.832 21:47:17 -- scripts/common.sh@352 -- # local d=2 00:24:56.832 21:47:17 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:56.832 21:47:17 -- scripts/common.sh@354 -- # echo 2 00:24:56.832 21:47:17 -- scripts/common.sh@365 -- # ver2[v]=2 00:24:56.832 21:47:17 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:24:56.832 21:47:17 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:24:56.832 21:47:17 -- scripts/common.sh@367 -- # return 0 00:24:56.832 21:47:17 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:56.832 21:47:17 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:24:56.832 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:56.832 --rc genhtml_branch_coverage=1 00:24:56.832 --rc genhtml_function_coverage=1 00:24:56.832 --rc genhtml_legend=1 00:24:56.832 --rc geninfo_all_blocks=1 00:24:56.832 --rc geninfo_unexecuted_blocks=1 00:24:56.832 00:24:56.832 ' 00:24:56.832 21:47:17 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:24:56.832 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:56.832 --rc genhtml_branch_coverage=1 00:24:56.832 --rc genhtml_function_coverage=1 00:24:56.832 --rc genhtml_legend=1 00:24:56.832 --rc geninfo_all_blocks=1 00:24:56.832 --rc geninfo_unexecuted_blocks=1 00:24:56.832 00:24:56.832 ' 00:24:56.832 21:47:17 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:24:56.832 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:56.832 --rc genhtml_branch_coverage=1 00:24:56.832 --rc genhtml_function_coverage=1 00:24:56.832 --rc genhtml_legend=1 00:24:56.832 --rc geninfo_all_blocks=1 00:24:56.832 --rc geninfo_unexecuted_blocks=1 00:24:56.832 00:24:56.832 ' 00:24:56.832 21:47:17 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:24:56.832 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:56.833 --rc genhtml_branch_coverage=1 00:24:56.833 --rc genhtml_function_coverage=1 00:24:56.833 --rc genhtml_legend=1 00:24:56.833 --rc geninfo_all_blocks=1 00:24:56.833 --rc geninfo_unexecuted_blocks=1 00:24:56.833 00:24:56.833 ' 00:24:56.833 21:47:17 -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:24:56.833 21:47:17 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:56.833 21:47:17 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:56.833 21:47:17 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:56.833 21:47:17 -- paths/export.sh@2 -- # 
PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:24:56.833 21:47:17 -- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:24:56.833 21:47:17 -- paths/export.sh@4 -- # PATH=/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:24:56.833 21:47:17 -- paths/export.sh@5 -- # PATH=/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:24:56.833 21:47:17 -- paths/export.sh@6 -- # export PATH 00:24:56.833 21:47:17 -- paths/export.sh@7 -- # echo 
/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:24:56.833 21:47:17 -- dd/basic_rw.sh@80 -- # trap cleanup EXIT 00:24:56.833 21:47:17 -- dd/basic_rw.sh@82 -- # nvmes=("$@") 00:24:56.833 21:47:17 -- dd/basic_rw.sh@83 -- # nvme0=Nvme0 00:24:56.833 21:47:17 -- dd/basic_rw.sh@83 -- # nvme0_pci=0000:00:06.0 00:24:56.833 21:47:17 -- dd/basic_rw.sh@83 -- # bdev0=Nvme0n1 00:24:56.833 21:47:17 -- dd/basic_rw.sh@85 -- # method_bdev_nvme_attach_controller_0=(['name']='Nvme0' ['traddr']='0000:00:06.0' ['trtype']='pcie') 00:24:56.833 21:47:17 -- dd/basic_rw.sh@85 -- # declare -A method_bdev_nvme_attach_controller_0 00:24:56.833 21:47:17 -- dd/basic_rw.sh@91 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:24:56.833 21:47:17 -- dd/basic_rw.sh@92 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:24:56.833 21:47:17 -- dd/basic_rw.sh@93 -- # get_native_nvme_bs 0000:00:06.0 00:24:56.833 21:47:17 -- dd/common.sh@124 -- # local pci=0000:00:06.0 lbaf id 00:24:56.833 21:47:17 -- dd/common.sh@126 -- # mapfile -t id 00:24:56.833 21:47:17 -- dd/common.sh@126 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:pcie traddr:0000:00:06.0' 00:24:57.093 21:47:17 -- dd/common.sh@129 -- # [[ ===================================================== NVMe Controller at 0000:00:06.0 [1b36:0010] ===================================================== Controller Capabilities/Features ================================ Vendor ID: 1b36 Subsystem Vendor ID: 1af4 Serial Number: 12340 Model Number: QEMU NVMe Ctrl Firmware Version: 8.0.0 Recommended Arb Burst: 6 IEEE OUI Identifier: 00 54 52 Multi-path I/O May have multiple subsystem ports: No May have multiple controllers: No Associated with SR-IOV VF: No Max Data Transfer Size: 524288 Max Number of Namespaces: 256 Max Number of I/O Queues: 64 NVMe Specification Version (VS): 1.4 NVMe Specification Version (Identify): 1.4 Maximum Queue Entries: 2048 Contiguous Queues Required: Yes Arbitration Mechanisms Supported Weighted Round Robin: Not Supported Vendor Specific: Not Supported Reset Timeout: 7500 ms Doorbell Stride: 4 bytes NVM Subsystem Reset: Not Supported Command Sets Supported NVM Command Set: Supported Boot Partition: Not Supported Memory Page Size Minimum: 4096 bytes Memory Page Size Maximum: 65536 bytes Persistent Memory Region: Not Supported Optional Asynchronous Events Supported Namespace Attribute Notices: Supported Firmware Activation Notices: Not Supported ANA Change Notices: Not Supported PLE Aggregate Log Change Notices: Not Supported LBA Status Info Alert Notices: Not Supported EGE Aggregate Log Change Notices: Not Supported Normal NVM Subsystem Shutdown event: Not Supported Zone Descriptor Change Notices: Not Supported Discovery Log Change Notices: Not Supported Controller Attributes 128-bit Host Identifier: Not Supported Non-Operational Permissive Mode: Not Supported 
NVM Sets: Not Supported Read Recovery Levels: Not Supported Endurance Groups: Not Supported Predictable Latency Mode: Not Supported Traffic Based Keep ALive: Not Supported Namespace Granularity: Not Supported SQ Associations: Not Supported UUID List: Not Supported Multi-Domain Subsystem: Not Supported Fixed Capacity Management: Not Supported Variable Capacity Management: Not Supported Delete Endurance Group: Not Supported Delete NVM Set: Not Supported Extended LBA Formats Supported: Supported Flexible Data Placement Supported: Not Supported Controller Memory Buffer Support ================================ Supported: No Persistent Memory Region Support ================================ Supported: No Admin Command Set Attributes ============================ Security Send/Receive: Not Supported Format NVM: Supported Firmware Activate/Download: Not Supported Namespace Management: Supported Device Self-Test: Not Supported Directives: Supported NVMe-MI: Not Supported Virtualization Management: Not Supported Doorbell Buffer Config: Supported Get LBA Status Capability: Not Supported Command & Feature Lockdown Capability: Not Supported Abort Command Limit: 4 Async Event Request Limit: 4 Number of Firmware Slots: N/A Firmware Slot 1 Read-Only: N/A Firmware Activation Without Reset: N/A Multiple Update Detection Support: N/A Firmware Update Granularity: No Information Provided Per-Namespace SMART Log: Yes Asymmetric Namespace Access Log Page: Not Supported Subsystem NQN: nqn.2019-08.org.qemu:12340 Command Effects Log Page: Supported Get Log Page Extended Data: Supported Telemetry Log Pages: Not Supported Persistent Event Log Pages: Not Supported Supported Log Pages Log Page: May Support Commands Supported & Effects Log Page: Not Supported Feature Identifiers & Effects Log Page:May Support NVMe-MI Commands & Effects Log Page: May Support Data Area 4 for Telemetry Log: Not Supported Error Log Page Entries Supported: 1 Keep Alive: Not Supported NVM Command Set Attributes ========================== Submission Queue Entry Size Max: 64 Min: 64 Completion Queue Entry Size Max: 16 Min: 16 Number of Namespaces: 256 Compare Command: Supported Write Uncorrectable Command: Not Supported Dataset Management Command: Supported Write Zeroes Command: Supported Set Features Save Field: Supported Reservations: Not Supported Timestamp: Supported Copy: Supported Volatile Write Cache: Present Atomic Write Unit (Normal): 1 Atomic Write Unit (PFail): 1 Atomic Compare & Write Unit: 1 Fused Compare & Write: Not Supported Scatter-Gather List SGL Command Set: Supported SGL Keyed: Not Supported SGL Bit Bucket Descriptor: Not Supported SGL Metadata Pointer: Not Supported Oversized SGL: Not Supported SGL Metadata Address: Not Supported SGL Offset: Not Supported Transport SGL Data Block: Not Supported Replay Protected Memory Block: Not Supported Firmware Slot Information ========================= Active slot: 1 Slot 1 Firmware Revision: 1.0 Commands Supported and Effects ============================== Admin Commands -------------- Delete I/O Submission Queue (00h): Supported Create I/O Submission Queue (01h): Supported Get Log Page (02h): Supported Delete I/O Completion Queue (04h): Supported Create I/O Completion Queue (05h): Supported Identify (06h): Supported Abort (08h): Supported Set Features (09h): Supported Get Features (0Ah): Supported Asynchronous Event Request (0Ch): Supported Namespace Attachment (15h): Supported NS-Inventory-Change Directive Send (19h): Supported Directive Receive (1Ah): Supported Virtualization 
Management (1Ch): Supported Doorbell Buffer Config (7Ch): Supported Format NVM (80h): Supported LBA-Change I/O Commands ------------ Flush (00h): Supported LBA-Change Write (01h): Supported LBA-Change Read (02h): Supported Compare (05h): Supported Write Zeroes (08h): Supported LBA-Change Dataset Management (09h): Supported LBA-Change Unknown (0Ch): Supported Unknown (12h): Supported Copy (19h): Supported LBA-Change Unknown (1Dh): Supported LBA-Change Error Log ========= Arbitration =========== Arbitration Burst: no limit Power Management ================ Number of Power States: 1 Current Power State: Power State #0 Power State #0: Max Power: 25.00 W Non-Operational State: Operational Entry Latency: 16 microseconds Exit Latency: 4 microseconds Relative Read Throughput: 0 Relative Read Latency: 0 Relative Write Throughput: 0 Relative Write Latency: 0 Idle Power: Not Reported Active Power: Not Reported Non-Operational Permissive Mode: Not Supported Health Information ================== Critical Warnings: Available Spare Space: OK Temperature: OK Device Reliability: OK Read Only: No Volatile Memory Backup: OK Current Temperature: 323 Kelvin (50 Celsius) Temperature Threshold: 343 Kelvin (70 Celsius) Available Spare: 0% Available Spare Threshold: 0% Life Percentage Used: 0% Data Units Read: 107 Data Units Written: 7 Host Read Commands: 2289 Host Write Commands: 109 Controller Busy Time: 0 minutes Power Cycles: 0 Power On Hours: 0 hours Unsafe Shutdowns: 0 Unrecoverable Media Errors: 0 Lifetime Error Log Entries: 0 Warning Temperature Time: 0 minutes Critical Temperature Time: 0 minutes Number of Queues ================ Number of I/O Submission Queues: 64 Number of I/O Completion Queues: 64 ZNS Specific Controller Data ============================ Zone Append Size Limit: 0 Active Namespaces ================= Namespace ID:1 Error Recovery Timeout: Unlimited Command Set Identifier: NVM (00h) Deallocate: Supported Deallocated/Unwritten Error: Supported Deallocated Read Value: All 0x00 Deallocate in Write Zeroes: Not Supported Deallocated Guard Field: 0xFFFF Flush: Supported Reservation: Not Supported Namespace Sharing Capabilities: Private Size (in LBAs): 1310720 (5GiB) Capacity (in LBAs): 1310720 (5GiB) Utilization (in LBAs): 1310720 (5GiB) Thin Provisioning: Not Supported Per-NS Atomic Units: No Maximum Single Source Range Length: 128 Maximum Copy Length: 128 Maximum Source Range Count: 128 NGUID/EUI64 Never Reused: No Namespace Write Protected: No Number of LBA Formats: 8 Current LBA Format: LBA Format #04 LBA Format #00: Data Size: 512 Metadata Size: 0 LBA Format #01: Data Size: 512 Metadata Size: 8 LBA Format #02: Data Size: 512 Metadata Size: 16 LBA Format #03: Data Size: 512 Metadata Size: 64 LBA Format #04: Data Size: 4096 Metadata Size: 0 LBA Format #05: Data Size: 4096 Metadata Size: 8 LBA Format #06: Data Size: 4096 Metadata Size: 16 LBA Format #07: Data Size: 4096 Metadata Size: 64 =~ Current LBA Format: *LBA Format #([0-9]+) ]] 00:24:57.093 21:47:17 -- dd/common.sh@130 -- # lbaf=04 00:24:57.094 21:47:17 -- dd/common.sh@131 -- # [[ ===================================================== NVMe Controller at 0000:00:06.0 [1b36:0010] ===================================================== Controller Capabilities/Features ================================ Vendor ID: 1b36 Subsystem Vendor ID: 1af4 Serial Number: 12340 Model Number: QEMU NVMe Ctrl Firmware Version: 8.0.0 Recommended Arb Burst: 6 IEEE OUI Identifier: 00 54 52 Multi-path I/O May have multiple subsystem ports: No May have multiple 
controllers: No [remainder of the second spdk_nvme_identify dump elided; it repeats the first dump above verbatim] Per-NS Atomic Units: No Maximum Single Source Range
Length: 128 Maximum Copy Length: 128 Maximum Source Range Count: 128 NGUID/EUI64 Never Reused: No Namespace Write Protected: No Number of LBA Formats: 8 Current LBA Format: LBA Format #04 LBA Format #00: Data Size: 512 Metadata Size: 0 LBA Format #01: Data Size: 512 Metadata Size: 8 LBA Format #02: Data Size: 512 Metadata Size: 16 LBA Format #03: Data Size: 512 Metadata Size: 64 LBA Format #04: Data Size: 4096 Metadata Size: 0 LBA Format #05: Data Size: 4096 Metadata Size: 8 LBA Format #06: Data Size: 4096 Metadata Size: 16 LBA Format #07: Data Size: 4096 Metadata Size: 64 =~ LBA Format #04: Data Size: *([0-9]+) ]] 00:24:57.094 21:47:17 -- dd/common.sh@132 -- # lbaf=4096 00:24:57.094 21:47:17 -- dd/common.sh@134 -- # echo 4096 00:24:57.094 21:47:17 -- dd/basic_rw.sh@93 -- # native_bs=4096 00:24:57.094 21:47:17 -- dd/basic_rw.sh@96 -- # : 00:24:57.094 21:47:17 -- dd/basic_rw.sh@96 -- # run_test dd_bs_lt_native_bs NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:24:57.094 21:47:17 -- dd/basic_rw.sh@96 -- # gen_conf 00:24:57.094 21:47:17 -- dd/common.sh@31 -- # xtrace_disable 00:24:57.094 21:47:17 -- common/autotest_common.sh@10 -- # set +x 00:24:57.094 21:47:17 -- common/autotest_common.sh@1087 -- # '[' 8 -le 1 ']' 00:24:57.094 21:47:17 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:24:57.094 21:47:17 -- common/autotest_common.sh@10 -- # set +x 00:24:57.094 ************************************ 00:24:57.094 START TEST dd_bs_lt_native_bs 00:24:57.094 ************************************ 00:24:57.094 21:47:17 -- common/autotest_common.sh@1114 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:24:57.094 21:47:17 -- common/autotest_common.sh@650 -- # local es=0 00:24:57.094 21:47:17 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:24:57.094 { 00:24:57.094 "subsystems": [ 00:24:57.094 { 00:24:57.094 "subsystem": "bdev", 00:24:57.094 "config": [ 00:24:57.094 { 00:24:57.094 "params": { 00:24:57.094 "trtype": "pcie", 00:24:57.094 "traddr": "0000:00:06.0", 00:24:57.094 "name": "Nvme0" 00:24:57.094 }, 00:24:57.094 "method": "bdev_nvme_attach_controller" 00:24:57.094 }, 00:24:57.094 { 00:24:57.094 "method": "bdev_wait_for_examine" 00:24:57.094 } 00:24:57.094 ] 00:24:57.094 } 00:24:57.094 ] 00:24:57.094 } 00:24:57.094 21:47:17 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:24:57.094 21:47:17 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:57.094 21:47:17 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:24:57.094 21:47:17 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:57.094 21:47:17 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:24:57.094 21:47:17 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:57.094 21:47:17 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:24:57.094 21:47:17 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:24:57.094 21:47:17 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:24:57.094 [2024-12-06 21:47:17.489802] Starting SPDK v24.01.1-pre git 
sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:24:57.094 [2024-12-06 21:47:17.489991] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87960 ] 00:24:57.352 [2024-12-06 21:47:17.665949] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:57.611 [2024-12-06 21:47:17.904434] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:57.869 [2024-12-06 21:47:18.201253] spdk_dd.c:1145:dd_run: *ERROR*: --bs value cannot be less than input (1) neither output (4096) native block size 00:24:57.869 [2024-12-06 21:47:18.201355] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:24:58.128 [2024-12-06 21:47:18.607304] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:24:58.695 21:47:18 -- common/autotest_common.sh@653 -- # es=234 00:24:58.695 21:47:18 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:24:58.695 21:47:18 -- common/autotest_common.sh@662 -- # es=106 00:24:58.695 21:47:18 -- common/autotest_common.sh@663 -- # case "$es" in 00:24:58.695 21:47:18 -- common/autotest_common.sh@670 -- # es=1 00:24:58.695 21:47:18 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:24:58.695 00:24:58.695 real 0m1.534s 00:24:58.695 user 0m1.240s 00:24:58.695 sys 0m0.208s 00:24:58.695 21:47:18 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:24:58.695 21:47:18 -- common/autotest_common.sh@10 -- # set +x 00:24:58.695 ************************************ 00:24:58.695 END TEST dd_bs_lt_native_bs 00:24:58.695 ************************************ 00:24:58.695 21:47:18 -- dd/basic_rw.sh@103 -- # run_test dd_rw basic_rw 4096 00:24:58.695 21:47:18 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:24:58.695 21:47:18 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:24:58.695 21:47:18 -- common/autotest_common.sh@10 -- # set +x 00:24:58.695 ************************************ 00:24:58.695 START TEST dd_rw 00:24:58.695 ************************************ 00:24:58.695 21:47:19 -- common/autotest_common.sh@1114 -- # basic_rw 4096 00:24:58.695 21:47:19 -- dd/basic_rw.sh@11 -- # local native_bs=4096 00:24:58.695 21:47:19 -- dd/basic_rw.sh@12 -- # local count size 00:24:58.695 21:47:19 -- dd/basic_rw.sh@13 -- # local qds bss 00:24:58.695 21:47:19 -- dd/basic_rw.sh@15 -- # qds=(1 64) 00:24:58.695 21:47:19 -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:24:58.695 21:47:19 -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:24:58.695 21:47:19 -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:24:58.695 21:47:19 -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:24:58.695 21:47:19 -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:24:58.695 21:47:19 -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:24:58.695 21:47:19 -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:24:58.695 21:47:19 -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:24:58.695 21:47:19 -- dd/basic_rw.sh@23 -- # count=15 00:24:58.695 21:47:19 -- dd/basic_rw.sh@24 -- # count=15 00:24:58.695 21:47:19 -- dd/basic_rw.sh@25 -- # size=61440 00:24:58.695 21:47:19 -- dd/basic_rw.sh@27 -- # gen_bytes 61440 00:24:58.695 21:47:19 -- dd/common.sh@98 -- # xtrace_disable 00:24:58.695 21:47:19 -- common/autotest_common.sh@10 -- # set +x 00:24:59.263 21:47:19 -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 
--ob=Nvme0n1 --bs=4096 --qd=1 --json /dev/fd/62 00:24:59.263 21:47:19 -- dd/basic_rw.sh@30 -- # gen_conf 00:24:59.263 21:47:19 -- dd/common.sh@31 -- # xtrace_disable 00:24:59.263 21:47:19 -- common/autotest_common.sh@10 -- # set +x 00:24:59.263 { 00:24:59.263 "subsystems": [ 00:24:59.263 { 00:24:59.263 "subsystem": "bdev", 00:24:59.263 "config": [ 00:24:59.263 { 00:24:59.263 "params": { 00:24:59.263 "trtype": "pcie", 00:24:59.263 "traddr": "0000:00:06.0", 00:24:59.263 "name": "Nvme0" 00:24:59.263 }, 00:24:59.263 "method": "bdev_nvme_attach_controller" 00:24:59.263 }, 00:24:59.263 { 00:24:59.263 "method": "bdev_wait_for_examine" 00:24:59.263 } 00:24:59.263 ] 00:24:59.263 } 00:24:59.263 ] 00:24:59.263 } 00:24:59.263 [2024-12-06 21:47:19.608947] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:24:59.263 [2024-12-06 21:47:19.609056] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88003 ] 00:24:59.263 [2024-12-06 21:47:19.758230] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:59.522 [2024-12-06 21:47:19.907946] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:59.781  [2024-12-06T21:47:21.216Z] Copying: 60/60 [kB] (average 19 MBps) 00:25:00.719 00:25:00.719 21:47:21 -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=4096 --qd=1 --count=15 --json /dev/fd/62 00:25:00.719 21:47:21 -- dd/basic_rw.sh@37 -- # gen_conf 00:25:00.719 21:47:21 -- dd/common.sh@31 -- # xtrace_disable 00:25:00.719 21:47:21 -- common/autotest_common.sh@10 -- # set +x 00:25:00.719 { 00:25:00.719 "subsystems": [ 00:25:00.719 { 00:25:00.720 "subsystem": "bdev", 00:25:00.720 "config": [ 00:25:00.720 { 00:25:00.720 "params": { 00:25:00.720 "trtype": "pcie", 00:25:00.720 "traddr": "0000:00:06.0", 00:25:00.720 "name": "Nvme0" 00:25:00.720 }, 00:25:00.720 "method": "bdev_nvme_attach_controller" 00:25:00.720 }, 00:25:00.720 { 00:25:00.720 "method": "bdev_wait_for_examine" 00:25:00.720 } 00:25:00.720 ] 00:25:00.720 } 00:25:00.720 ] 00:25:00.720 } 00:25:00.720 [2024-12-06 21:47:21.183794] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
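A note on the numbers in the dd_rw setup traced above. The test derives its block-size sweep from the native block size that spdk_nvme_identify reported (4096, from LBA Format #04) by shifting it left 0, 1 and 2 bits, and each pass copies a fixed block count, so gen_bytes prepares count * bs bytes. A minimal sketch of that arithmetic, reconstructed from the trace rather than copied from basic_rw.sh:

    native_bs=4096                        # 'LBA Format #04: Data Size: 4096' above
    bss=()
    for bs in {0..2}; do
        bss+=( $(( native_bs << bs )) )   # 4096 8192 16384
    done
    echo $(( 15 * bss[0] ))               # 61440, the first pass's gen_bytes size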
00:25:00.720 [2024-12-06 21:47:21.183974] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88022 ] 00:25:00.979 [2024-12-06 21:47:21.354221] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:01.239 [2024-12-06 21:47:21.512399] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:01.500  [2024-12-06T21:47:22.565Z] Copying: 60/60 [kB] (average 19 MBps) 00:25:02.068 00:25:02.068 21:47:22 -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:25:02.068 21:47:22 -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 61440 00:25:02.068 21:47:22 -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:25:02.068 21:47:22 -- dd/common.sh@11 -- # local nvme_ref= 00:25:02.068 21:47:22 -- dd/common.sh@12 -- # local size=61440 00:25:02.068 21:47:22 -- dd/common.sh@14 -- # local bs=1048576 00:25:02.068 21:47:22 -- dd/common.sh@15 -- # local count=1 00:25:02.068 21:47:22 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:25:02.068 21:47:22 -- dd/common.sh@18 -- # gen_conf 00:25:02.068 21:47:22 -- dd/common.sh@31 -- # xtrace_disable 00:25:02.068 21:47:22 -- common/autotest_common.sh@10 -- # set +x 00:25:02.327 { 00:25:02.327 "subsystems": [ 00:25:02.328 { 00:25:02.328 "subsystem": "bdev", 00:25:02.328 "config": [ 00:25:02.328 { 00:25:02.328 "params": { 00:25:02.328 "trtype": "pcie", 00:25:02.328 "traddr": "0000:00:06.0", 00:25:02.328 "name": "Nvme0" 00:25:02.328 }, 00:25:02.328 "method": "bdev_nvme_attach_controller" 00:25:02.328 }, 00:25:02.328 { 00:25:02.328 "method": "bdev_wait_for_examine" 00:25:02.328 } 00:25:02.328 ] 00:25:02.328 } 00:25:02.328 ] 00:25:02.328 } 00:25:02.328 [2024-12-06 21:47:22.618756] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
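Each cell of the sweep follows the shape just traced: write the generated dd.dump0 to the Nvme0n1 bdev, read the same range back into dd.dump1, let diff -q decide pass or fail, then zero the head of the namespace so the next pass starts from a known state. A condensed sketch of one cycle, assuming spdk_dd is on PATH and $conf holds the bdev JSON shown in the trace (the harness itself hands the JSON over an anonymous descriptor such as /dev/fd/62):

    spdk_dd --if=dd.dump0 --ob=Nvme0n1 --bs=4096 --qd=1 --json <(printf '%s' "$conf")
    spdk_dd --ib=Nvme0n1 --of=dd.dump1 --bs=4096 --qd=1 --count=15 --json <(printf '%s' "$conf")
    diff -q dd.dump0 dd.dump1                                   # byte-for-byte verify
    spdk_dd --if=/dev/zero --ob=Nvme0n1 --bs=1048576 --count=1 --json <(printf '%s' "$conf")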
00:25:02.328 [2024-12-06 21:47:22.618909] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88048 ] 00:25:02.328 [2024-12-06 21:47:22.785972] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:02.588 [2024-12-06 21:47:22.936782] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:02.848  [2024-12-06T21:47:24.283Z] Copying: 1024/1024 [kB] (average 500 MBps) 00:25:03.786 00:25:03.786 21:47:24 -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:25:03.786 21:47:24 -- dd/basic_rw.sh@23 -- # count=15 00:25:03.786 21:47:24 -- dd/basic_rw.sh@24 -- # count=15 00:25:03.786 21:47:24 -- dd/basic_rw.sh@25 -- # size=61440 00:25:03.786 21:47:24 -- dd/basic_rw.sh@27 -- # gen_bytes 61440 00:25:03.786 21:47:24 -- dd/common.sh@98 -- # xtrace_disable 00:25:03.786 21:47:24 -- common/autotest_common.sh@10 -- # set +x 00:25:04.355 21:47:24 -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=4096 --qd=64 --json /dev/fd/62 00:25:04.356 21:47:24 -- dd/basic_rw.sh@30 -- # gen_conf 00:25:04.356 21:47:24 -- dd/common.sh@31 -- # xtrace_disable 00:25:04.356 21:47:24 -- common/autotest_common.sh@10 -- # set +x 00:25:04.356 { 00:25:04.356 "subsystems": [ 00:25:04.356 { 00:25:04.356 "subsystem": "bdev", 00:25:04.356 "config": [ 00:25:04.356 { 00:25:04.356 "params": { 00:25:04.356 "trtype": "pcie", 00:25:04.356 "traddr": "0000:00:06.0", 00:25:04.356 "name": "Nvme0" 00:25:04.356 }, 00:25:04.356 "method": "bdev_nvme_attach_controller" 00:25:04.356 }, 00:25:04.356 { 00:25:04.356 "method": "bdev_wait_for_examine" 00:25:04.356 } 00:25:04.356 ] 00:25:04.356 } 00:25:04.356 ] 00:25:04.356 } 00:25:04.356 [2024-12-06 21:47:24.748588] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
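The JSON blob repeated before every spdk_dd invocation is the same two-entry bdev config, flattened by the xtrace output. Reflowed for readability, content unchanged:

    {
      "subsystems": [
        {
          "subsystem": "bdev",
          "config": [
            {
              "params": { "trtype": "pcie", "traddr": "0000:00:06.0", "name": "Nvme0" },
              "method": "bdev_nvme_attach_controller"
            },
            { "method": "bdev_wait_for_examine" }
          ]
        }
      ]
    }

It attaches the QEMU NVMe controller at PCIe address 0000:00:06.0 under the name Nvme0, then waits for bdev examination so Nvme0n1 is registered before dd I/O starts.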
00:25:04.356 [2024-12-06 21:47:24.748737] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88078 ] 00:25:04.615 [2024-12-06 21:47:24.917879] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:04.615 [2024-12-06 21:47:25.068170] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:04.874  [2024-12-06T21:47:26.306Z] Copying: 60/60 [kB] (average 58 MBps) 00:25:05.809 00:25:05.809 21:47:26 -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=4096 --qd=64 --count=15 --json /dev/fd/62 00:25:05.809 21:47:26 -- dd/basic_rw.sh@37 -- # gen_conf 00:25:05.809 21:47:26 -- dd/common.sh@31 -- # xtrace_disable 00:25:05.809 21:47:26 -- common/autotest_common.sh@10 -- # set +x 00:25:05.809 { 00:25:05.809 "subsystems": [ 00:25:05.809 { 00:25:05.810 "subsystem": "bdev", 00:25:05.810 "config": [ 00:25:05.810 { 00:25:05.810 "params": { 00:25:05.810 "trtype": "pcie", 00:25:05.810 "traddr": "0000:00:06.0", 00:25:05.810 "name": "Nvme0" 00:25:05.810 }, 00:25:05.810 "method": "bdev_nvme_attach_controller" 00:25:05.810 }, 00:25:05.810 { 00:25:05.810 "method": "bdev_wait_for_examine" 00:25:05.810 } 00:25:05.810 ] 00:25:05.810 } 00:25:05.810 ] 00:25:05.810 } 00:25:05.810 [2024-12-06 21:47:26.170016] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:25:05.810 [2024-12-06 21:47:26.170181] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88097 ] 00:25:06.070 [2024-12-06 21:47:26.340670] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:06.070 [2024-12-06 21:47:26.490852] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:06.328  [2024-12-06T21:47:27.768Z] Copying: 60/60 [kB] (average 58 MBps) 00:25:07.271 00:25:07.271 21:47:27 -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:25:07.271 21:47:27 -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 61440 00:25:07.271 21:47:27 -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:25:07.271 21:47:27 -- dd/common.sh@11 -- # local nvme_ref= 00:25:07.271 21:47:27 -- dd/common.sh@12 -- # local size=61440 00:25:07.271 21:47:27 -- dd/common.sh@14 -- # local bs=1048576 00:25:07.271 21:47:27 -- dd/common.sh@15 -- # local count=1 00:25:07.271 21:47:27 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:25:07.271 21:47:27 -- dd/common.sh@18 -- # gen_conf 00:25:07.271 21:47:27 -- dd/common.sh@31 -- # xtrace_disable 00:25:07.271 21:47:27 -- common/autotest_common.sh@10 -- # set +x 00:25:07.271 { 00:25:07.271 "subsystems": [ 00:25:07.271 { 00:25:07.271 "subsystem": "bdev", 00:25:07.271 "config": [ 00:25:07.271 { 00:25:07.271 "params": { 00:25:07.271 "trtype": "pcie", 00:25:07.271 "traddr": "0000:00:06.0", 00:25:07.271 "name": "Nvme0" 00:25:07.271 }, 00:25:07.271 "method": "bdev_nvme_attach_controller" 00:25:07.271 }, 00:25:07.271 { 00:25:07.271 "method": "bdev_wait_for_examine" 00:25:07.271 } 00:25:07.271 ] 00:25:07.271 } 00:25:07.271 ] 00:25:07.271 } 00:25:07.271 [2024-12-06 
21:47:27.738610] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:25:07.271 [2024-12-06 21:47:27.738733] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88123 ] 00:25:07.529 [2024-12-06 21:47:27.889557] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:07.786 [2024-12-06 21:47:28.046773] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:08.045  [2024-12-06T21:47:29.107Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:25:08.610 00:25:08.610 21:47:29 -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:25:08.610 21:47:29 -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:25:08.610 21:47:29 -- dd/basic_rw.sh@23 -- # count=7 00:25:08.610 21:47:29 -- dd/basic_rw.sh@24 -- # count=7 00:25:08.610 21:47:29 -- dd/basic_rw.sh@25 -- # size=57344 00:25:08.610 21:47:29 -- dd/basic_rw.sh@27 -- # gen_bytes 57344 00:25:08.610 21:47:29 -- dd/common.sh@98 -- # xtrace_disable 00:25:08.610 21:47:29 -- common/autotest_common.sh@10 -- # set +x 00:25:09.175 21:47:29 -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=8192 --qd=1 --json /dev/fd/62 00:25:09.175 21:47:29 -- dd/basic_rw.sh@30 -- # gen_conf 00:25:09.175 21:47:29 -- dd/common.sh@31 -- # xtrace_disable 00:25:09.175 21:47:29 -- common/autotest_common.sh@10 -- # set +x 00:25:09.175 { 00:25:09.175 "subsystems": [ 00:25:09.175 { 00:25:09.175 "subsystem": "bdev", 00:25:09.175 "config": [ 00:25:09.175 { 00:25:09.175 "params": { 00:25:09.175 "trtype": "pcie", 00:25:09.175 "traddr": "0000:00:06.0", 00:25:09.175 "name": "Nvme0" 00:25:09.175 }, 00:25:09.175 "method": "bdev_nvme_attach_controller" 00:25:09.175 }, 00:25:09.175 { 00:25:09.175 "method": "bdev_wait_for_examine" 00:25:09.175 } 00:25:09.175 ] 00:25:09.175 } 00:25:09.175 ] 00:25:09.175 } 00:25:09.175 [2024-12-06 21:47:29.634422] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
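Second block size of the sweep: 8192 is native_bs << 1, and the block count drops to 7, so the payload is 7 * 8192 = 57344 bytes, matching the gen_bytes 57344 trace above:

    echo $(( 7 * 8192 ))    # 57344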
00:25:09.175 [2024-12-06 21:47:29.634774] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88147 ] 00:25:09.433 [2024-12-06 21:47:29.785327] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:09.692 [2024-12-06 21:47:29.935396] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:09.951  [2024-12-06T21:47:31.441Z] Copying: 56/56 [kB] (average 27 MBps) 00:25:10.944 00:25:10.944 21:47:31 -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=8192 --qd=1 --count=7 --json /dev/fd/62 00:25:10.944 21:47:31 -- dd/basic_rw.sh@37 -- # gen_conf 00:25:10.944 21:47:31 -- dd/common.sh@31 -- # xtrace_disable 00:25:10.944 21:47:31 -- common/autotest_common.sh@10 -- # set +x 00:25:10.944 { 00:25:10.944 "subsystems": [ 00:25:10.944 { 00:25:10.944 "subsystem": "bdev", 00:25:10.944 "config": [ 00:25:10.944 { 00:25:10.944 "params": { 00:25:10.944 "trtype": "pcie", 00:25:10.944 "traddr": "0000:00:06.0", 00:25:10.944 "name": "Nvme0" 00:25:10.944 }, 00:25:10.944 "method": "bdev_nvme_attach_controller" 00:25:10.944 }, 00:25:10.944 { 00:25:10.944 "method": "bdev_wait_for_examine" 00:25:10.944 } 00:25:10.944 ] 00:25:10.944 } 00:25:10.944 ] 00:25:10.944 } 00:25:10.944 [2024-12-06 21:47:31.222315] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:25:10.944 [2024-12-06 21:47:31.222716] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88172 ] 00:25:10.944 [2024-12-06 21:47:31.392137] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:11.211 [2024-12-06 21:47:31.547892] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:11.473  [2024-12-06T21:47:32.909Z] Copying: 56/56 [kB] (average 27 MBps) 00:25:12.412 00:25:12.412 21:47:32 -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:25:12.412 21:47:32 -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 57344 00:25:12.412 21:47:32 -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:25:12.412 21:47:32 -- dd/common.sh@11 -- # local nvme_ref= 00:25:12.412 21:47:32 -- dd/common.sh@12 -- # local size=57344 00:25:12.412 21:47:32 -- dd/common.sh@14 -- # local bs=1048576 00:25:12.412 21:47:32 -- dd/common.sh@15 -- # local count=1 00:25:12.412 21:47:32 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:25:12.412 21:47:32 -- dd/common.sh@18 -- # gen_conf 00:25:12.412 21:47:32 -- dd/common.sh@31 -- # xtrace_disable 00:25:12.412 21:47:32 -- common/autotest_common.sh@10 -- # set +x 00:25:12.412 { 00:25:12.412 "subsystems": [ 00:25:12.412 { 00:25:12.412 "subsystem": "bdev", 00:25:12.412 "config": [ 00:25:12.412 { 00:25:12.412 "params": { 00:25:12.412 "trtype": "pcie", 00:25:12.412 "traddr": "0000:00:06.0", 00:25:12.412 "name": "Nvme0" 00:25:12.412 }, 00:25:12.412 "method": "bdev_nvme_attach_controller" 00:25:12.412 }, 00:25:12.412 { 00:25:12.412 "method": "bdev_wait_for_examine" 00:25:12.412 } 00:25:12.412 ] 00:25:12.412 } 00:25:12.412 ] 00:25:12.412 } 00:25:12.412 [2024-12-06 
21:47:32.670789] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:25:12.413 [2024-12-06 21:47:32.670940] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88192 ] 00:25:12.413 [2024-12-06 21:47:32.841875] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:12.673 [2024-12-06 21:47:33.004367] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:12.931  [2024-12-06T21:47:34.363Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:25:13.866 00:25:13.866 21:47:34 -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:25:13.866 21:47:34 -- dd/basic_rw.sh@23 -- # count=7 00:25:13.866 21:47:34 -- dd/basic_rw.sh@24 -- # count=7 00:25:13.866 21:47:34 -- dd/basic_rw.sh@25 -- # size=57344 00:25:13.866 21:47:34 -- dd/basic_rw.sh@27 -- # gen_bytes 57344 00:25:13.866 21:47:34 -- dd/common.sh@98 -- # xtrace_disable 00:25:13.866 21:47:34 -- common/autotest_common.sh@10 -- # set +x 00:25:14.432 21:47:34 -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=8192 --qd=64 --json /dev/fd/62 00:25:14.432 21:47:34 -- dd/basic_rw.sh@30 -- # gen_conf 00:25:14.432 21:47:34 -- dd/common.sh@31 -- # xtrace_disable 00:25:14.432 21:47:34 -- common/autotest_common.sh@10 -- # set +x 00:25:14.432 { 00:25:14.432 "subsystems": [ 00:25:14.432 { 00:25:14.432 "subsystem": "bdev", 00:25:14.432 "config": [ 00:25:14.432 { 00:25:14.432 "params": { 00:25:14.432 "trtype": "pcie", 00:25:14.432 "traddr": "0000:00:06.0", 00:25:14.432 "name": "Nvme0" 00:25:14.432 }, 00:25:14.432 "method": "bdev_nvme_attach_controller" 00:25:14.432 }, 00:25:14.432 { 00:25:14.432 "method": "bdev_wait_for_examine" 00:25:14.432 } 00:25:14.432 ] 00:25:14.432 } 00:25:14.432 ] 00:25:14.432 } 00:25:14.432 [2024-12-06 21:47:34.758176] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
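The throughput averages are worth a glance here: at bs=8192 the qd=1 passes above report about 27 MBps, while the qd=64 passes that follow report about 54 MBps (the 4 KiB passes showed the same pattern, 19 versus 58 MBps). That is the expected direction, since a deeper queue keeps more I/Os in flight against the same payload, though with payloads this small the averages are coarse. The swept depths come from the qds array traced at dd/basic_rw.sh@15:

    qds=(1 64)    # queue depths exercised for every block size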
00:25:14.432 [2024-12-06 21:47:34.758524] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88222 ] 00:25:14.432 [2024-12-06 21:47:34.927428] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:14.690 [2024-12-06 21:47:35.077623] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:14.948  [2024-12-06T21:47:36.379Z] Copying: 56/56 [kB] (average 54 MBps) 00:25:15.882 00:25:15.882 21:47:36 -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=8192 --qd=64 --count=7 --json /dev/fd/62 00:25:15.882 21:47:36 -- dd/basic_rw.sh@37 -- # gen_conf 00:25:15.882 21:47:36 -- dd/common.sh@31 -- # xtrace_disable 00:25:15.882 21:47:36 -- common/autotest_common.sh@10 -- # set +x 00:25:15.882 { 00:25:15.882 "subsystems": [ 00:25:15.882 { 00:25:15.882 "subsystem": "bdev", 00:25:15.882 "config": [ 00:25:15.882 { 00:25:15.882 "params": { 00:25:15.882 "trtype": "pcie", 00:25:15.882 "traddr": "0000:00:06.0", 00:25:15.882 "name": "Nvme0" 00:25:15.882 }, 00:25:15.882 "method": "bdev_nvme_attach_controller" 00:25:15.882 }, 00:25:15.882 { 00:25:15.882 "method": "bdev_wait_for_examine" 00:25:15.882 } 00:25:15.882 ] 00:25:15.882 } 00:25:15.882 ] 00:25:15.882 } 00:25:15.882 [2024-12-06 21:47:36.185995] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:25:15.882 [2024-12-06 21:47:36.186151] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88247 ] 00:25:15.882 [2024-12-06 21:47:36.356485] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:16.141 [2024-12-06 21:47:36.512466] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:16.400  [2024-12-06T21:47:37.832Z] Copying: 56/56 [kB] (average 54 MBps) 00:25:17.335 00:25:17.335 21:47:37 -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:25:17.335 21:47:37 -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 57344 00:25:17.335 21:47:37 -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:25:17.335 21:47:37 -- dd/common.sh@11 -- # local nvme_ref= 00:25:17.335 21:47:37 -- dd/common.sh@12 -- # local size=57344 00:25:17.335 21:47:37 -- dd/common.sh@14 -- # local bs=1048576 00:25:17.335 21:47:37 -- dd/common.sh@15 -- # local count=1 00:25:17.335 21:47:37 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:25:17.335 21:47:37 -- dd/common.sh@18 -- # gen_conf 00:25:17.335 21:47:37 -- dd/common.sh@31 -- # xtrace_disable 00:25:17.335 21:47:37 -- common/autotest_common.sh@10 -- # set +x 00:25:17.335 { 00:25:17.335 "subsystems": [ 00:25:17.335 { 00:25:17.336 "subsystem": "bdev", 00:25:17.336 "config": [ 00:25:17.336 { 00:25:17.336 "params": { 00:25:17.336 "trtype": "pcie", 00:25:17.336 "traddr": "0000:00:06.0", 00:25:17.336 "name": "Nvme0" 00:25:17.336 }, 00:25:17.336 "method": "bdev_nvme_attach_controller" 00:25:17.336 }, 00:25:17.336 { 00:25:17.336 "method": "bdev_wait_for_examine" 00:25:17.336 } 00:25:17.336 ] 00:25:17.336 } 00:25:17.336 ] 00:25:17.336 } 00:25:17.336 [2024-12-06 
21:47:37.760068] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:25:17.336 [2024-12-06 21:47:37.760326] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88271 ] 00:25:17.594 [2024-12-06 21:47:37.910094] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:17.594 [2024-12-06 21:47:38.061511] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:17.853  [2024-12-06T21:47:39.313Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:25:18.816 00:25:18.816 21:47:39 -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:25:18.816 21:47:39 -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:25:18.816 21:47:39 -- dd/basic_rw.sh@23 -- # count=3 00:25:18.816 21:47:39 -- dd/basic_rw.sh@24 -- # count=3 00:25:18.816 21:47:39 -- dd/basic_rw.sh@25 -- # size=49152 00:25:18.816 21:47:39 -- dd/basic_rw.sh@27 -- # gen_bytes 49152 00:25:18.816 21:47:39 -- dd/common.sh@98 -- # xtrace_disable 00:25:18.816 21:47:39 -- common/autotest_common.sh@10 -- # set +x 00:25:19.076 21:47:39 -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=16384 --qd=1 --json /dev/fd/62 00:25:19.076 21:47:39 -- dd/basic_rw.sh@30 -- # gen_conf 00:25:19.076 21:47:39 -- dd/common.sh@31 -- # xtrace_disable 00:25:19.076 21:47:39 -- common/autotest_common.sh@10 -- # set +x 00:25:19.076 { 00:25:19.076 "subsystems": [ 00:25:19.076 { 00:25:19.076 "subsystem": "bdev", 00:25:19.076 "config": [ 00:25:19.076 { 00:25:19.076 "params": { 00:25:19.076 "trtype": "pcie", 00:25:19.076 "traddr": "0000:00:06.0", 00:25:19.076 "name": "Nvme0" 00:25:19.076 }, 00:25:19.076 "method": "bdev_nvme_attach_controller" 00:25:19.076 }, 00:25:19.076 { 00:25:19.076 "method": "bdev_wait_for_examine" 00:25:19.076 } 00:25:19.076 ] 00:25:19.076 } 00:25:19.076 ] 00:25:19.076 } 00:25:19.335 [2024-12-06 21:47:39.587266] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
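Final block size of the sweep: 16384 is native_bs << 2, with count=3, so gen_bytes prepares 3 * 16384 = 49152 bytes:

    echo $(( 3 * 16384 ))   # 49152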
00:25:19.335 [2024-12-06 21:47:39.587562] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88297 ] 00:25:19.335 [2024-12-06 21:47:39.737072] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:19.594 [2024-12-06 21:47:39.894048] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:19.853  [2024-12-06T21:47:41.288Z] Copying: 48/48 [kB] (average 46 MBps) 00:25:20.791 00:25:20.791 21:47:41 -- dd/basic_rw.sh@37 -- # gen_conf 00:25:20.791 21:47:41 -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=16384 --qd=1 --count=3 --json /dev/fd/62 00:25:20.791 21:47:41 -- dd/common.sh@31 -- # xtrace_disable 00:25:20.791 21:47:41 -- common/autotest_common.sh@10 -- # set +x 00:25:20.791 { 00:25:20.791 "subsystems": [ 00:25:20.791 { 00:25:20.791 "subsystem": "bdev", 00:25:20.791 "config": [ 00:25:20.791 { 00:25:20.791 "params": { 00:25:20.791 "trtype": "pcie", 00:25:20.791 "traddr": "0000:00:06.0", 00:25:20.791 "name": "Nvme0" 00:25:20.791 }, 00:25:20.791 "method": "bdev_nvme_attach_controller" 00:25:20.791 }, 00:25:20.791 { 00:25:20.791 "method": "bdev_wait_for_examine" 00:25:20.791 } 00:25:20.791 ] 00:25:20.791 } 00:25:20.791 ] 00:25:20.791 } 00:25:20.791 [2024-12-06 21:47:41.160046] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:25:20.791 [2024-12-06 21:47:41.160200] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88322 ] 00:25:21.051 [2024-12-06 21:47:41.328784] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:21.051 [2024-12-06 21:47:41.486904] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:21.310  [2024-12-06T21:47:42.746Z] Copying: 48/48 [kB] (average 46 MBps) 00:25:22.249 00:25:22.249 21:47:42 -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:25:22.249 21:47:42 -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 49152 00:25:22.249 21:47:42 -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:25:22.249 21:47:42 -- dd/common.sh@11 -- # local nvme_ref= 00:25:22.249 21:47:42 -- dd/common.sh@12 -- # local size=49152 00:25:22.249 21:47:42 -- dd/common.sh@14 -- # local bs=1048576 00:25:22.249 21:47:42 -- dd/common.sh@15 -- # local count=1 00:25:22.249 21:47:42 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:25:22.249 21:47:42 -- dd/common.sh@18 -- # gen_conf 00:25:22.249 21:47:42 -- dd/common.sh@31 -- # xtrace_disable 00:25:22.249 21:47:42 -- common/autotest_common.sh@10 -- # set +x 00:25:22.249 { 00:25:22.249 "subsystems": [ 00:25:22.249 { 00:25:22.249 "subsystem": "bdev", 00:25:22.249 "config": [ 00:25:22.249 { 00:25:22.249 "params": { 00:25:22.249 "trtype": "pcie", 00:25:22.249 "traddr": "0000:00:06.0", 00:25:22.249 "name": "Nvme0" 00:25:22.249 }, 00:25:22.249 "method": "bdev_nvme_attach_controller" 00:25:22.249 }, 00:25:22.249 { 00:25:22.249 "method": "bdev_wait_for_examine" 00:25:22.249 } 00:25:22.249 ] 00:25:22.249 } 00:25:22.249 ] 00:25:22.249 } 00:25:22.249 [2024-12-06 
21:47:42.678519] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:25:22.249 [2024-12-06 21:47:42.679043] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88343 ] 00:25:22.508 [2024-12-06 21:47:42.848630] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:22.508 [2024-12-06 21:47:43.001777] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:23.077  [2024-12-06T21:47:44.511Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:25:24.014 00:25:24.014 21:47:44 -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:25:24.014 21:47:44 -- dd/basic_rw.sh@23 -- # count=3 00:25:24.014 21:47:44 -- dd/basic_rw.sh@24 -- # count=3 00:25:24.014 21:47:44 -- dd/basic_rw.sh@25 -- # size=49152 00:25:24.014 21:47:44 -- dd/basic_rw.sh@27 -- # gen_bytes 49152 00:25:24.014 21:47:44 -- dd/common.sh@98 -- # xtrace_disable 00:25:24.014 21:47:44 -- common/autotest_common.sh@10 -- # set +x 00:25:24.273 21:47:44 -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=16384 --qd=64 --json /dev/fd/62 00:25:24.273 21:47:44 -- dd/basic_rw.sh@30 -- # gen_conf 00:25:24.273 21:47:44 -- dd/common.sh@31 -- # xtrace_disable 00:25:24.273 21:47:44 -- common/autotest_common.sh@10 -- # set +x 00:25:24.273 { 00:25:24.273 "subsystems": [ 00:25:24.273 { 00:25:24.273 "subsystem": "bdev", 00:25:24.273 "config": [ 00:25:24.273 { 00:25:24.273 "params": { 00:25:24.273 "trtype": "pcie", 00:25:24.273 "traddr": "0000:00:06.0", 00:25:24.273 "name": "Nvme0" 00:25:24.273 }, 00:25:24.273 "method": "bdev_nvme_attach_controller" 00:25:24.273 }, 00:25:24.273 { 00:25:24.273 "method": "bdev_wait_for_examine" 00:25:24.273 } 00:25:24.273 ] 00:25:24.273 } 00:25:24.273 ] 00:25:24.273 } 00:25:24.273 [2024-12-06 21:47:44.623587] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
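Each pass in this section is one iteration of the same round-trip that basic_rw.sh drives per block-size/queue-depth pair: write dd.dump0 to the Nvme0n1 bdev, read it back into dd.dump1, byte-compare the two files, then zero the bdev before the next pair. Condensed from the commands visible above (gen_conf as sketched earlier; spdk_dd path shortened for readability):

    bs=16384 qd=64                                 # the 48 kB passes above use bs=16384
    spdk_dd --if=test/dd/dd.dump0 --ob=Nvme0n1 --bs=$bs --qd=$qd --json <(gen_conf)
    spdk_dd --ib=Nvme0n1 --of=test/dd/dd.dump1 --bs=$bs --qd=$qd --count=3 --json <(gen_conf)
    diff -q test/dd/dd.dump0 test/dd/dd.dump1      # round-trip must be byte-identical
    # clear_nvme: overwrite the first MiB with zeroes between iterations
    spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json <(gen_conf)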
00:25:24.273 [2024-12-06 21:47:44.623909] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88372 ] 00:25:24.533 [2024-12-06 21:47:44.772935] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:24.533 [2024-12-06 21:47:44.926693] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:24.795  [2024-12-06T21:47:46.229Z] Copying: 48/48 [kB] (average 46 MBps) 00:25:25.732 00:25:25.732 21:47:45 -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=16384 --qd=64 --count=3 --json /dev/fd/62 00:25:25.732 21:47:45 -- dd/basic_rw.sh@37 -- # gen_conf 00:25:25.732 21:47:45 -- dd/common.sh@31 -- # xtrace_disable 00:25:25.732 21:47:45 -- common/autotest_common.sh@10 -- # set +x 00:25:25.732 { 00:25:25.732 "subsystems": [ 00:25:25.732 { 00:25:25.732 "subsystem": "bdev", 00:25:25.732 "config": [ 00:25:25.732 { 00:25:25.732 "params": { 00:25:25.732 "trtype": "pcie", 00:25:25.732 "traddr": "0000:00:06.0", 00:25:25.732 "name": "Nvme0" 00:25:25.732 }, 00:25:25.732 "method": "bdev_nvme_attach_controller" 00:25:25.732 }, 00:25:25.732 { 00:25:25.732 "method": "bdev_wait_for_examine" 00:25:25.732 } 00:25:25.732 ] 00:25:25.732 } 00:25:25.732 ] 00:25:25.732 } 00:25:25.732 [2024-12-06 21:47:46.018111] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:25:25.732 [2024-12-06 21:47:46.018385] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88397 ] 00:25:25.732 [2024-12-06 21:47:46.169840] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:25.991 [2024-12-06 21:47:46.318237] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:26.251  [2024-12-06T21:47:47.687Z] Copying: 48/48 [kB] (average 46 MBps) 00:25:27.190 00:25:27.190 21:47:47 -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:25:27.190 21:47:47 -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 49152 00:25:27.190 21:47:47 -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:25:27.190 21:47:47 -- dd/common.sh@11 -- # local nvme_ref= 00:25:27.190 21:47:47 -- dd/common.sh@12 -- # local size=49152 00:25:27.190 21:47:47 -- dd/common.sh@14 -- # local bs=1048576 00:25:27.190 21:47:47 -- dd/common.sh@15 -- # local count=1 00:25:27.190 21:47:47 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:25:27.190 21:47:47 -- dd/common.sh@18 -- # gen_conf 00:25:27.190 21:47:47 -- dd/common.sh@31 -- # xtrace_disable 00:25:27.190 21:47:47 -- common/autotest_common.sh@10 -- # set +x 00:25:27.190 { 00:25:27.190 "subsystems": [ 00:25:27.190 { 00:25:27.190 "subsystem": "bdev", 00:25:27.190 "config": [ 00:25:27.190 { 00:25:27.190 "params": { 00:25:27.190 "trtype": "pcie", 00:25:27.190 "traddr": "0000:00:06.0", 00:25:27.190 "name": "Nvme0" 00:25:27.190 }, 00:25:27.190 "method": "bdev_nvme_attach_controller" 00:25:27.190 }, 00:25:27.190 { 00:25:27.190 "method": "bdev_wait_for_examine" 00:25:27.190 } 00:25:27.190 ] 00:25:27.190 } 00:25:27.190 ] 00:25:27.190 } 00:25:27.190 [2024-12-06 
21:47:47.575580] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:25:27.190 [2024-12-06 21:47:47.575722] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88418 ] 00:25:27.450 [2024-12-06 21:47:47.725334] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:27.450 [2024-12-06 21:47:47.876561] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:27.709  [2024-12-06T21:47:49.143Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:25:28.646 00:25:28.646 00:25:28.646 real 0m30.007s 00:25:28.646 user 0m24.575s 00:25:28.646 sys 0m3.744s 00:25:28.646 21:47:49 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:25:28.646 ************************************ 00:25:28.646 END TEST dd_rw 00:25:28.646 ************************************ 00:25:28.646 21:47:49 -- common/autotest_common.sh@10 -- # set +x 00:25:28.646 21:47:49 -- dd/basic_rw.sh@104 -- # run_test dd_rw_offset basic_offset 00:25:28.646 21:47:49 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:25:28.646 21:47:49 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:25:28.646 21:47:49 -- common/autotest_common.sh@10 -- # set +x 00:25:28.646 ************************************ 00:25:28.646 START TEST dd_rw_offset 00:25:28.646 ************************************ 00:25:28.646 21:47:49 -- common/autotest_common.sh@1114 -- # basic_offset 00:25:28.646 21:47:49 -- dd/basic_rw.sh@52 -- # local count seek skip data data_check 00:25:28.646 21:47:49 -- dd/basic_rw.sh@54 -- # gen_bytes 4096 00:25:28.646 21:47:49 -- dd/common.sh@98 -- # xtrace_disable 00:25:28.646 21:47:49 -- common/autotest_common.sh@10 -- # set +x 00:25:28.646 21:47:49 -- dd/basic_rw.sh@55 -- # (( count = seek = skip = 1 )) 00:25:28.646 21:47:49 -- dd/basic_rw.sh@56 -- # 
data=aiwe8qzpb78exqumcnny43x8qecxpdoqkx3qsgzok4b4xqkzwdhoakw58ut2ff8ig3gmbsywrdvvs29i89e8jowfjo0rxfd7d4emfck7kltuiyhf06iwostjbxg2hx42sylmj7c8xmitlto57uvzn3hzzw5mlhkfmacm17yuqfkr7fx1ivehwewnsizg81qdmph3bdruzmtc5ssz6hiwgxglgu2mpbo8tuulogtji8e17bbqlnavx2jdq4n1vnuthujnxjxzag22oiuivnwjzae2kvc6w0732dgpmutgbp9cwe2lchsea2s1aalwfbjp5bwd04w5y9p1g59k823d8gvv6r8tfht8zgd5j0gq7ftwt7we1g4iiq36zhudptpuz2vy35upk29oek1tc87z6cn3pdk5mbyf3iy1yf1b1mcc5b4duivkh8hzil9xpn3og0tcqticrmu2n4ow6bhth30c249xge0xlcw0b2u2exmgqgxhkigazpsx1d2kx2y8zxx41zi2rma0cx1juvu60uwce5vldlaprp7jncjiatdeytftq2kvar7akgqeso6zf30neu311wfy4il8posvmoy34hjgnpuvn6r38anlsfuc3raif80x8423ccw4nogwqf53v2fxl5mtny9azxt21o7ek90epd0liu1rp2wh4hu3ft7ogbxdnad0cyxymlxgmeq0zcyf52jkvq4dwum39j6saemlpbk9a8rwri6ahs9xgn5fqjgkbhszyrk64523068j2z9ubh3o84c6uvyevtxlnfz0hgh4nb47hy1t282ux249lmaanerqc1iimcu0h4sj071m3z4kr332wjwtbv2stdxsstf8xwxf2ixjbev45jl0yekk9blqj2nncwdvyzol2zk2p3m9qps3x928wrf0aucosn95xcq1byayvqv4jdiqr0m8zz3djdkcwsps6iaulecb2gj2q0p5quctxdpm1n8gqxv8tfh2crs781663n70f9g1b8t2o1db0xrsnxob2e93tm7ieg4g83v9qqjkooahaboxavs71hwqz3j2punzv5y0bunwv1tqi7t7xzq80u3fl7fkz2ulfax4tvc5lyd4rfzmgcmfq323mk7t54qnmlrkvx6ag6rt1gze74edam5uoqu88s69fn513e2m8r52sn9tr60zgng9x13c46eqj4poqlf1yi4yrzz9ozeorowrqtxaaofop4yrkirmu0bmufg9fcdj9uvcuyiuy35xeh3g6chmanugcpd6df8wz309lpe97z0qmkflwu5cm76mzhiii84ltjwyod75ultbyeqyrjm19yh9fmn0zashy3eg3c1doctyz3a0c8wwcnsk6cn3bl0az72yidn0nn4ek9jw39njhsrl22wn7xatuolqrb1l6b1u7eb835i7qmik2h2cma6bgqhylnotbwcgtg9rbxr0k1hxgqw5c2q0jnnx022vd10wjwczcpx09jzmibz987htzxp7v6wbo20iaj57dduemhdgtju2xwlfbi30owo0ge1e52qhozv01wm8h8lwwcaqz63iodac9lb9ci3a6oom7pw2j95kol5ucpvml4m34cp71pg750n913ylnxkh3vspce3142b93cus9sqfu8mk51a9hjwek0ikx57itvui5h75ojp5t6qy0sz40b2p6mkhbintaai922ttkwosm6r091i746h778ci9tvlahggtw4dqlw1yw62crqvqhionsd6ae543u9hmudmv09h5lp4nyhbd8qu1dohokwjj8zrnwxbdt1z06ng1k69u86gqh44ecl9plybqu1tftykhyd4ypfh5chj4r0cly15n4p65eftdcblv4z2gtiqu159zsofe6daluqmhzy4wslqfg9t87ptz41rqdkxv6wgxu4kovunnp32cknbwqha2rgqik4xbgwtofsa5b3sscrv10qb8ahi4rqc746qes6l8k6a2decbs60w8yprkz1s9btoezduyd1jkpjus0fahadt3rgy2cojdsthqlhu8l36j9i1b2v6b4cx6jrgivpg21vxvzeu67wni4fo04e2w7oeg72614sn35qep71ze7kr3av6hakb836sahr7oscgptvywt4lsavqwuznabp6nd12777xl53kwvxmhn8nbw0dolf9kg77q7eufth5ua7axhfrhwyz6cqkzac4e0ea7vzwbbghav64sv8ksq3ws8xh0xrgwi7vzeoap1gxptzh918rcu4mu6qihb4nzmciin21h3wy1pmyuol0mhfpgfuuupi3mt28z2uyxokzbgn81mlbcs0c53ic92c6ejtuqvsgg9bbjhvn5wnq38l318rokgjnpvpe6mj51qxna3pj8g3v8scq43czv4fg7kbdf34w7j88e9ikyye09r65io8r7gv7gsbky62avawb2iirk5y2wtdgp7j199hk940po4c4ff23w1dsbx8x80k16ofox3l6uwzwpyjc7d2s4okgodo8py9373ph4fgp52rbppobz40gf46631thgw0xah2lbhuu71oqeudk6q3y4x6ai5sdyenckmqw56niooi0a50yb73yjbw6wlwdn5vdjrncosuxdwtvy78kbonj3u04l7wbgchreszebd0icstex6eej4hpqzvv5xfc4q8uq60h4kodsnde72ci82ovvt0ehzmuanbnkfc8p5q1bo56zypseui6jnk1lkwjopqk1u4ljsasbvb5igkcugxyhenuznlp7jznp30adroztxbb766vmze1p4us345m3rqkx8ox3eo7vq1sgva6k6dj8ezryfu3b8rokhtf9rxuup6ahh7zh6ldw8jjxezexuecq01kuq0gb6lwybympdfcmxq8vj7pnatv0asc0fqttacwer9iqez40mxdmruqfj0n8yrqs8vzy8xk33oml8s3l5fqqdtpztg9yzeyxa3otzpt4edca85z4wmuetgnf6ar3i65h1a413bxzie57ybdu1z5m8i0g5g4pgcep8l79l36779nuynkdfg6i7s0m53a9o7b3zjanh2kjfec6vak7zpklli47i0luguepbfmbn1m8wfoabuo01vd44kal07vnugam66plxhxyx62vbw51h97oipy6wh0mo8camkv6h8e4y8aacgjc3tb5inxanb3fo1izrdoe409huum6bpa9h3iefsl5nb1bh9ltbed19jnvix0hvaxnngherh1kkdd8j2527qwclospyu29bgb5hzw1vhrnmwc45bv2qy5q9r4gl158wr26nful39je8os9d475i9khzva1ili645bg32u0zsbt0262m0pnwx99we81z9cvvuus1zmcq9yd1fc8p6vq7727mj8x1p9teqi0ffm22dmah3f7igxw9d2gwih0svy1flok6cha15vjnwfqvdsxxyhpeuad97uabcgq5p5lji0k8giaia2pkplop6k4ixityritw7z2asmcsju8ma3ut6p2cjfe8laf0gjxdzap
gdcs5bg2hhjv53t9k0yxc2plvnxpkz1x039xt0mh34lhuiymzy2mt326ufaub6dt4xjeck1aoyiocwimcfgx2n8yq30bzr1p4m6sqrago4fvdw3v1pkxt0wkt9waeaesfams8knshzmsutlt13rhk5woy9lyl9t0u2z9eltjx8cetghyujuv8i4vwj8ih19kv8bamksizu58temz4qsc7fc0pugjksqafd1iy7xvrhfpasd158aost8q9fcctcj6izfug26kghwshfibjuk2wksycsf5tp69i6xkx66sprwwe29ln1c8otqc6e0o0lfnrgfm8ol7bynrvhsyh1xjmn4xaoreenqrpnxggy1gad2mdgigi4uzv99oss09kawzflhob1nx2v1s4vh4la97kafoscmjd6aj37epg0hqh3twx18ll83zno1ak6h0up5pb5vy54nq1w31va64xip9pludpe2u5gk4lri1oz6umqzja0gs8sfury1ba7qmtq8romn163u1rba9djypejh8sztcinuf5huxiq 00:25:28.646 21:47:49 -- dd/basic_rw.sh@59 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --seek=1 --json /dev/fd/62 00:25:28.646 21:47:49 -- dd/basic_rw.sh@59 -- # gen_conf 00:25:28.646 21:47:49 -- dd/common.sh@31 -- # xtrace_disable 00:25:28.646 21:47:49 -- common/autotest_common.sh@10 -- # set +x 00:25:28.646 { 00:25:28.646 "subsystems": [ 00:25:28.646 { 00:25:28.646 "subsystem": "bdev", 00:25:28.646 "config": [ 00:25:28.646 { 00:25:28.646 "params": { 00:25:28.646 "trtype": "pcie", 00:25:28.646 "traddr": "0000:00:06.0", 00:25:28.646 "name": "Nvme0" 00:25:28.646 }, 00:25:28.646 "method": "bdev_nvme_attach_controller" 00:25:28.646 }, 00:25:28.646 { 00:25:28.646 "method": "bdev_wait_for_examine" 00:25:28.646 } 00:25:28.646 ] 00:25:28.646 } 00:25:28.646 ] 00:25:28.646 } 00:25:28.905 [2024-12-06 21:47:49.178470] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:25:28.906 [2024-12-06 21:47:49.178628] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88464 ] 00:25:28.906 [2024-12-06 21:47:49.347134] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:29.165 [2024-12-06 21:47:49.497584] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:29.424  [2024-12-06T21:47:50.858Z] Copying: 4096/4096 [B] (average 4000 kBps) 00:25:30.361 00:25:30.361 21:47:50 -- dd/basic_rw.sh@65 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --skip=1 --count=1 --json /dev/fd/62 00:25:30.361 21:47:50 -- dd/basic_rw.sh@65 -- # gen_conf 00:25:30.361 21:47:50 -- dd/common.sh@31 -- # xtrace_disable 00:25:30.361 21:47:50 -- common/autotest_common.sh@10 -- # set +x 00:25:30.361 { 00:25:30.361 "subsystems": [ 00:25:30.361 { 00:25:30.361 "subsystem": "bdev", 00:25:30.361 "config": [ 00:25:30.361 { 00:25:30.361 "params": { 00:25:30.361 "trtype": "pcie", 00:25:30.361 "traddr": "0000:00:06.0", 00:25:30.361 "name": "Nvme0" 00:25:30.361 }, 00:25:30.361 "method": "bdev_nvme_attach_controller" 00:25:30.361 }, 00:25:30.361 { 00:25:30.361 "method": "bdev_wait_for_examine" 00:25:30.361 } 00:25:30.361 ] 00:25:30.361 } 00:25:30.361 ] 00:25:30.361 } 00:25:30.361 [2024-12-06 21:47:50.772881] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
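dd_rw_offset adds block offsets to the round-trip: the long literal above is a 4096-byte payload from gen_bytes, written at block offset 1 with --seek=1 and read back with --skip=1 --count=1, after which the shell compares the bytes directly (the backslash-escaped [[ ... == ... ]] xtrace that follows is that comparison). A condensed sketch; the staging and read-back redirects are assumptions, as they are not shown in this excerpt:

    data=$(gen_bytes 4096)                        # harness helper: 4096 random [a-z0-9] bytes
    printf '%s' "$data" > test/dd/dd.dump0        # assumed staging step (not in the log)
    spdk_dd --if=test/dd/dd.dump0 --ob=Nvme0n1 --seek=1 --json <(gen_conf)
    spdk_dd --ib=Nvme0n1 --of=test/dd/dd.dump1 --skip=1 --count=1 --json <(gen_conf)
    read -rn4096 data_check < test/dd/dd.dump1    # assumed source of the read
    [[ $data == "$data_check" ]]                  # payload must survive the offset round-trip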
00:25:30.361 [2024-12-06 21:47:50.773036] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88483 ] 00:25:30.621 [2024-12-06 21:47:50.942127] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:30.621 [2024-12-06 21:47:51.098413] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:31.189  [2024-12-06T21:47:52.255Z] Copying: 4096/4096 [B] (average 4000 kBps) 00:25:31.758 00:25:31.758 21:47:52 -- dd/basic_rw.sh@71 -- # read -rn4096 data_check 00:25:31.758 21:47:52 -- dd/basic_rw.sh@72 -- # [[ aiwe8qzpb78exqumcnny43x8qecxpdoqkx3qsgzok4b4xqkzwdhoakw58ut2ff8ig3gmbsywrdvvs29i89e8jowfjo0rxfd7d4emfck7kltuiyhf06iwostjbxg2hx42sylmj7c8xmitlto57uvzn3hzzw5mlhkfmacm17yuqfkr7fx1ivehwewnsizg81qdmph3bdruzmtc5ssz6hiwgxglgu2mpbo8tuulogtji8e17bbqlnavx2jdq4n1vnuthujnxjxzag22oiuivnwjzae2kvc6w0732dgpmutgbp9cwe2lchsea2s1aalwfbjp5bwd04w5y9p1g59k823d8gvv6r8tfht8zgd5j0gq7ftwt7we1g4iiq36zhudptpuz2vy35upk29oek1tc87z6cn3pdk5mbyf3iy1yf1b1mcc5b4duivkh8hzil9xpn3og0tcqticrmu2n4ow6bhth30c249xge0xlcw0b2u2exmgqgxhkigazpsx1d2kx2y8zxx41zi2rma0cx1juvu60uwce5vldlaprp7jncjiatdeytftq2kvar7akgqeso6zf30neu311wfy4il8posvmoy34hjgnpuvn6r38anlsfuc3raif80x8423ccw4nogwqf53v2fxl5mtny9azxt21o7ek90epd0liu1rp2wh4hu3ft7ogbxdnad0cyxymlxgmeq0zcyf52jkvq4dwum39j6saemlpbk9a8rwri6ahs9xgn5fqjgkbhszyrk64523068j2z9ubh3o84c6uvyevtxlnfz0hgh4nb47hy1t282ux249lmaanerqc1iimcu0h4sj071m3z4kr332wjwtbv2stdxsstf8xwxf2ixjbev45jl0yekk9blqj2nncwdvyzol2zk2p3m9qps3x928wrf0aucosn95xcq1byayvqv4jdiqr0m8zz3djdkcwsps6iaulecb2gj2q0p5quctxdpm1n8gqxv8tfh2crs781663n70f9g1b8t2o1db0xrsnxob2e93tm7ieg4g83v9qqjkooahaboxavs71hwqz3j2punzv5y0bunwv1tqi7t7xzq80u3fl7fkz2ulfax4tvc5lyd4rfzmgcmfq323mk7t54qnmlrkvx6ag6rt1gze74edam5uoqu88s69fn513e2m8r52sn9tr60zgng9x13c46eqj4poqlf1yi4yrzz9ozeorowrqtxaaofop4yrkirmu0bmufg9fcdj9uvcuyiuy35xeh3g6chmanugcpd6df8wz309lpe97z0qmkflwu5cm76mzhiii84ltjwyod75ultbyeqyrjm19yh9fmn0zashy3eg3c1doctyz3a0c8wwcnsk6cn3bl0az72yidn0nn4ek9jw39njhsrl22wn7xatuolqrb1l6b1u7eb835i7qmik2h2cma6bgqhylnotbwcgtg9rbxr0k1hxgqw5c2q0jnnx022vd10wjwczcpx09jzmibz987htzxp7v6wbo20iaj57dduemhdgtju2xwlfbi30owo0ge1e52qhozv01wm8h8lwwcaqz63iodac9lb9ci3a6oom7pw2j95kol5ucpvml4m34cp71pg750n913ylnxkh3vspce3142b93cus9sqfu8mk51a9hjwek0ikx57itvui5h75ojp5t6qy0sz40b2p6mkhbintaai922ttkwosm6r091i746h778ci9tvlahggtw4dqlw1yw62crqvqhionsd6ae543u9hmudmv09h5lp4nyhbd8qu1dohokwjj8zrnwxbdt1z06ng1k69u86gqh44ecl9plybqu1tftykhyd4ypfh5chj4r0cly15n4p65eftdcblv4z2gtiqu159zsofe6daluqmhzy4wslqfg9t87ptz41rqdkxv6wgxu4kovunnp32cknbwqha2rgqik4xbgwtofsa5b3sscrv10qb8ahi4rqc746qes6l8k6a2decbs60w8yprkz1s9btoezduyd1jkpjus0fahadt3rgy2cojdsthqlhu8l36j9i1b2v6b4cx6jrgivpg21vxvzeu67wni4fo04e2w7oeg72614sn35qep71ze7kr3av6hakb836sahr7oscgptvywt4lsavqwuznabp6nd12777xl53kwvxmhn8nbw0dolf9kg77q7eufth5ua7axhfrhwyz6cqkzac4e0ea7vzwbbghav64sv8ksq3ws8xh0xrgwi7vzeoap1gxptzh918rcu4mu6qihb4nzmciin21h3wy1pmyuol0mhfpgfuuupi3mt28z2uyxokzbgn81mlbcs0c53ic92c6ejtuqvsgg9bbjhvn5wnq38l318rokgjnpvpe6mj51qxna3pj8g3v8scq43czv4fg7kbdf34w7j88e9ikyye09r65io8r7gv7gsbky62avawb2iirk5y2wtdgp7j199hk940po4c4ff23w1dsbx8x80k16ofox3l6uwzwpyjc7d2s4okgodo8py9373ph4fgp52rbppobz40gf46631thgw0xah2lbhuu71oqeudk6q3y4x6ai5sdyenckmqw56niooi0a50yb73yjbw6wlwdn5vdjrncosuxdwtvy78kbonj3u04l7wbgchreszebd0icstex6eej4hpqzvv5xfc4q8uq60h4kodsnde72ci82ovvt0ehzmuanbnkfc8p5q1bo56zypseui6jnk1lkwjopqk1u4ljsasbvb5igkcugxyhenuznlp7jznp30adroztxbb766vmze1p4us345m3rqkx8ox3eo7vq1sgva6k
6dj8ezryfu3b8rokhtf9rxuup6ahh7zh6ldw8jjxezexuecq01kuq0gb6lwybympdfcmxq8vj7pnatv0asc0fqttacwer9iqez40mxdmruqfj0n8yrqs8vzy8xk33oml8s3l5fqqdtpztg9yzeyxa3otzpt4edca85z4wmuetgnf6ar3i65h1a413bxzie57ybdu1z5m8i0g5g4pgcep8l79l36779nuynkdfg6i7s0m53a9o7b3zjanh2kjfec6vak7zpklli47i0luguepbfmbn1m8wfoabuo01vd44kal07vnugam66plxhxyx62vbw51h97oipy6wh0mo8camkv6h8e4y8aacgjc3tb5inxanb3fo1izrdoe409huum6bpa9h3iefsl5nb1bh9ltbed19jnvix0hvaxnngherh1kkdd8j2527qwclospyu29bgb5hzw1vhrnmwc45bv2qy5q9r4gl158wr26nful39je8os9d475i9khzva1ili645bg32u0zsbt0262m0pnwx99we81z9cvvuus1zmcq9yd1fc8p6vq7727mj8x1p9teqi0ffm22dmah3f7igxw9d2gwih0svy1flok6cha15vjnwfqvdsxxyhpeuad97uabcgq5p5lji0k8giaia2pkplop6k4ixityritw7z2asmcsju8ma3ut6p2cjfe8laf0gjxdzapgdcs5bg2hhjv53t9k0yxc2plvnxpkz1x039xt0mh34lhuiymzy2mt326ufaub6dt4xjeck1aoyiocwimcfgx2n8yq30bzr1p4m6sqrago4fvdw3v1pkxt0wkt9waeaesfams8knshzmsutlt13rhk5woy9lyl9t0u2z9eltjx8cetghyujuv8i4vwj8ih19kv8bamksizu58temz4qsc7fc0pugjksqafd1iy7xvrhfpasd158aost8q9fcctcj6izfug26kghwshfibjuk2wksycsf5tp69i6xkx66sprwwe29ln1c8otqc6e0o0lfnrgfm8ol7bynrvhsyh1xjmn4xaoreenqrpnxggy1gad2mdgigi4uzv99oss09kawzflhob1nx2v1s4vh4la97kafoscmjd6aj37epg0hqh3twx18ll83zno1ak6h0up5pb5vy54nq1w31va64xip9pludpe2u5gk4lri1oz6umqzja0gs8sfury1ba7qmtq8romn163u1rba9djypejh8sztcinuf5huxiq == \a\i\w\e\8\q\z\p\b\7\8\e\x\q\u\m\c\n\n\y\4\3\x\8\q\e\c\x\p\d\o\q\k\x\3\q\s\g\z\o\k\4\b\4\x\q\k\z\w\d\h\o\a\k\w\5\8\u\t\2\f\f\8\i\g\3\g\m\b\s\y\w\r\d\v\v\s\2\9\i\8\9\e\8\j\o\w\f\j\o\0\r\x\f\d\7\d\4\e\m\f\c\k\7\k\l\t\u\i\y\h\f\0\6\i\w\o\s\t\j\b\x\g\2\h\x\4\2\s\y\l\m\j\7\c\8\x\m\i\t\l\t\o\5\7\u\v\z\n\3\h\z\z\w\5\m\l\h\k\f\m\a\c\m\1\7\y\u\q\f\k\r\7\f\x\1\i\v\e\h\w\e\w\n\s\i\z\g\8\1\q\d\m\p\h\3\b\d\r\u\z\m\t\c\5\s\s\z\6\h\i\w\g\x\g\l\g\u\2\m\p\b\o\8\t\u\u\l\o\g\t\j\i\8\e\1\7\b\b\q\l\n\a\v\x\2\j\d\q\4\n\1\v\n\u\t\h\u\j\n\x\j\x\z\a\g\2\2\o\i\u\i\v\n\w\j\z\a\e\2\k\v\c\6\w\0\7\3\2\d\g\p\m\u\t\g\b\p\9\c\w\e\2\l\c\h\s\e\a\2\s\1\a\a\l\w\f\b\j\p\5\b\w\d\0\4\w\5\y\9\p\1\g\5\9\k\8\2\3\d\8\g\v\v\6\r\8\t\f\h\t\8\z\g\d\5\j\0\g\q\7\f\t\w\t\7\w\e\1\g\4\i\i\q\3\6\z\h\u\d\p\t\p\u\z\2\v\y\3\5\u\p\k\2\9\o\e\k\1\t\c\8\7\z\6\c\n\3\p\d\k\5\m\b\y\f\3\i\y\1\y\f\1\b\1\m\c\c\5\b\4\d\u\i\v\k\h\8\h\z\i\l\9\x\p\n\3\o\g\0\t\c\q\t\i\c\r\m\u\2\n\4\o\w\6\b\h\t\h\3\0\c\2\4\9\x\g\e\0\x\l\c\w\0\b\2\u\2\e\x\m\g\q\g\x\h\k\i\g\a\z\p\s\x\1\d\2\k\x\2\y\8\z\x\x\4\1\z\i\2\r\m\a\0\c\x\1\j\u\v\u\6\0\u\w\c\e\5\v\l\d\l\a\p\r\p\7\j\n\c\j\i\a\t\d\e\y\t\f\t\q\2\k\v\a\r\7\a\k\g\q\e\s\o\6\z\f\3\0\n\e\u\3\1\1\w\f\y\4\i\l\8\p\o\s\v\m\o\y\3\4\h\j\g\n\p\u\v\n\6\r\3\8\a\n\l\s\f\u\c\3\r\a\i\f\8\0\x\8\4\2\3\c\c\w\4\n\o\g\w\q\f\5\3\v\2\f\x\l\5\m\t\n\y\9\a\z\x\t\2\1\o\7\e\k\9\0\e\p\d\0\l\i\u\1\r\p\2\w\h\4\h\u\3\f\t\7\o\g\b\x\d\n\a\d\0\c\y\x\y\m\l\x\g\m\e\q\0\z\c\y\f\5\2\j\k\v\q\4\d\w\u\m\3\9\j\6\s\a\e\m\l\p\b\k\9\a\8\r\w\r\i\6\a\h\s\9\x\g\n\5\f\q\j\g\k\b\h\s\z\y\r\k\6\4\5\2\3\0\6\8\j\2\z\9\u\b\h\3\o\8\4\c\6\u\v\y\e\v\t\x\l\n\f\z\0\h\g\h\4\n\b\4\7\h\y\1\t\2\8\2\u\x\2\4\9\l\m\a\a\n\e\r\q\c\1\i\i\m\c\u\0\h\4\s\j\0\7\1\m\3\z\4\k\r\3\3\2\w\j\w\t\b\v\2\s\t\d\x\s\s\t\f\8\x\w\x\f\2\i\x\j\b\e\v\4\5\j\l\0\y\e\k\k\9\b\l\q\j\2\n\n\c\w\d\v\y\z\o\l\2\z\k\2\p\3\m\9\q\p\s\3\x\9\2\8\w\r\f\0\a\u\c\o\s\n\9\5\x\c\q\1\b\y\a\y\v\q\v\4\j\d\i\q\r\0\m\8\z\z\3\d\j\d\k\c\w\s\p\s\6\i\a\u\l\e\c\b\2\g\j\2\q\0\p\5\q\u\c\t\x\d\p\m\1\n\8\g\q\x\v\8\t\f\h\2\c\r\s\7\8\1\6\6\3\n\7\0\f\9\g\1\b\8\t\2\o\1\d\b\0\x\r\s\n\x\o\b\2\e\9\3\t\m\7\i\e\g\4\g\8\3\v\9\q\q\j\k\o\o\a\h\a\b\o\x\a\v\s\7\1\h\w\q\z\3\j\2\p\u\n\z\v\5\y\0\b\u\n\w\v\1\t\q\i\7\t\7\x\z\q\8\0\u\3\f\l\7\f\k\z\2\u\l\f\a\x\4\t\v\c\5\l\y\d\4\r\f\z\m\g\c\m\f\q\3\2\3\m\k\7\t\5\4\q\n\m\l\r\k\v\x\6\a\g\6\
r\t\1\g\z\e\7\4\e\d\a\m\5\u\o\q\u\8\8\s\6\9\f\n\5\1\3\e\2\m\8\r\5\2\s\n\9\t\r\6\0\z\g\n\g\9\x\1\3\c\4\6\e\q\j\4\p\o\q\l\f\1\y\i\4\y\r\z\z\9\o\z\e\o\r\o\w\r\q\t\x\a\a\o\f\o\p\4\y\r\k\i\r\m\u\0\b\m\u\f\g\9\f\c\d\j\9\u\v\c\u\y\i\u\y\3\5\x\e\h\3\g\6\c\h\m\a\n\u\g\c\p\d\6\d\f\8\w\z\3\0\9\l\p\e\9\7\z\0\q\m\k\f\l\w\u\5\c\m\7\6\m\z\h\i\i\i\8\4\l\t\j\w\y\o\d\7\5\u\l\t\b\y\e\q\y\r\j\m\1\9\y\h\9\f\m\n\0\z\a\s\h\y\3\e\g\3\c\1\d\o\c\t\y\z\3\a\0\c\8\w\w\c\n\s\k\6\c\n\3\b\l\0\a\z\7\2\y\i\d\n\0\n\n\4\e\k\9\j\w\3\9\n\j\h\s\r\l\2\2\w\n\7\x\a\t\u\o\l\q\r\b\1\l\6\b\1\u\7\e\b\8\3\5\i\7\q\m\i\k\2\h\2\c\m\a\6\b\g\q\h\y\l\n\o\t\b\w\c\g\t\g\9\r\b\x\r\0\k\1\h\x\g\q\w\5\c\2\q\0\j\n\n\x\0\2\2\v\d\1\0\w\j\w\c\z\c\p\x\0\9\j\z\m\i\b\z\9\8\7\h\t\z\x\p\7\v\6\w\b\o\2\0\i\a\j\5\7\d\d\u\e\m\h\d\g\t\j\u\2\x\w\l\f\b\i\3\0\o\w\o\0\g\e\1\e\5\2\q\h\o\z\v\0\1\w\m\8\h\8\l\w\w\c\a\q\z\6\3\i\o\d\a\c\9\l\b\9\c\i\3\a\6\o\o\m\7\p\w\2\j\9\5\k\o\l\5\u\c\p\v\m\l\4\m\3\4\c\p\7\1\p\g\7\5\0\n\9\1\3\y\l\n\x\k\h\3\v\s\p\c\e\3\1\4\2\b\9\3\c\u\s\9\s\q\f\u\8\m\k\5\1\a\9\h\j\w\e\k\0\i\k\x\5\7\i\t\v\u\i\5\h\7\5\o\j\p\5\t\6\q\y\0\s\z\4\0\b\2\p\6\m\k\h\b\i\n\t\a\a\i\9\2\2\t\t\k\w\o\s\m\6\r\0\9\1\i\7\4\6\h\7\7\8\c\i\9\t\v\l\a\h\g\g\t\w\4\d\q\l\w\1\y\w\6\2\c\r\q\v\q\h\i\o\n\s\d\6\a\e\5\4\3\u\9\h\m\u\d\m\v\0\9\h\5\l\p\4\n\y\h\b\d\8\q\u\1\d\o\h\o\k\w\j\j\8\z\r\n\w\x\b\d\t\1\z\0\6\n\g\1\k\6\9\u\8\6\g\q\h\4\4\e\c\l\9\p\l\y\b\q\u\1\t\f\t\y\k\h\y\d\4\y\p\f\h\5\c\h\j\4\r\0\c\l\y\1\5\n\4\p\6\5\e\f\t\d\c\b\l\v\4\z\2\g\t\i\q\u\1\5\9\z\s\o\f\e\6\d\a\l\u\q\m\h\z\y\4\w\s\l\q\f\g\9\t\8\7\p\t\z\4\1\r\q\d\k\x\v\6\w\g\x\u\4\k\o\v\u\n\n\p\3\2\c\k\n\b\w\q\h\a\2\r\g\q\i\k\4\x\b\g\w\t\o\f\s\a\5\b\3\s\s\c\r\v\1\0\q\b\8\a\h\i\4\r\q\c\7\4\6\q\e\s\6\l\8\k\6\a\2\d\e\c\b\s\6\0\w\8\y\p\r\k\z\1\s\9\b\t\o\e\z\d\u\y\d\1\j\k\p\j\u\s\0\f\a\h\a\d\t\3\r\g\y\2\c\o\j\d\s\t\h\q\l\h\u\8\l\3\6\j\9\i\1\b\2\v\6\b\4\c\x\6\j\r\g\i\v\p\g\2\1\v\x\v\z\e\u\6\7\w\n\i\4\f\o\0\4\e\2\w\7\o\e\g\7\2\6\1\4\s\n\3\5\q\e\p\7\1\z\e\7\k\r\3\a\v\6\h\a\k\b\8\3\6\s\a\h\r\7\o\s\c\g\p\t\v\y\w\t\4\l\s\a\v\q\w\u\z\n\a\b\p\6\n\d\1\2\7\7\7\x\l\5\3\k\w\v\x\m\h\n\8\n\b\w\0\d\o\l\f\9\k\g\7\7\q\7\e\u\f\t\h\5\u\a\7\a\x\h\f\r\h\w\y\z\6\c\q\k\z\a\c\4\e\0\e\a\7\v\z\w\b\b\g\h\a\v\6\4\s\v\8\k\s\q\3\w\s\8\x\h\0\x\r\g\w\i\7\v\z\e\o\a\p\1\g\x\p\t\z\h\9\1\8\r\c\u\4\m\u\6\q\i\h\b\4\n\z\m\c\i\i\n\2\1\h\3\w\y\1\p\m\y\u\o\l\0\m\h\f\p\g\f\u\u\u\p\i\3\m\t\2\8\z\2\u\y\x\o\k\z\b\g\n\8\1\m\l\b\c\s\0\c\5\3\i\c\9\2\c\6\e\j\t\u\q\v\s\g\g\9\b\b\j\h\v\n\5\w\n\q\3\8\l\3\1\8\r\o\k\g\j\n\p\v\p\e\6\m\j\5\1\q\x\n\a\3\p\j\8\g\3\v\8\s\c\q\4\3\c\z\v\4\f\g\7\k\b\d\f\3\4\w\7\j\8\8\e\9\i\k\y\y\e\0\9\r\6\5\i\o\8\r\7\g\v\7\g\s\b\k\y\6\2\a\v\a\w\b\2\i\i\r\k\5\y\2\w\t\d\g\p\7\j\1\9\9\h\k\9\4\0\p\o\4\c\4\f\f\2\3\w\1\d\s\b\x\8\x\8\0\k\1\6\o\f\o\x\3\l\6\u\w\z\w\p\y\j\c\7\d\2\s\4\o\k\g\o\d\o\8\p\y\9\3\7\3\p\h\4\f\g\p\5\2\r\b\p\p\o\b\z\4\0\g\f\4\6\6\3\1\t\h\g\w\0\x\a\h\2\l\b\h\u\u\7\1\o\q\e\u\d\k\6\q\3\y\4\x\6\a\i\5\s\d\y\e\n\c\k\m\q\w\5\6\n\i\o\o\i\0\a\5\0\y\b\7\3\y\j\b\w\6\w\l\w\d\n\5\v\d\j\r\n\c\o\s\u\x\d\w\t\v\y\7\8\k\b\o\n\j\3\u\0\4\l\7\w\b\g\c\h\r\e\s\z\e\b\d\0\i\c\s\t\e\x\6\e\e\j\4\h\p\q\z\v\v\5\x\f\c\4\q\8\u\q\6\0\h\4\k\o\d\s\n\d\e\7\2\c\i\8\2\o\v\v\t\0\e\h\z\m\u\a\n\b\n\k\f\c\8\p\5\q\1\b\o\5\6\z\y\p\s\e\u\i\6\j\n\k\1\l\k\w\j\o\p\q\k\1\u\4\l\j\s\a\s\b\v\b\5\i\g\k\c\u\g\x\y\h\e\n\u\z\n\l\p\7\j\z\n\p\3\0\a\d\r\o\z\t\x\b\b\7\6\6\v\m\z\e\1\p\4\u\s\3\4\5\m\3\r\q\k\x\8\o\x\3\e\o\7\v\q\1\s\g\v\a\6\k\6\d\j\8\e\z\r\y\f\u\3\b\8\r\o\k\h\t\f\9\r\x\u\u\p\6\a\h\h\7\z\h\6\l\d\w\8\j\j\x\e\z\e\x\u\e\c\q\0\1\k\u\q\0\g\b\6\l\w\y\b\y\m\p\d\f\c\m\x\q\8\v\j\7\p\n\a\t\v\0\a\s\c\0\f\q
\t\t\a\c\w\e\r\9\i\q\e\z\4\0\m\x\d\m\r\u\q\f\j\0\n\8\y\r\q\s\8\v\z\y\8\x\k\3\3\o\m\l\8\s\3\l\5\f\q\q\d\t\p\z\t\g\9\y\z\e\y\x\a\3\o\t\z\p\t\4\e\d\c\a\8\5\z\4\w\m\u\e\t\g\n\f\6\a\r\3\i\6\5\h\1\a\4\1\3\b\x\z\i\e\5\7\y\b\d\u\1\z\5\m\8\i\0\g\5\g\4\p\g\c\e\p\8\l\7\9\l\3\6\7\7\9\n\u\y\n\k\d\f\g\6\i\7\s\0\m\5\3\a\9\o\7\b\3\z\j\a\n\h\2\k\j\f\e\c\6\v\a\k\7\z\p\k\l\l\i\4\7\i\0\l\u\g\u\e\p\b\f\m\b\n\1\m\8\w\f\o\a\b\u\o\0\1\v\d\4\4\k\a\l\0\7\v\n\u\g\a\m\6\6\p\l\x\h\x\y\x\6\2\v\b\w\5\1\h\9\7\o\i\p\y\6\w\h\0\m\o\8\c\a\m\k\v\6\h\8\e\4\y\8\a\a\c\g\j\c\3\t\b\5\i\n\x\a\n\b\3\f\o\1\i\z\r\d\o\e\4\0\9\h\u\u\m\6\b\p\a\9\h\3\i\e\f\s\l\5\n\b\1\b\h\9\l\t\b\e\d\1\9\j\n\v\i\x\0\h\v\a\x\n\n\g\h\e\r\h\1\k\k\d\d\8\j\2\5\2\7\q\w\c\l\o\s\p\y\u\2\9\b\g\b\5\h\z\w\1\v\h\r\n\m\w\c\4\5\b\v\2\q\y\5\q\9\r\4\g\l\1\5\8\w\r\2\6\n\f\u\l\3\9\j\e\8\o\s\9\d\4\7\5\i\9\k\h\z\v\a\1\i\l\i\6\4\5\b\g\3\2\u\0\z\s\b\t\0\2\6\2\m\0\p\n\w\x\9\9\w\e\8\1\z\9\c\v\v\u\u\s\1\z\m\c\q\9\y\d\1\f\c\8\p\6\v\q\7\7\2\7\m\j\8\x\1\p\9\t\e\q\i\0\f\f\m\2\2\d\m\a\h\3\f\7\i\g\x\w\9\d\2\g\w\i\h\0\s\v\y\1\f\l\o\k\6\c\h\a\1\5\v\j\n\w\f\q\v\d\s\x\x\y\h\p\e\u\a\d\9\7\u\a\b\c\g\q\5\p\5\l\j\i\0\k\8\g\i\a\i\a\2\p\k\p\l\o\p\6\k\4\i\x\i\t\y\r\i\t\w\7\z\2\a\s\m\c\s\j\u\8\m\a\3\u\t\6\p\2\c\j\f\e\8\l\a\f\0\g\j\x\d\z\a\p\g\d\c\s\5\b\g\2\h\h\j\v\5\3\t\9\k\0\y\x\c\2\p\l\v\n\x\p\k\z\1\x\0\3\9\x\t\0\m\h\3\4\l\h\u\i\y\m\z\y\2\m\t\3\2\6\u\f\a\u\b\6\d\t\4\x\j\e\c\k\1\a\o\y\i\o\c\w\i\m\c\f\g\x\2\n\8\y\q\3\0\b\z\r\1\p\4\m\6\s\q\r\a\g\o\4\f\v\d\w\3\v\1\p\k\x\t\0\w\k\t\9\w\a\e\a\e\s\f\a\m\s\8\k\n\s\h\z\m\s\u\t\l\t\1\3\r\h\k\5\w\o\y\9\l\y\l\9\t\0\u\2\z\9\e\l\t\j\x\8\c\e\t\g\h\y\u\j\u\v\8\i\4\v\w\j\8\i\h\1\9\k\v\8\b\a\m\k\s\i\z\u\5\8\t\e\m\z\4\q\s\c\7\f\c\0\p\u\g\j\k\s\q\a\f\d\1\i\y\7\x\v\r\h\f\p\a\s\d\1\5\8\a\o\s\t\8\q\9\f\c\c\t\c\j\6\i\z\f\u\g\2\6\k\g\h\w\s\h\f\i\b\j\u\k\2\w\k\s\y\c\s\f\5\t\p\6\9\i\6\x\k\x\6\6\s\p\r\w\w\e\2\9\l\n\1\c\8\o\t\q\c\6\e\0\o\0\l\f\n\r\g\f\m\8\o\l\7\b\y\n\r\v\h\s\y\h\1\x\j\m\n\4\x\a\o\r\e\e\n\q\r\p\n\x\g\g\y\1\g\a\d\2\m\d\g\i\g\i\4\u\z\v\9\9\o\s\s\0\9\k\a\w\z\f\l\h\o\b\1\n\x\2\v\1\s\4\v\h\4\l\a\9\7\k\a\f\o\s\c\m\j\d\6\a\j\3\7\e\p\g\0\h\q\h\3\t\w\x\1\8\l\l\8\3\z\n\o\1\a\k\6\h\0\u\p\5\p\b\5\v\y\5\4\n\q\1\w\3\1\v\a\6\4\x\i\p\9\p\l\u\d\p\e\2\u\5\g\k\4\l\r\i\1\o\z\6\u\m\q\z\j\a\0\g\s\8\s\f\u\r\y\1\b\a\7\q\m\t\q\8\r\o\m\n\1\6\3\u\1\r\b\a\9\d\j\y\p\e\j\h\8\s\z\t\c\i\n\u\f\5\h\u\x\i\q ]] 00:25:31.758 00:25:31.758 real 0m3.163s 00:25:31.758 user 0m2.555s 00:25:31.758 sys 0m0.423s 00:25:31.758 21:47:52 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:25:31.758 21:47:52 -- common/autotest_common.sh@10 -- # set +x 00:25:31.758 ************************************ 00:25:31.758 END TEST dd_rw_offset 00:25:31.758 ************************************ 00:25:32.017 21:47:52 -- dd/basic_rw.sh@1 -- # cleanup 00:25:32.017 21:47:52 -- dd/basic_rw.sh@76 -- # clear_nvme Nvme0n1 00:25:32.017 21:47:52 -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:25:32.017 21:47:52 -- dd/common.sh@11 -- # local nvme_ref= 00:25:32.017 21:47:52 -- dd/common.sh@12 -- # local size=0xffff 00:25:32.017 21:47:52 -- dd/common.sh@14 -- # local bs=1048576 00:25:32.017 21:47:52 -- dd/common.sh@15 -- # local count=1 00:25:32.017 21:47:52 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:25:32.017 21:47:52 -- dd/common.sh@18 -- # gen_conf 00:25:32.017 21:47:52 -- dd/common.sh@31 -- # xtrace_disable 00:25:32.017 21:47:52 -- common/autotest_common.sh@10 -- # set +x 00:25:32.017 { 00:25:32.017 "subsystems": [ 00:25:32.017 
{ 00:25:32.017 "subsystem": "bdev", 00:25:32.017 "config": [ 00:25:32.017 { 00:25:32.017 "params": { 00:25:32.017 "trtype": "pcie", 00:25:32.017 "traddr": "0000:00:06.0", 00:25:32.017 "name": "Nvme0" 00:25:32.017 }, 00:25:32.017 "method": "bdev_nvme_attach_controller" 00:25:32.017 }, 00:25:32.017 { 00:25:32.017 "method": "bdev_wait_for_examine" 00:25:32.017 } 00:25:32.017 ] 00:25:32.017 } 00:25:32.017 ] 00:25:32.017 } 00:25:32.017 [2024-12-06 21:47:52.330700] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:25:32.017 [2024-12-06 21:47:52.330852] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88523 ] 00:25:32.017 [2024-12-06 21:47:52.498240] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:32.274 [2024-12-06 21:47:52.655607] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:32.532  [2024-12-06T21:47:54.004Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:25:33.507 00:25:33.507 21:47:53 -- dd/basic_rw.sh@77 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:25:33.507 00:25:33.507 real 0m36.919s 00:25:33.507 user 0m29.936s 00:25:33.507 sys 0m4.846s 00:25:33.507 21:47:53 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:25:33.507 ************************************ 00:25:33.507 END TEST spdk_dd_basic_rw 00:25:33.507 ************************************ 00:25:33.507 21:47:53 -- common/autotest_common.sh@10 -- # set +x 00:25:33.507 21:47:53 -- dd/dd.sh@21 -- # run_test spdk_dd_posix /home/vagrant/spdk_repo/spdk/test/dd/posix.sh 00:25:33.507 21:47:53 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:25:33.507 21:47:53 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:25:33.507 21:47:53 -- common/autotest_common.sh@10 -- # set +x 00:25:33.507 ************************************ 00:25:33.507 START TEST spdk_dd_posix 00:25:33.507 ************************************ 00:25:33.507 21:47:53 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/dd/posix.sh 00:25:33.507 * Looking for test storage... 
00:25:33.507 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:25:33.507 21:47:53 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:25:33.507 21:47:54 -- common/autotest_common.sh@1690 -- # lcov --version 00:25:33.507 21:47:54 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:25:33.766 21:47:54 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:25:33.766 21:47:54 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:25:33.766 21:47:54 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:25:33.766 21:47:54 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:25:33.766 21:47:54 -- scripts/common.sh@335 -- # IFS=.-: 00:25:33.766 21:47:54 -- scripts/common.sh@335 -- # read -ra ver1 00:25:33.766 21:47:54 -- scripts/common.sh@336 -- # IFS=.-: 00:25:33.766 21:47:54 -- scripts/common.sh@336 -- # read -ra ver2 00:25:33.766 21:47:54 -- scripts/common.sh@337 -- # local 'op=<' 00:25:33.766 21:47:54 -- scripts/common.sh@339 -- # ver1_l=2 00:25:33.766 21:47:54 -- scripts/common.sh@340 -- # ver2_l=1 00:25:33.766 21:47:54 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:25:33.766 21:47:54 -- scripts/common.sh@343 -- # case "$op" in 00:25:33.766 21:47:54 -- scripts/common.sh@344 -- # : 1 00:25:33.766 21:47:54 -- scripts/common.sh@363 -- # (( v = 0 )) 00:25:33.766 21:47:54 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:25:33.766 21:47:54 -- scripts/common.sh@364 -- # decimal 1 00:25:33.766 21:47:54 -- scripts/common.sh@352 -- # local d=1 00:25:33.766 21:47:54 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:33.766 21:47:54 -- scripts/common.sh@354 -- # echo 1 00:25:33.766 21:47:54 -- scripts/common.sh@364 -- # ver1[v]=1 00:25:33.766 21:47:54 -- scripts/common.sh@365 -- # decimal 2 00:25:33.766 21:47:54 -- scripts/common.sh@352 -- # local d=2 00:25:33.766 21:47:54 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:33.766 21:47:54 -- scripts/common.sh@354 -- # echo 2 00:25:33.766 21:47:54 -- scripts/common.sh@365 -- # ver2[v]=2 00:25:33.766 21:47:54 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:25:33.766 21:47:54 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:25:33.766 21:47:54 -- scripts/common.sh@367 -- # return 0 00:25:33.766 21:47:54 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:33.766 21:47:54 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:25:33.766 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:33.766 --rc genhtml_branch_coverage=1 00:25:33.766 --rc genhtml_function_coverage=1 00:25:33.766 --rc genhtml_legend=1 00:25:33.766 --rc geninfo_all_blocks=1 00:25:33.766 --rc geninfo_unexecuted_blocks=1 00:25:33.766 00:25:33.766 ' 00:25:33.766 21:47:54 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:25:33.766 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:33.766 --rc genhtml_branch_coverage=1 00:25:33.766 --rc genhtml_function_coverage=1 00:25:33.766 --rc genhtml_legend=1 00:25:33.766 --rc geninfo_all_blocks=1 00:25:33.766 --rc geninfo_unexecuted_blocks=1 00:25:33.766 00:25:33.766 ' 00:25:33.766 21:47:54 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:25:33.766 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:33.766 --rc genhtml_branch_coverage=1 00:25:33.766 --rc genhtml_function_coverage=1 00:25:33.766 --rc genhtml_legend=1 00:25:33.766 --rc geninfo_all_blocks=1 00:25:33.766 --rc geninfo_unexecuted_blocks=1 00:25:33.766 00:25:33.766 ' 00:25:33.766 21:47:54 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:25:33.766 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:33.766 --rc genhtml_branch_coverage=1 00:25:33.766 --rc genhtml_function_coverage=1 00:25:33.766 --rc genhtml_legend=1 00:25:33.766 --rc geninfo_all_blocks=1 00:25:33.766 --rc geninfo_unexecuted_blocks=1 00:25:33.766 00:25:33.766 ' 00:25:33.766 21:47:54 -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:25:33.766 21:47:54 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:33.766 21:47:54 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:33.766 21:47:54 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:33.766 21:47:54 -- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:25:33.766 21:47:54 -- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:25:33.766 21:47:54 -- paths/export.sh@4 -- # PATH=/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:25:33.766 21:47:54 -- paths/export.sh@5 -- # 
PATH=/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:25:33.766 21:47:54 -- paths/export.sh@6 -- # export PATH 00:25:33.766 21:47:54 -- paths/export.sh@7 -- # echo /opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:25:33.766 21:47:54 -- dd/posix.sh@121 -- # msg[0]=', using AIO' 00:25:33.766 21:47:54 -- dd/posix.sh@122 -- # msg[1]=', liburing in use' 00:25:33.766 21:47:54 -- dd/posix.sh@123 -- # msg[2]=', disabling liburing, forcing AIO' 00:25:33.766 21:47:54 -- dd/posix.sh@125 -- # trap cleanup EXIT 00:25:33.766 21:47:54 -- dd/posix.sh@127 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:25:33.766 21:47:54 -- dd/posix.sh@128 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:25:33.766 21:47:54 -- dd/posix.sh@130 -- # tests 00:25:33.766 21:47:54 -- dd/posix.sh@99 -- # printf '* First test run%s\n' ', liburing in use' 00:25:33.766 * First test run, liburing in use 00:25:33.766 21:47:54 -- dd/posix.sh@102 -- # run_test dd_flag_append append 00:25:33.766 21:47:54 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:25:33.766 21:47:54 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:25:33.766 21:47:54 -- common/autotest_common.sh@10 -- # set +x 00:25:33.766 ************************************ 00:25:33.766 START TEST dd_flag_append 00:25:33.766 ************************************ 00:25:33.766 21:47:54 -- common/autotest_common.sh@1114 -- # append 00:25:33.766 21:47:54 -- dd/posix.sh@16 -- # local dump0 00:25:33.766 21:47:54 -- dd/posix.sh@17 -- # local dump1 00:25:33.766 21:47:54 -- dd/posix.sh@19 -- # gen_bytes 32 00:25:33.766 21:47:54 -- dd/common.sh@98 -- # xtrace_disable 00:25:33.766 21:47:54 -- common/autotest_common.sh@10 -- # set +x 00:25:33.766 21:47:54 -- dd/posix.sh@19 -- # dump0=5z5alzuv91oq7i4x80e6ee17fywvydwc 00:25:33.766 21:47:54 -- dd/posix.sh@20 -- # gen_bytes 32 00:25:33.766 21:47:54 -- dd/common.sh@98 -- # xtrace_disable 00:25:33.766 21:47:54 -- common/autotest_common.sh@10 -- # set +x 00:25:33.766 21:47:54 -- dd/posix.sh@20 -- # dump1=z36w7ik8q4y4762exyrge6vxyy99egd1 00:25:33.766 21:47:54 -- dd/posix.sh@22 -- # printf %s 5z5alzuv91oq7i4x80e6ee17fywvydwc 00:25:33.766 21:47:54 -- dd/posix.sh@23 -- # printf %s 
z36w7ik8q4y4762exyrge6vxyy99egd1 00:25:33.766 21:47:54 -- dd/posix.sh@25 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=append 00:25:33.766 [2024-12-06 21:47:54.158020] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:25:33.766 [2024-12-06 21:47:54.158169] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88605 ] 00:25:34.024 [2024-12-06 21:47:54.307497] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:34.024 [2024-12-06 21:47:54.457252] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:34.282  [2024-12-06T21:47:55.713Z] Copying: 32/32 [B] (average 31 kBps) 00:25:35.216 00:25:35.216 21:47:55 -- dd/posix.sh@27 -- # [[ z36w7ik8q4y4762exyrge6vxyy99egd15z5alzuv91oq7i4x80e6ee17fywvydwc == \z\3\6\w\7\i\k\8\q\4\y\4\7\6\2\e\x\y\r\g\e\6\v\x\y\y\9\9\e\g\d\1\5\z\5\a\l\z\u\v\9\1\o\q\7\i\4\x\8\0\e\6\e\e\1\7\f\y\w\v\y\d\w\c ]] 00:25:35.216 00:25:35.216 real 0m1.488s 00:25:35.216 user 0m1.183s 00:25:35.216 sys 0m0.189s 00:25:35.216 21:47:55 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:25:35.216 21:47:55 -- common/autotest_common.sh@10 -- # set +x 00:25:35.216 ************************************ 00:25:35.216 END TEST dd_flag_append 00:25:35.216 ************************************ 00:25:35.216 21:47:55 -- dd/posix.sh@103 -- # run_test dd_flag_directory directory 00:25:35.216 21:47:55 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:25:35.216 21:47:55 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:25:35.216 21:47:55 -- common/autotest_common.sh@10 -- # set +x 00:25:35.216 ************************************ 00:25:35.216 START TEST dd_flag_directory 00:25:35.216 ************************************ 00:25:35.216 21:47:55 -- common/autotest_common.sh@1114 -- # directory 00:25:35.216 21:47:55 -- dd/posix.sh@31 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:25:35.216 21:47:55 -- common/autotest_common.sh@650 -- # local es=0 00:25:35.216 21:47:55 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:25:35.216 21:47:55 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:25:35.216 21:47:55 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:35.216 21:47:55 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:25:35.216 21:47:55 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:35.216 21:47:55 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:25:35.216 21:47:55 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:35.216 21:47:55 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:25:35.216 21:47:55 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:25:35.216 21:47:55 -- common/autotest_common.sh@653 -- # 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:25:35.216 [2024-12-06 21:47:55.708186] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:25:35.216 [2024-12-06 21:47:55.708356] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88638 ] 00:25:35.474 [2024-12-06 21:47:55.878240] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:35.732 [2024-12-06 21:47:56.030263] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:35.991 [2024-12-06 21:47:56.258099] spdk_dd.c: 893:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:25:35.991 [2024-12-06 21:47:56.258192] spdk_dd.c:1067:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:25:35.991 [2024-12-06 21:47:56.258220] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:25:36.561 [2024-12-06 21:47:56.809960] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:25:36.821 21:47:57 -- common/autotest_common.sh@653 -- # es=236 00:25:36.821 21:47:57 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:25:36.821 21:47:57 -- common/autotest_common.sh@662 -- # es=108 00:25:36.821 21:47:57 -- common/autotest_common.sh@663 -- # case "$es" in 00:25:36.821 21:47:57 -- common/autotest_common.sh@670 -- # es=1 00:25:36.821 21:47:57 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:25:36.821 21:47:57 -- dd/posix.sh@32 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:25:36.821 21:47:57 -- common/autotest_common.sh@650 -- # local es=0 00:25:36.821 21:47:57 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:25:36.821 21:47:57 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:25:36.821 21:47:57 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:36.821 21:47:57 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:25:36.821 21:47:57 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:36.821 21:47:57 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:25:36.821 21:47:57 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:36.821 21:47:57 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:25:36.821 21:47:57 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:25:36.821 21:47:57 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:25:36.821 [2024-12-06 21:47:57.213566] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
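Both directory runs are negative tests: opening a regular dump file with --iflag=directory or --oflag=directory must fail with "Not a directory", and the NOT wrapper turns that expected non-zero exit into a pass (the es=236 -> es=108 -> es=1 bookkeeping above is its exit-code remapping). The contract, sketched; the real helper in autotest_common.sh performs the remapping shown in the log rather than this one-liner:

    NOT() { if "$@"; then return 1; else return 0; fi; }   # pass only on failure (sketch)
    NOT spdk_dd --if=test/dd/dd.dump0 --iflag=directory --of=test/dd/dd.dump0
    NOT spdk_dd --if=test/dd/dd.dump0 --of=test/dd/dd.dump0 --oflag=directory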
00:25:36.821 [2024-12-06 21:47:57.213720] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88659 ] 00:25:37.080 [2024-12-06 21:47:57.383073] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:37.080 [2024-12-06 21:47:57.537689] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:37.339 [2024-12-06 21:47:57.757327] spdk_dd.c: 893:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:25:37.339 [2024-12-06 21:47:57.757403] spdk_dd.c:1116:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:25:37.339 [2024-12-06 21:47:57.757422] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:25:37.908 [2024-12-06 21:47:58.307729] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:25:38.168 21:47:58 -- common/autotest_common.sh@653 -- # es=236 00:25:38.168 21:47:58 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:25:38.168 21:47:58 -- common/autotest_common.sh@662 -- # es=108 00:25:38.168 21:47:58 -- common/autotest_common.sh@663 -- # case "$es" in 00:25:38.168 21:47:58 -- common/autotest_common.sh@670 -- # es=1 00:25:38.168 21:47:58 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:25:38.168 00:25:38.168 real 0m3.009s 00:25:38.168 user 0m2.400s 00:25:38.168 sys 0m0.407s 00:25:38.168 21:47:58 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:25:38.168 ************************************ 00:25:38.168 END TEST dd_flag_directory 00:25:38.168 ************************************ 00:25:38.168 21:47:58 -- common/autotest_common.sh@10 -- # set +x 00:25:38.428 21:47:58 -- dd/posix.sh@104 -- # run_test dd_flag_nofollow nofollow 00:25:38.428 21:47:58 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:25:38.428 21:47:58 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:25:38.428 21:47:58 -- common/autotest_common.sh@10 -- # set +x 00:25:38.428 ************************************ 00:25:38.428 START TEST dd_flag_nofollow 00:25:38.428 ************************************ 00:25:38.428 21:47:58 -- common/autotest_common.sh@1114 -- # nofollow 00:25:38.428 21:47:58 -- dd/posix.sh@36 -- # local test_file0_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:25:38.428 21:47:58 -- dd/posix.sh@37 -- # local test_file1_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:25:38.428 21:47:58 -- dd/posix.sh@39 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:25:38.428 21:47:58 -- dd/posix.sh@40 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:25:38.428 21:47:58 -- dd/posix.sh@42 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:25:38.428 21:47:58 -- common/autotest_common.sh@650 -- # local es=0 00:25:38.428 21:47:58 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:25:38.428 21:47:58 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:25:38.428 21:47:58 -- 
common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:38.428 21:47:58 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:25:38.428 21:47:58 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:38.428 21:47:58 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:25:38.428 21:47:58 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:38.428 21:47:58 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:25:38.428 21:47:58 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:25:38.428 21:47:58 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:25:38.428 [2024-12-06 21:47:58.772050] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:25:38.428 [2024-12-06 21:47:58.772220] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88699 ] 00:25:38.688 [2024-12-06 21:47:58.942020] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:38.688 [2024-12-06 21:47:59.091780] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:38.969 [2024-12-06 21:47:59.316130] spdk_dd.c: 893:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:25:38.969 [2024-12-06 21:47:59.316211] spdk_dd.c:1067:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:25:38.969 [2024-12-06 21:47:59.316232] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:25:39.578 [2024-12-06 21:47:59.873768] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:25:39.837 21:48:00 -- common/autotest_common.sh@653 -- # es=216 00:25:39.837 21:48:00 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:25:39.837 21:48:00 -- common/autotest_common.sh@662 -- # es=88 00:25:39.837 21:48:00 -- common/autotest_common.sh@663 -- # case "$es" in 00:25:39.837 21:48:00 -- common/autotest_common.sh@670 -- # es=1 00:25:39.837 21:48:00 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:25:39.837 21:48:00 -- dd/posix.sh@43 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:25:39.837 21:48:00 -- common/autotest_common.sh@650 -- # local es=0 00:25:39.837 21:48:00 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:25:39.837 21:48:00 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:25:39.837 21:48:00 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:39.837 21:48:00 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:25:39.837 21:48:00 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:39.837 21:48:00 -- common/autotest_common.sh@644 -- # type -P 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:25:39.837 21:48:00 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:39.837 21:48:00 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:25:39.837 21:48:00 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:25:39.837 21:48:00 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:25:39.837 [2024-12-06 21:48:00.283691] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:25:39.837 [2024-12-06 21:48:00.283854] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88722 ] 00:25:40.096 [2024-12-06 21:48:00.435411] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:40.356 [2024-12-06 21:48:00.596227] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:40.356 [2024-12-06 21:48:00.817936] spdk_dd.c: 893:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:25:40.356 [2024-12-06 21:48:00.818033] spdk_dd.c:1116:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:25:40.356 [2024-12-06 21:48:00.818067] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:25:40.923 [2024-12-06 21:48:01.399503] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:25:41.489 21:48:01 -- common/autotest_common.sh@653 -- # es=216 00:25:41.489 21:48:01 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:25:41.489 21:48:01 -- common/autotest_common.sh@662 -- # es=88 00:25:41.489 21:48:01 -- common/autotest_common.sh@663 -- # case "$es" in 00:25:41.489 21:48:01 -- common/autotest_common.sh@670 -- # es=1 00:25:41.489 21:48:01 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:25:41.489 21:48:01 -- dd/posix.sh@46 -- # gen_bytes 512 00:25:41.489 21:48:01 -- dd/common.sh@98 -- # xtrace_disable 00:25:41.489 21:48:01 -- common/autotest_common.sh@10 -- # set +x 00:25:41.489 21:48:01 -- dd/posix.sh@48 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:25:41.489 [2024-12-06 21:48:01.823853] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
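Aside on the nofollow runs above: the two NOT-wrapped invocations expect ELOOP from --iflag=nofollow / --oflag=nofollow when the path is a symlink, while the final run dereferences the link normally. The same semantics can be reproduced with GNU dd, which also accepts a nofollow flag (scratch paths below are hypothetical, not the suite's fixtures):

    printf 'payload' > /tmp/nf_src
    ln -fs /tmp/nf_src /tmp/nf_src.link
    # O_NOFOLLOW on a symlink fails with "Too many levels of symbolic links"
    dd if=/tmp/nf_src.link iflag=nofollow of=/tmp/nf_dst 2>&1 | grep -i symbolic
    # without nofollow the link is dereferenced and the copy succeeds
    dd if=/tmp/nf_src.link of=/tmp/nf_dst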
00:25:41.489 [2024-12-06 21:48:01.824017] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88736 ] 00:25:41.749 [2024-12-06 21:48:01.992913] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:41.749 [2024-12-06 21:48:02.157183] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:42.008  [2024-12-06T21:48:03.442Z] Copying: 512/512 [B] (average 500 kBps) 00:25:42.945 00:25:42.945 21:48:03 -- dd/posix.sh@49 -- # [[ u6uw1ocl56uwwptti8sj8l2xet7fn46su5ivi9trnkdoxf9d5ew1l3t2gqgdmx3pll1ogfuwjv5ivzxdovazdiht4e4d33lf5hax0d8k0qbxfxvt51b3mzp6ndvnxg4cn1k8nm6v5kct9d90bqor487latghkh7bhsgv0kmhn14fho6ccvxzisld02q5kzn85hjd34rjo3w2x674la09p2qgednk1psxbehyvfp437cggja9os7kghmsblmi20yk92m9w8l3wur3pn69p9wc33cnuaxiyb8e3ur0i0yw7lrpjzsogq18mjs23xg6fj5ye74iu2sb3ir02nw1cjaxmh6v19kcio6tkhuvo0cbpmou44xet8yl2ip7llnqg420bn1wu1r8fze2022kkyclgi8f5euwox6ereb06xso8ja4567hvh9odceyqvtptlk00k005u4hhrin4g3paypxu3iobtnoxde1td0cha70bnv3e2upn11vbm8g91p597xp == \u\6\u\w\1\o\c\l\5\6\u\w\w\p\t\t\i\8\s\j\8\l\2\x\e\t\7\f\n\4\6\s\u\5\i\v\i\9\t\r\n\k\d\o\x\f\9\d\5\e\w\1\l\3\t\2\g\q\g\d\m\x\3\p\l\l\1\o\g\f\u\w\j\v\5\i\v\z\x\d\o\v\a\z\d\i\h\t\4\e\4\d\3\3\l\f\5\h\a\x\0\d\8\k\0\q\b\x\f\x\v\t\5\1\b\3\m\z\p\6\n\d\v\n\x\g\4\c\n\1\k\8\n\m\6\v\5\k\c\t\9\d\9\0\b\q\o\r\4\8\7\l\a\t\g\h\k\h\7\b\h\s\g\v\0\k\m\h\n\1\4\f\h\o\6\c\c\v\x\z\i\s\l\d\0\2\q\5\k\z\n\8\5\h\j\d\3\4\r\j\o\3\w\2\x\6\7\4\l\a\0\9\p\2\q\g\e\d\n\k\1\p\s\x\b\e\h\y\v\f\p\4\3\7\c\g\g\j\a\9\o\s\7\k\g\h\m\s\b\l\m\i\2\0\y\k\9\2\m\9\w\8\l\3\w\u\r\3\p\n\6\9\p\9\w\c\3\3\c\n\u\a\x\i\y\b\8\e\3\u\r\0\i\0\y\w\7\l\r\p\j\z\s\o\g\q\1\8\m\j\s\2\3\x\g\6\f\j\5\y\e\7\4\i\u\2\s\b\3\i\r\0\2\n\w\1\c\j\a\x\m\h\6\v\1\9\k\c\i\o\6\t\k\h\u\v\o\0\c\b\p\m\o\u\4\4\x\e\t\8\y\l\2\i\p\7\l\l\n\q\g\4\2\0\b\n\1\w\u\1\r\8\f\z\e\2\0\2\2\k\k\y\c\l\g\i\8\f\5\e\u\w\o\x\6\e\r\e\b\0\6\x\s\o\8\j\a\4\5\6\7\h\v\h\9\o\d\c\e\y\q\v\t\p\t\l\k\0\0\k\0\0\5\u\4\h\h\r\i\n\4\g\3\p\a\y\p\x\u\3\i\o\b\t\n\o\x\d\e\1\t\d\0\c\h\a\7\0\b\n\v\3\e\2\u\p\n\1\1\v\b\m\8\g\9\1\p\5\9\7\x\p ]] 00:25:42.945 00:25:42.945 real 0m4.662s 00:25:42.945 user 0m3.761s 00:25:42.945 sys 0m0.584s 00:25:42.945 21:48:03 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:25:42.945 21:48:03 -- common/autotest_common.sh@10 -- # set +x 00:25:42.945 ************************************ 00:25:42.945 END TEST dd_flag_nofollow 00:25:42.945 ************************************ 00:25:42.945 21:48:03 -- dd/posix.sh@105 -- # run_test dd_flag_noatime noatime 00:25:42.945 21:48:03 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:25:42.945 21:48:03 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:25:42.945 21:48:03 -- common/autotest_common.sh@10 -- # set +x 00:25:42.945 ************************************ 00:25:42.945 START TEST dd_flag_noatime 00:25:42.945 ************************************ 00:25:42.945 21:48:03 -- common/autotest_common.sh@1114 -- # noatime 00:25:42.945 21:48:03 -- dd/posix.sh@53 -- # local atime_if 00:25:42.945 21:48:03 -- dd/posix.sh@54 -- # local atime_of 00:25:42.945 21:48:03 -- dd/posix.sh@58 -- # gen_bytes 512 00:25:42.945 21:48:03 -- dd/common.sh@98 -- # xtrace_disable 00:25:42.945 21:48:03 -- common/autotest_common.sh@10 -- # set +x 00:25:42.945 21:48:03 -- dd/posix.sh@60 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:25:42.945 21:48:03 -- dd/posix.sh@60 -- # atime_if=1733521682 
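The dd_flag_noatime test starting here captures the source file's access time with stat --printf=%X before and after the copy; with --iflag=noatime the read must leave it unchanged. A minimal standalone sketch of the same check using GNU dd's noatime flag (note O_NOATIME requires owning the file or CAP_FOWNER, and a relatime mount can mask the difference):

    src=/tmp/na_src                      # hypothetical scratch file
    printf 'x' > "$src"
    before=$(stat --printf=%X "$src")
    sleep 1                              # let the clock tick past the stored atime
    dd if="$src" iflag=noatime of=/dev/null 2>/dev/null
    after=$(stat --printf=%X "$src")
    (( after == before )) && echo "atime preserved"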
00:25:42.945 21:48:03 -- dd/posix.sh@61 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:25:42.945 21:48:03 -- dd/posix.sh@61 -- # atime_of=1733521683 00:25:42.945 21:48:03 -- dd/posix.sh@66 -- # sleep 1 00:25:44.324 21:48:04 -- dd/posix.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=noatime --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:25:44.324 [2024-12-06 21:48:04.502916] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:25:44.324 [2024-12-06 21:48:04.503081] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88789 ] 00:25:44.324 [2024-12-06 21:48:04.673937] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:44.584 [2024-12-06 21:48:04.833845] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:44.584  [2024-12-06T21:48:06.023Z] Copying: 512/512 [B] (average 500 kBps) 00:25:45.526 00:25:45.526 21:48:05 -- dd/posix.sh@69 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:25:45.526 21:48:05 -- dd/posix.sh@69 -- # (( atime_if == 1733521682 )) 00:25:45.526 21:48:05 -- dd/posix.sh@70 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:25:45.526 21:48:05 -- dd/posix.sh@70 -- # (( atime_of == 1733521683 )) 00:25:45.526 21:48:05 -- dd/posix.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:25:45.786 [2024-12-06 21:48:06.023289] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:25:45.786 [2024-12-06 21:48:06.023470] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88808 ] 00:25:45.786 [2024-12-06 21:48:06.190128] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:46.046 [2024-12-06 21:48:06.339692] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:46.305  [2024-12-06T21:48:07.738Z] Copying: 512/512 [B] (average 500 kBps) 00:25:47.241 00:25:47.241 21:48:07 -- dd/posix.sh@73 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:25:47.241 21:48:07 -- dd/posix.sh@73 -- # (( atime_if < 1733521686 )) 00:25:47.241 00:25:47.241 real 0m4.061s 00:25:47.241 user 0m2.453s 00:25:47.241 sys 0m0.385s 00:25:47.241 21:48:07 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:25:47.241 ************************************ 00:25:47.241 END TEST dd_flag_noatime 00:25:47.241 ************************************ 00:25:47.241 21:48:07 -- common/autotest_common.sh@10 -- # set +x 00:25:47.241 21:48:07 -- dd/posix.sh@106 -- # run_test dd_flags_misc io 00:25:47.241 21:48:07 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:25:47.241 21:48:07 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:25:47.241 21:48:07 -- common/autotest_common.sh@10 -- # set +x 00:25:47.241 ************************************ 00:25:47.241 START TEST dd_flags_misc 00:25:47.241 ************************************ 00:25:47.241 21:48:07 -- common/autotest_common.sh@1114 -- # io 00:25:47.241 21:48:07 -- dd/posix.sh@77 -- # local flags_ro flags_rw flag_ro flag_rw 00:25:47.241 21:48:07 -- dd/posix.sh@81 -- # flags_ro=(direct nonblock) 00:25:47.241 21:48:07 -- dd/posix.sh@82 -- # flags_rw=("${flags_ro[@]}" sync dsync) 00:25:47.241 21:48:07 -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:25:47.241 21:48:07 -- dd/posix.sh@86 -- # gen_bytes 512 00:25:47.241 21:48:07 -- dd/common.sh@98 -- # xtrace_disable 00:25:47.241 21:48:07 -- common/autotest_common.sh@10 -- # set +x 00:25:47.241 21:48:07 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:25:47.241 21:48:07 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:25:47.241 [2024-12-06 21:48:07.580627] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
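dd_flags_misc, which begins here, drives a small matrix: read-side flags (direct, nonblock) crossed with write-side flags (direct, nonblock, sync, dsync), eight copies in total, each followed by an input/output comparison. The flags_ro and flags_rw assignments visible in the trace give the driver loop the shape below, with run_dd as a hypothetical stand-in for the spdk_dd invocation (a sketch of the pattern, not the suite's exact code):

    flags_ro=(direct nonblock)
    flags_rw=("${flags_ro[@]}" sync dsync)
    for flag_ro in "${flags_ro[@]}"; do
      for flag_rw in "${flags_rw[@]}"; do
        run_dd --if=dd.dump0 --iflag="$flag_ro" --of=dd.dump1 --oflag="$flag_rw"
      done
    done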
00:25:47.241 [2024-12-06 21:48:07.580762] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88851 ] 00:25:47.241 [2024-12-06 21:48:07.730903] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:47.500 [2024-12-06 21:48:07.881593] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:47.758  [2024-12-06T21:48:09.193Z] Copying: 512/512 [B] (average 500 kBps) 00:25:48.696 00:25:48.696 21:48:09 -- dd/posix.sh@93 -- # [[ vv53zxtuowrpwcyi6pwirj7aoli4j6llsjca93n9th7oziifuwh45fiph99lxk92ng72q9xeblyumkaq00wdequoh5wng1ki0oxv3ve44pqftal2isw5mn5gii1sdoqowf89mn7uxo8m20luper94eaxfh33j0dlmr11dig63on4g4w6id04qdq1j8olqi5fbczrme9hwhot2kqcq5ftbv9ynan1zrmlpw5bzrkfs1lyvhvxkwmnuwjl7k9n9xqhqvlqzcb7tspvexzxqu5asqabc65z9hq8e48ud4cwr82w80qnb16cffjo0h9mik8mww7jh2gmpuwvj58mwuzl8hoarbu1w5phyr5703xvpejj0r9xrmot0aqztsj5xclmyiwdej8s51jl03k9kfvx6alvl853h8iencl876ohth521n3ux506pb2by370bt88gmfr8mvvp7wsws37evpluraxojc3b82oxq5p6nbb3fzxe252uws8l7izyyykfgpq == \v\v\5\3\z\x\t\u\o\w\r\p\w\c\y\i\6\p\w\i\r\j\7\a\o\l\i\4\j\6\l\l\s\j\c\a\9\3\n\9\t\h\7\o\z\i\i\f\u\w\h\4\5\f\i\p\h\9\9\l\x\k\9\2\n\g\7\2\q\9\x\e\b\l\y\u\m\k\a\q\0\0\w\d\e\q\u\o\h\5\w\n\g\1\k\i\0\o\x\v\3\v\e\4\4\p\q\f\t\a\l\2\i\s\w\5\m\n\5\g\i\i\1\s\d\o\q\o\w\f\8\9\m\n\7\u\x\o\8\m\2\0\l\u\p\e\r\9\4\e\a\x\f\h\3\3\j\0\d\l\m\r\1\1\d\i\g\6\3\o\n\4\g\4\w\6\i\d\0\4\q\d\q\1\j\8\o\l\q\i\5\f\b\c\z\r\m\e\9\h\w\h\o\t\2\k\q\c\q\5\f\t\b\v\9\y\n\a\n\1\z\r\m\l\p\w\5\b\z\r\k\f\s\1\l\y\v\h\v\x\k\w\m\n\u\w\j\l\7\k\9\n\9\x\q\h\q\v\l\q\z\c\b\7\t\s\p\v\e\x\z\x\q\u\5\a\s\q\a\b\c\6\5\z\9\h\q\8\e\4\8\u\d\4\c\w\r\8\2\w\8\0\q\n\b\1\6\c\f\f\j\o\0\h\9\m\i\k\8\m\w\w\7\j\h\2\g\m\p\u\w\v\j\5\8\m\w\u\z\l\8\h\o\a\r\b\u\1\w\5\p\h\y\r\5\7\0\3\x\v\p\e\j\j\0\r\9\x\r\m\o\t\0\a\q\z\t\s\j\5\x\c\l\m\y\i\w\d\e\j\8\s\5\1\j\l\0\3\k\9\k\f\v\x\6\a\l\v\l\8\5\3\h\8\i\e\n\c\l\8\7\6\o\h\t\h\5\2\1\n\3\u\x\5\0\6\p\b\2\b\y\3\7\0\b\t\8\8\g\m\f\r\8\m\v\v\p\7\w\s\w\s\3\7\e\v\p\l\u\r\a\x\o\j\c\3\b\8\2\o\x\q\5\p\6\n\b\b\3\f\z\x\e\2\5\2\u\w\s\8\l\7\i\z\y\y\y\k\f\g\p\q ]] 00:25:48.696 21:48:09 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:25:48.696 21:48:09 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:25:48.696 [2024-12-06 21:48:09.080216] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:25:48.696 [2024-12-06 21:48:09.080621] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88865 ] 00:25:48.954 [2024-12-06 21:48:09.248484] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:48.954 [2024-12-06 21:48:09.398408] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:49.214  [2024-12-06T21:48:10.648Z] Copying: 512/512 [B] (average 500 kBps) 00:25:50.151 00:25:50.151 21:48:10 -- dd/posix.sh@93 -- # [[ vv53zxtuowrpwcyi6pwirj7aoli4j6llsjca93n9th7oziifuwh45fiph99lxk92ng72q9xeblyumkaq00wdequoh5wng1ki0oxv3ve44pqftal2isw5mn5gii1sdoqowf89mn7uxo8m20luper94eaxfh33j0dlmr11dig63on4g4w6id04qdq1j8olqi5fbczrme9hwhot2kqcq5ftbv9ynan1zrmlpw5bzrkfs1lyvhvxkwmnuwjl7k9n9xqhqvlqzcb7tspvexzxqu5asqabc65z9hq8e48ud4cwr82w80qnb16cffjo0h9mik8mww7jh2gmpuwvj58mwuzl8hoarbu1w5phyr5703xvpejj0r9xrmot0aqztsj5xclmyiwdej8s51jl03k9kfvx6alvl853h8iencl876ohth521n3ux506pb2by370bt88gmfr8mvvp7wsws37evpluraxojc3b82oxq5p6nbb3fzxe252uws8l7izyyykfgpq == \v\v\5\3\z\x\t\u\o\w\r\p\w\c\y\i\6\p\w\i\r\j\7\a\o\l\i\4\j\6\l\l\s\j\c\a\9\3\n\9\t\h\7\o\z\i\i\f\u\w\h\4\5\f\i\p\h\9\9\l\x\k\9\2\n\g\7\2\q\9\x\e\b\l\y\u\m\k\a\q\0\0\w\d\e\q\u\o\h\5\w\n\g\1\k\i\0\o\x\v\3\v\e\4\4\p\q\f\t\a\l\2\i\s\w\5\m\n\5\g\i\i\1\s\d\o\q\o\w\f\8\9\m\n\7\u\x\o\8\m\2\0\l\u\p\e\r\9\4\e\a\x\f\h\3\3\j\0\d\l\m\r\1\1\d\i\g\6\3\o\n\4\g\4\w\6\i\d\0\4\q\d\q\1\j\8\o\l\q\i\5\f\b\c\z\r\m\e\9\h\w\h\o\t\2\k\q\c\q\5\f\t\b\v\9\y\n\a\n\1\z\r\m\l\p\w\5\b\z\r\k\f\s\1\l\y\v\h\v\x\k\w\m\n\u\w\j\l\7\k\9\n\9\x\q\h\q\v\l\q\z\c\b\7\t\s\p\v\e\x\z\x\q\u\5\a\s\q\a\b\c\6\5\z\9\h\q\8\e\4\8\u\d\4\c\w\r\8\2\w\8\0\q\n\b\1\6\c\f\f\j\o\0\h\9\m\i\k\8\m\w\w\7\j\h\2\g\m\p\u\w\v\j\5\8\m\w\u\z\l\8\h\o\a\r\b\u\1\w\5\p\h\y\r\5\7\0\3\x\v\p\e\j\j\0\r\9\x\r\m\o\t\0\a\q\z\t\s\j\5\x\c\l\m\y\i\w\d\e\j\8\s\5\1\j\l\0\3\k\9\k\f\v\x\6\a\l\v\l\8\5\3\h\8\i\e\n\c\l\8\7\6\o\h\t\h\5\2\1\n\3\u\x\5\0\6\p\b\2\b\y\3\7\0\b\t\8\8\g\m\f\r\8\m\v\v\p\7\w\s\w\s\3\7\e\v\p\l\u\r\a\x\o\j\c\3\b\8\2\o\x\q\5\p\6\n\b\b\3\f\z\x\e\2\5\2\u\w\s\8\l\7\i\z\y\y\y\k\f\g\p\q ]] 00:25:50.151 21:48:10 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:25:50.151 21:48:10 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:25:50.151 [2024-12-06 21:48:10.595978] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:25:50.151 [2024-12-06 21:48:10.596359] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88885 ] 00:25:50.410 [2024-12-06 21:48:10.766064] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:50.669 [2024-12-06 21:48:10.923472] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:50.669  [2024-12-06T21:48:12.104Z] Copying: 512/512 [B] (average 125 kBps) 00:25:51.607 00:25:51.607 21:48:12 -- dd/posix.sh@93 -- # [[ vv53zxtuowrpwcyi6pwirj7aoli4j6llsjca93n9th7oziifuwh45fiph99lxk92ng72q9xeblyumkaq00wdequoh5wng1ki0oxv3ve44pqftal2isw5mn5gii1sdoqowf89mn7uxo8m20luper94eaxfh33j0dlmr11dig63on4g4w6id04qdq1j8olqi5fbczrme9hwhot2kqcq5ftbv9ynan1zrmlpw5bzrkfs1lyvhvxkwmnuwjl7k9n9xqhqvlqzcb7tspvexzxqu5asqabc65z9hq8e48ud4cwr82w80qnb16cffjo0h9mik8mww7jh2gmpuwvj58mwuzl8hoarbu1w5phyr5703xvpejj0r9xrmot0aqztsj5xclmyiwdej8s51jl03k9kfvx6alvl853h8iencl876ohth521n3ux506pb2by370bt88gmfr8mvvp7wsws37evpluraxojc3b82oxq5p6nbb3fzxe252uws8l7izyyykfgpq == \v\v\5\3\z\x\t\u\o\w\r\p\w\c\y\i\6\p\w\i\r\j\7\a\o\l\i\4\j\6\l\l\s\j\c\a\9\3\n\9\t\h\7\o\z\i\i\f\u\w\h\4\5\f\i\p\h\9\9\l\x\k\9\2\n\g\7\2\q\9\x\e\b\l\y\u\m\k\a\q\0\0\w\d\e\q\u\o\h\5\w\n\g\1\k\i\0\o\x\v\3\v\e\4\4\p\q\f\t\a\l\2\i\s\w\5\m\n\5\g\i\i\1\s\d\o\q\o\w\f\8\9\m\n\7\u\x\o\8\m\2\0\l\u\p\e\r\9\4\e\a\x\f\h\3\3\j\0\d\l\m\r\1\1\d\i\g\6\3\o\n\4\g\4\w\6\i\d\0\4\q\d\q\1\j\8\o\l\q\i\5\f\b\c\z\r\m\e\9\h\w\h\o\t\2\k\q\c\q\5\f\t\b\v\9\y\n\a\n\1\z\r\m\l\p\w\5\b\z\r\k\f\s\1\l\y\v\h\v\x\k\w\m\n\u\w\j\l\7\k\9\n\9\x\q\h\q\v\l\q\z\c\b\7\t\s\p\v\e\x\z\x\q\u\5\a\s\q\a\b\c\6\5\z\9\h\q\8\e\4\8\u\d\4\c\w\r\8\2\w\8\0\q\n\b\1\6\c\f\f\j\o\0\h\9\m\i\k\8\m\w\w\7\j\h\2\g\m\p\u\w\v\j\5\8\m\w\u\z\l\8\h\o\a\r\b\u\1\w\5\p\h\y\r\5\7\0\3\x\v\p\e\j\j\0\r\9\x\r\m\o\t\0\a\q\z\t\s\j\5\x\c\l\m\y\i\w\d\e\j\8\s\5\1\j\l\0\3\k\9\k\f\v\x\6\a\l\v\l\8\5\3\h\8\i\e\n\c\l\8\7\6\o\h\t\h\5\2\1\n\3\u\x\5\0\6\p\b\2\b\y\3\7\0\b\t\8\8\g\m\f\r\8\m\v\v\p\7\w\s\w\s\3\7\e\v\p\l\u\r\a\x\o\j\c\3\b\8\2\o\x\q\5\p\6\n\b\b\3\f\z\x\e\2\5\2\u\w\s\8\l\7\i\z\y\y\y\k\f\g\p\q ]] 00:25:51.607 21:48:12 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:25:51.607 21:48:12 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:25:51.866 [2024-12-06 21:48:12.115488] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:25:51.866 [2024-12-06 21:48:12.116058] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88899 ] 00:25:51.866 [2024-12-06 21:48:12.283270] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:52.125 [2024-12-06 21:48:12.435085] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:52.385  [2024-12-06T21:48:13.819Z] Copying: 512/512 [B] (average 250 kBps) 00:25:53.322 00:25:53.322 21:48:13 -- dd/posix.sh@93 -- # [[ vv53zxtuowrpwcyi6pwirj7aoli4j6llsjca93n9th7oziifuwh45fiph99lxk92ng72q9xeblyumkaq00wdequoh5wng1ki0oxv3ve44pqftal2isw5mn5gii1sdoqowf89mn7uxo8m20luper94eaxfh33j0dlmr11dig63on4g4w6id04qdq1j8olqi5fbczrme9hwhot2kqcq5ftbv9ynan1zrmlpw5bzrkfs1lyvhvxkwmnuwjl7k9n9xqhqvlqzcb7tspvexzxqu5asqabc65z9hq8e48ud4cwr82w80qnb16cffjo0h9mik8mww7jh2gmpuwvj58mwuzl8hoarbu1w5phyr5703xvpejj0r9xrmot0aqztsj5xclmyiwdej8s51jl03k9kfvx6alvl853h8iencl876ohth521n3ux506pb2by370bt88gmfr8mvvp7wsws37evpluraxojc3b82oxq5p6nbb3fzxe252uws8l7izyyykfgpq == \v\v\5\3\z\x\t\u\o\w\r\p\w\c\y\i\6\p\w\i\r\j\7\a\o\l\i\4\j\6\l\l\s\j\c\a\9\3\n\9\t\h\7\o\z\i\i\f\u\w\h\4\5\f\i\p\h\9\9\l\x\k\9\2\n\g\7\2\q\9\x\e\b\l\y\u\m\k\a\q\0\0\w\d\e\q\u\o\h\5\w\n\g\1\k\i\0\o\x\v\3\v\e\4\4\p\q\f\t\a\l\2\i\s\w\5\m\n\5\g\i\i\1\s\d\o\q\o\w\f\8\9\m\n\7\u\x\o\8\m\2\0\l\u\p\e\r\9\4\e\a\x\f\h\3\3\j\0\d\l\m\r\1\1\d\i\g\6\3\o\n\4\g\4\w\6\i\d\0\4\q\d\q\1\j\8\o\l\q\i\5\f\b\c\z\r\m\e\9\h\w\h\o\t\2\k\q\c\q\5\f\t\b\v\9\y\n\a\n\1\z\r\m\l\p\w\5\b\z\r\k\f\s\1\l\y\v\h\v\x\k\w\m\n\u\w\j\l\7\k\9\n\9\x\q\h\q\v\l\q\z\c\b\7\t\s\p\v\e\x\z\x\q\u\5\a\s\q\a\b\c\6\5\z\9\h\q\8\e\4\8\u\d\4\c\w\r\8\2\w\8\0\q\n\b\1\6\c\f\f\j\o\0\h\9\m\i\k\8\m\w\w\7\j\h\2\g\m\p\u\w\v\j\5\8\m\w\u\z\l\8\h\o\a\r\b\u\1\w\5\p\h\y\r\5\7\0\3\x\v\p\e\j\j\0\r\9\x\r\m\o\t\0\a\q\z\t\s\j\5\x\c\l\m\y\i\w\d\e\j\8\s\5\1\j\l\0\3\k\9\k\f\v\x\6\a\l\v\l\8\5\3\h\8\i\e\n\c\l\8\7\6\o\h\t\h\5\2\1\n\3\u\x\5\0\6\p\b\2\b\y\3\7\0\b\t\8\8\g\m\f\r\8\m\v\v\p\7\w\s\w\s\3\7\e\v\p\l\u\r\a\x\o\j\c\3\b\8\2\o\x\q\5\p\6\n\b\b\3\f\z\x\e\2\5\2\u\w\s\8\l\7\i\z\y\y\y\k\f\g\p\q ]] 00:25:53.322 21:48:13 -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:25:53.322 21:48:13 -- dd/posix.sh@86 -- # gen_bytes 512 00:25:53.322 21:48:13 -- dd/common.sh@98 -- # xtrace_disable 00:25:53.322 21:48:13 -- common/autotest_common.sh@10 -- # set +x 00:25:53.322 21:48:13 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:25:53.322 21:48:13 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:25:53.322 [2024-12-06 21:48:13.650066] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
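Each pass regenerates the 512-byte input (gen_bytes 512) and, after the copy, asserts byte equality; in the trace that assertion is the long [[ <data> == <data> ]] line, readable because the generated bytes are alphanumeric. A loose standalone equivalent of the same verify step, using urandom and cmp in place of the suite's helpers:

    head -c 512 /dev/urandom > /tmp/dump0   # stand-in for the suite's gen_bytes 512
    dd if=/tmp/dump0 of=/tmp/dump1 oflag=dsync 2>/dev/null
    cmp -s /tmp/dump0 /tmp/dump1 && echo "copy verified"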
00:25:53.322 [2024-12-06 21:48:13.650232] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88925 ] 00:25:53.581 [2024-12-06 21:48:13.819305] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:53.581 [2024-12-06 21:48:13.970719] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:53.840  [2024-12-06T21:48:15.271Z] Copying: 512/512 [B] (average 500 kBps) 00:25:54.774 00:25:54.774 21:48:15 -- dd/posix.sh@93 -- # [[ ohq1bf4tqchlrify8lgxh3yviehvnesegj90l3abuo7b4uiifxkx4dopubq8li090l4dq1r1p5yxmciwbiqoq2tjkysyiphhgtcysr3qrrrtptx3lcin14kopvaqai4r09380pbslu3bdml8fwdnhvpwzysr1a1tc4mxqhxzbbngo1fwcagqgpe4b5p8fbsrc666vi2oxrprpy94duew969zepet68rvtgvxc0cm8f1205supr4e5g2uwpcjg2ukl5suipunt70immrbdjk6p6w2bsvzfmex8hhnanqh12gbaew89qh0b4acv5v3gf641hmjvue6ejp2ambudzircos7yaxl6ff0hk8wv53b0oiw37imctrroclrzug6wq7uo5ls4ieijirw2drjfhl3uxz7sshxblju41a5nmbr9x1opefzwmhopu4uzkj6ikgchzdvheyb4k91s9q8dp2qyn9lqqqdz4ih8kn6dujyhfqbe87fozmu7dtzleal2ffw == \o\h\q\1\b\f\4\t\q\c\h\l\r\i\f\y\8\l\g\x\h\3\y\v\i\e\h\v\n\e\s\e\g\j\9\0\l\3\a\b\u\o\7\b\4\u\i\i\f\x\k\x\4\d\o\p\u\b\q\8\l\i\0\9\0\l\4\d\q\1\r\1\p\5\y\x\m\c\i\w\b\i\q\o\q\2\t\j\k\y\s\y\i\p\h\h\g\t\c\y\s\r\3\q\r\r\r\t\p\t\x\3\l\c\i\n\1\4\k\o\p\v\a\q\a\i\4\r\0\9\3\8\0\p\b\s\l\u\3\b\d\m\l\8\f\w\d\n\h\v\p\w\z\y\s\r\1\a\1\t\c\4\m\x\q\h\x\z\b\b\n\g\o\1\f\w\c\a\g\q\g\p\e\4\b\5\p\8\f\b\s\r\c\6\6\6\v\i\2\o\x\r\p\r\p\y\9\4\d\u\e\w\9\6\9\z\e\p\e\t\6\8\r\v\t\g\v\x\c\0\c\m\8\f\1\2\0\5\s\u\p\r\4\e\5\g\2\u\w\p\c\j\g\2\u\k\l\5\s\u\i\p\u\n\t\7\0\i\m\m\r\b\d\j\k\6\p\6\w\2\b\s\v\z\f\m\e\x\8\h\h\n\a\n\q\h\1\2\g\b\a\e\w\8\9\q\h\0\b\4\a\c\v\5\v\3\g\f\6\4\1\h\m\j\v\u\e\6\e\j\p\2\a\m\b\u\d\z\i\r\c\o\s\7\y\a\x\l\6\f\f\0\h\k\8\w\v\5\3\b\0\o\i\w\3\7\i\m\c\t\r\r\o\c\l\r\z\u\g\6\w\q\7\u\o\5\l\s\4\i\e\i\j\i\r\w\2\d\r\j\f\h\l\3\u\x\z\7\s\s\h\x\b\l\j\u\4\1\a\5\n\m\b\r\9\x\1\o\p\e\f\z\w\m\h\o\p\u\4\u\z\k\j\6\i\k\g\c\h\z\d\v\h\e\y\b\4\k\9\1\s\9\q\8\d\p\2\q\y\n\9\l\q\q\q\d\z\4\i\h\8\k\n\6\d\u\j\y\h\f\q\b\e\8\7\f\o\z\m\u\7\d\t\z\l\e\a\l\2\f\f\w ]] 00:25:54.774 21:48:15 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:25:54.774 21:48:15 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:25:54.774 [2024-12-06 21:48:15.159316] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:25:54.774 [2024-12-06 21:48:15.159498] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88939 ] 00:25:55.031 [2024-12-06 21:48:15.329641] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:55.031 [2024-12-06 21:48:15.479958] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:55.289  [2024-12-06T21:48:16.720Z] Copying: 512/512 [B] (average 500 kBps) 00:25:56.223 00:25:56.223 21:48:16 -- dd/posix.sh@93 -- # [[ ohq1bf4tqchlrify8lgxh3yviehvnesegj90l3abuo7b4uiifxkx4dopubq8li090l4dq1r1p5yxmciwbiqoq2tjkysyiphhgtcysr3qrrrtptx3lcin14kopvaqai4r09380pbslu3bdml8fwdnhvpwzysr1a1tc4mxqhxzbbngo1fwcagqgpe4b5p8fbsrc666vi2oxrprpy94duew969zepet68rvtgvxc0cm8f1205supr4e5g2uwpcjg2ukl5suipunt70immrbdjk6p6w2bsvzfmex8hhnanqh12gbaew89qh0b4acv5v3gf641hmjvue6ejp2ambudzircos7yaxl6ff0hk8wv53b0oiw37imctrroclrzug6wq7uo5ls4ieijirw2drjfhl3uxz7sshxblju41a5nmbr9x1opefzwmhopu4uzkj6ikgchzdvheyb4k91s9q8dp2qyn9lqqqdz4ih8kn6dujyhfqbe87fozmu7dtzleal2ffw == \o\h\q\1\b\f\4\t\q\c\h\l\r\i\f\y\8\l\g\x\h\3\y\v\i\e\h\v\n\e\s\e\g\j\9\0\l\3\a\b\u\o\7\b\4\u\i\i\f\x\k\x\4\d\o\p\u\b\q\8\l\i\0\9\0\l\4\d\q\1\r\1\p\5\y\x\m\c\i\w\b\i\q\o\q\2\t\j\k\y\s\y\i\p\h\h\g\t\c\y\s\r\3\q\r\r\r\t\p\t\x\3\l\c\i\n\1\4\k\o\p\v\a\q\a\i\4\r\0\9\3\8\0\p\b\s\l\u\3\b\d\m\l\8\f\w\d\n\h\v\p\w\z\y\s\r\1\a\1\t\c\4\m\x\q\h\x\z\b\b\n\g\o\1\f\w\c\a\g\q\g\p\e\4\b\5\p\8\f\b\s\r\c\6\6\6\v\i\2\o\x\r\p\r\p\y\9\4\d\u\e\w\9\6\9\z\e\p\e\t\6\8\r\v\t\g\v\x\c\0\c\m\8\f\1\2\0\5\s\u\p\r\4\e\5\g\2\u\w\p\c\j\g\2\u\k\l\5\s\u\i\p\u\n\t\7\0\i\m\m\r\b\d\j\k\6\p\6\w\2\b\s\v\z\f\m\e\x\8\h\h\n\a\n\q\h\1\2\g\b\a\e\w\8\9\q\h\0\b\4\a\c\v\5\v\3\g\f\6\4\1\h\m\j\v\u\e\6\e\j\p\2\a\m\b\u\d\z\i\r\c\o\s\7\y\a\x\l\6\f\f\0\h\k\8\w\v\5\3\b\0\o\i\w\3\7\i\m\c\t\r\r\o\c\l\r\z\u\g\6\w\q\7\u\o\5\l\s\4\i\e\i\j\i\r\w\2\d\r\j\f\h\l\3\u\x\z\7\s\s\h\x\b\l\j\u\4\1\a\5\n\m\b\r\9\x\1\o\p\e\f\z\w\m\h\o\p\u\4\u\z\k\j\6\i\k\g\c\h\z\d\v\h\e\y\b\4\k\9\1\s\9\q\8\d\p\2\q\y\n\9\l\q\q\q\d\z\4\i\h\8\k\n\6\d\u\j\y\h\f\q\b\e\8\7\f\o\z\m\u\7\d\t\z\l\e\a\l\2\f\f\w ]] 00:25:56.223 21:48:16 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:25:56.223 21:48:16 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:25:56.223 [2024-12-06 21:48:16.670989] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:25:56.223 [2024-12-06 21:48:16.671144] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88959 ] 00:25:56.480 [2024-12-06 21:48:16.840076] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:56.739 [2024-12-06 21:48:16.990851] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:56.739  [2024-12-06T21:48:18.171Z] Copying: 512/512 [B] (average 125 kBps) 00:25:57.674 00:25:57.675 21:48:18 -- dd/posix.sh@93 -- # [[ ohq1bf4tqchlrify8lgxh3yviehvnesegj90l3abuo7b4uiifxkx4dopubq8li090l4dq1r1p5yxmciwbiqoq2tjkysyiphhgtcysr3qrrrtptx3lcin14kopvaqai4r09380pbslu3bdml8fwdnhvpwzysr1a1tc4mxqhxzbbngo1fwcagqgpe4b5p8fbsrc666vi2oxrprpy94duew969zepet68rvtgvxc0cm8f1205supr4e5g2uwpcjg2ukl5suipunt70immrbdjk6p6w2bsvzfmex8hhnanqh12gbaew89qh0b4acv5v3gf641hmjvue6ejp2ambudzircos7yaxl6ff0hk8wv53b0oiw37imctrroclrzug6wq7uo5ls4ieijirw2drjfhl3uxz7sshxblju41a5nmbr9x1opefzwmhopu4uzkj6ikgchzdvheyb4k91s9q8dp2qyn9lqqqdz4ih8kn6dujyhfqbe87fozmu7dtzleal2ffw == \o\h\q\1\b\f\4\t\q\c\h\l\r\i\f\y\8\l\g\x\h\3\y\v\i\e\h\v\n\e\s\e\g\j\9\0\l\3\a\b\u\o\7\b\4\u\i\i\f\x\k\x\4\d\o\p\u\b\q\8\l\i\0\9\0\l\4\d\q\1\r\1\p\5\y\x\m\c\i\w\b\i\q\o\q\2\t\j\k\y\s\y\i\p\h\h\g\t\c\y\s\r\3\q\r\r\r\t\p\t\x\3\l\c\i\n\1\4\k\o\p\v\a\q\a\i\4\r\0\9\3\8\0\p\b\s\l\u\3\b\d\m\l\8\f\w\d\n\h\v\p\w\z\y\s\r\1\a\1\t\c\4\m\x\q\h\x\z\b\b\n\g\o\1\f\w\c\a\g\q\g\p\e\4\b\5\p\8\f\b\s\r\c\6\6\6\v\i\2\o\x\r\p\r\p\y\9\4\d\u\e\w\9\6\9\z\e\p\e\t\6\8\r\v\t\g\v\x\c\0\c\m\8\f\1\2\0\5\s\u\p\r\4\e\5\g\2\u\w\p\c\j\g\2\u\k\l\5\s\u\i\p\u\n\t\7\0\i\m\m\r\b\d\j\k\6\p\6\w\2\b\s\v\z\f\m\e\x\8\h\h\n\a\n\q\h\1\2\g\b\a\e\w\8\9\q\h\0\b\4\a\c\v\5\v\3\g\f\6\4\1\h\m\j\v\u\e\6\e\j\p\2\a\m\b\u\d\z\i\r\c\o\s\7\y\a\x\l\6\f\f\0\h\k\8\w\v\5\3\b\0\o\i\w\3\7\i\m\c\t\r\r\o\c\l\r\z\u\g\6\w\q\7\u\o\5\l\s\4\i\e\i\j\i\r\w\2\d\r\j\f\h\l\3\u\x\z\7\s\s\h\x\b\l\j\u\4\1\a\5\n\m\b\r\9\x\1\o\p\e\f\z\w\m\h\o\p\u\4\u\z\k\j\6\i\k\g\c\h\z\d\v\h\e\y\b\4\k\9\1\s\9\q\8\d\p\2\q\y\n\9\l\q\q\q\d\z\4\i\h\8\k\n\6\d\u\j\y\h\f\q\b\e\8\7\f\o\z\m\u\7\d\t\z\l\e\a\l\2\f\f\w ]] 00:25:57.675 21:48:18 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:25:57.675 21:48:18 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:25:57.934 [2024-12-06 21:48:18.183923] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:25:57.934 [2024-12-06 21:48:18.184077] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88973 ] 00:25:57.934 [2024-12-06 21:48:18.349352] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:58.193 [2024-12-06 21:48:18.503084] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:58.452  [2024-12-06T21:48:19.888Z] Copying: 512/512 [B] (average 125 kBps) 00:25:59.391 00:25:59.391 ************************************ 00:25:59.391 END TEST dd_flags_misc 00:25:59.391 ************************************ 00:25:59.392 21:48:19 -- dd/posix.sh@93 -- # [[ ohq1bf4tqchlrify8lgxh3yviehvnesegj90l3abuo7b4uiifxkx4dopubq8li090l4dq1r1p5yxmciwbiqoq2tjkysyiphhgtcysr3qrrrtptx3lcin14kopvaqai4r09380pbslu3bdml8fwdnhvpwzysr1a1tc4mxqhxzbbngo1fwcagqgpe4b5p8fbsrc666vi2oxrprpy94duew969zepet68rvtgvxc0cm8f1205supr4e5g2uwpcjg2ukl5suipunt70immrbdjk6p6w2bsvzfmex8hhnanqh12gbaew89qh0b4acv5v3gf641hmjvue6ejp2ambudzircos7yaxl6ff0hk8wv53b0oiw37imctrroclrzug6wq7uo5ls4ieijirw2drjfhl3uxz7sshxblju41a5nmbr9x1opefzwmhopu4uzkj6ikgchzdvheyb4k91s9q8dp2qyn9lqqqdz4ih8kn6dujyhfqbe87fozmu7dtzleal2ffw == \o\h\q\1\b\f\4\t\q\c\h\l\r\i\f\y\8\l\g\x\h\3\y\v\i\e\h\v\n\e\s\e\g\j\9\0\l\3\a\b\u\o\7\b\4\u\i\i\f\x\k\x\4\d\o\p\u\b\q\8\l\i\0\9\0\l\4\d\q\1\r\1\p\5\y\x\m\c\i\w\b\i\q\o\q\2\t\j\k\y\s\y\i\p\h\h\g\t\c\y\s\r\3\q\r\r\r\t\p\t\x\3\l\c\i\n\1\4\k\o\p\v\a\q\a\i\4\r\0\9\3\8\0\p\b\s\l\u\3\b\d\m\l\8\f\w\d\n\h\v\p\w\z\y\s\r\1\a\1\t\c\4\m\x\q\h\x\z\b\b\n\g\o\1\f\w\c\a\g\q\g\p\e\4\b\5\p\8\f\b\s\r\c\6\6\6\v\i\2\o\x\r\p\r\p\y\9\4\d\u\e\w\9\6\9\z\e\p\e\t\6\8\r\v\t\g\v\x\c\0\c\m\8\f\1\2\0\5\s\u\p\r\4\e\5\g\2\u\w\p\c\j\g\2\u\k\l\5\s\u\i\p\u\n\t\7\0\i\m\m\r\b\d\j\k\6\p\6\w\2\b\s\v\z\f\m\e\x\8\h\h\n\a\n\q\h\1\2\g\b\a\e\w\8\9\q\h\0\b\4\a\c\v\5\v\3\g\f\6\4\1\h\m\j\v\u\e\6\e\j\p\2\a\m\b\u\d\z\i\r\c\o\s\7\y\a\x\l\6\f\f\0\h\k\8\w\v\5\3\b\0\o\i\w\3\7\i\m\c\t\r\r\o\c\l\r\z\u\g\6\w\q\7\u\o\5\l\s\4\i\e\i\j\i\r\w\2\d\r\j\f\h\l\3\u\x\z\7\s\s\h\x\b\l\j\u\4\1\a\5\n\m\b\r\9\x\1\o\p\e\f\z\w\m\h\o\p\u\4\u\z\k\j\6\i\k\g\c\h\z\d\v\h\e\y\b\4\k\9\1\s\9\q\8\d\p\2\q\y\n\9\l\q\q\q\d\z\4\i\h\8\k\n\6\d\u\j\y\h\f\q\b\e\8\7\f\o\z\m\u\7\d\t\z\l\e\a\l\2\f\f\w ]] 00:25:59.392 00:25:59.392 real 0m12.107s 00:25:59.392 user 0m9.660s 00:25:59.392 sys 0m1.517s 00:25:59.392 21:48:19 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:25:59.392 21:48:19 -- common/autotest_common.sh@10 -- # set +x 00:25:59.392 21:48:19 -- dd/posix.sh@131 -- # tests_forced_aio 00:25:59.392 21:48:19 -- dd/posix.sh@110 -- # printf '* Second test run%s\n' ', disabling liburing, forcing AIO' 00:25:59.392 * Second test run, disabling liburing, forcing AIO 00:25:59.392 21:48:19 -- dd/posix.sh@113 -- # DD_APP+=("--aio") 00:25:59.392 21:48:19 -- dd/posix.sh@114 -- # run_test dd_flag_append_forced_aio append 00:25:59.392 21:48:19 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:25:59.392 21:48:19 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:25:59.392 21:48:19 -- common/autotest_common.sh@10 -- # set +x 00:25:59.392 ************************************ 00:25:59.392 START TEST dd_flag_append_forced_aio 00:25:59.392 ************************************ 00:25:59.392 21:48:19 -- common/autotest_common.sh@1114 -- # append 00:25:59.392 21:48:19 -- dd/posix.sh@16 -- # local dump0 00:25:59.392 21:48:19 -- dd/posix.sh@17 -- # local dump1 00:25:59.392 21:48:19 -- dd/posix.sh@19 -- # gen_bytes 
32 00:25:59.392 21:48:19 -- dd/common.sh@98 -- # xtrace_disable 00:25:59.392 21:48:19 -- common/autotest_common.sh@10 -- # set +x 00:25:59.392 21:48:19 -- dd/posix.sh@19 -- # dump0=imdtgn524pazsoo8kbxgii8oaui9im87 00:25:59.392 21:48:19 -- dd/posix.sh@20 -- # gen_bytes 32 00:25:59.392 21:48:19 -- dd/common.sh@98 -- # xtrace_disable 00:25:59.392 21:48:19 -- common/autotest_common.sh@10 -- # set +x 00:25:59.392 21:48:19 -- dd/posix.sh@20 -- # dump1=lnse3u869uh5oq0jscx9929gj2xmpqgd 00:25:59.392 21:48:19 -- dd/posix.sh@22 -- # printf %s imdtgn524pazsoo8kbxgii8oaui9im87 00:25:59.392 21:48:19 -- dd/posix.sh@23 -- # printf %s lnse3u869uh5oq0jscx9929gj2xmpqgd 00:25:59.392 21:48:19 -- dd/posix.sh@25 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=append 00:25:59.392 [2024-12-06 21:48:19.756996] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:25:59.392 [2024-12-06 21:48:19.757141] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89016 ] 00:25:59.651 [2024-12-06 21:48:19.927596] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:59.651 [2024-12-06 21:48:20.083856] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:59.911  [2024-12-06T21:48:21.346Z] Copying: 32/32 [B] (average 31 kBps) 00:26:00.849 00:26:00.849 21:48:21 -- dd/posix.sh@27 -- # [[ lnse3u869uh5oq0jscx9929gj2xmpqgdimdtgn524pazsoo8kbxgii8oaui9im87 == \l\n\s\e\3\u\8\6\9\u\h\5\o\q\0\j\s\c\x\9\9\2\9\g\j\2\x\m\p\q\g\d\i\m\d\t\g\n\5\2\4\p\a\z\s\o\o\8\k\b\x\g\i\i\8\o\a\u\i\9\i\m\8\7 ]] 00:26:00.849 00:26:00.849 real 0m1.528s 00:26:00.849 user 0m1.221s 00:26:00.849 sys 0m0.195s 00:26:00.849 21:48:21 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:26:00.849 ************************************ 00:26:00.849 21:48:21 -- common/autotest_common.sh@10 -- # set +x 00:26:00.849 END TEST dd_flag_append_forced_aio 00:26:00.849 ************************************ 00:26:00.849 21:48:21 -- dd/posix.sh@115 -- # run_test dd_flag_directory_forced_aio directory 00:26:00.849 21:48:21 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:26:00.849 21:48:21 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:26:00.849 21:48:21 -- common/autotest_common.sh@10 -- # set +x 00:26:00.849 ************************************ 00:26:00.849 START TEST dd_flag_directory_forced_aio 00:26:00.849 ************************************ 00:26:00.849 21:48:21 -- common/autotest_common.sh@1114 -- # directory 00:26:00.849 21:48:21 -- dd/posix.sh@31 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:26:00.849 21:48:21 -- common/autotest_common.sh@650 -- # local es=0 00:26:00.849 21:48:21 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:26:00.849 21:48:21 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:26:00.849 21:48:21 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:26:00.849 21:48:21 -- common/autotest_common.sh@642 -- 
# type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:26:00.849 21:48:21 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:26:00.849 21:48:21 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:26:00.849 21:48:21 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:26:00.849 21:48:21 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:26:00.849 21:48:21 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:26:00.849 21:48:21 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:26:00.849 [2024-12-06 21:48:21.333056] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:26:00.849 [2024-12-06 21:48:21.333210] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89050 ] 00:26:01.113 [2024-12-06 21:48:21.502514] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:01.372 [2024-12-06 21:48:21.655045] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:01.632 [2024-12-06 21:48:21.873819] spdk_dd.c: 893:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:26:01.632 [2024-12-06 21:48:21.873895] spdk_dd.c:1067:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:26:01.632 [2024-12-06 21:48:21.873913] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:26:02.200 [2024-12-06 21:48:22.433082] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:26:02.460 21:48:22 -- common/autotest_common.sh@653 -- # es=236 00:26:02.460 21:48:22 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:26:02.460 21:48:22 -- common/autotest_common.sh@662 -- # es=108 00:26:02.460 21:48:22 -- common/autotest_common.sh@663 -- # case "$es" in 00:26:02.460 21:48:22 -- common/autotest_common.sh@670 -- # es=1 00:26:02.460 21:48:22 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:26:02.460 21:48:22 -- dd/posix.sh@32 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:26:02.460 21:48:22 -- common/autotest_common.sh@650 -- # local es=0 00:26:02.460 21:48:22 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:26:02.460 21:48:22 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:26:02.460 21:48:22 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:26:02.460 21:48:22 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:26:02.460 21:48:22 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:26:02.460 21:48:22 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:26:02.460 21:48:22 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:26:02.460 21:48:22 -- 
common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:26:02.460 21:48:22 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:26:02.460 21:48:22 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:26:02.460 [2024-12-06 21:48:22.842886] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:26:02.460 [2024-12-06 21:48:22.843040] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89072 ] 00:26:02.720 [2024-12-06 21:48:23.012325] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:02.720 [2024-12-06 21:48:23.164198] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:02.980 [2024-12-06 21:48:23.387634] spdk_dd.c: 893:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:26:02.980 [2024-12-06 21:48:23.387709] spdk_dd.c:1116:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:26:02.980 [2024-12-06 21:48:23.387728] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:26:03.552 [2024-12-06 21:48:23.940837] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:26:03.811 21:48:24 -- common/autotest_common.sh@653 -- # es=236 00:26:03.811 21:48:24 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:26:03.811 21:48:24 -- common/autotest_common.sh@662 -- # es=108 00:26:03.811 21:48:24 -- common/autotest_common.sh@663 -- # case "$es" in 00:26:03.811 21:48:24 -- common/autotest_common.sh@670 -- # es=1 00:26:03.811 21:48:24 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:26:03.811 00:26:03.811 real 0m3.011s 00:26:03.811 user 0m2.434s 00:26:03.811 sys 0m0.376s 00:26:03.811 21:48:24 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:26:03.811 ************************************ 00:26:03.811 END TEST dd_flag_directory_forced_aio 00:26:03.811 21:48:24 -- common/autotest_common.sh@10 -- # set +x 00:26:03.811 ************************************ 00:26:04.070 21:48:24 -- dd/posix.sh@116 -- # run_test dd_flag_nofollow_forced_aio nofollow 00:26:04.070 21:48:24 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:26:04.070 21:48:24 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:26:04.070 21:48:24 -- common/autotest_common.sh@10 -- # set +x 00:26:04.070 ************************************ 00:26:04.070 START TEST dd_flag_nofollow_forced_aio 00:26:04.070 ************************************ 00:26:04.070 21:48:24 -- common/autotest_common.sh@1114 -- # nofollow 00:26:04.070 21:48:24 -- dd/posix.sh@36 -- # local test_file0_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:26:04.070 21:48:24 -- dd/posix.sh@37 -- # local test_file1_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:26:04.070 21:48:24 -- dd/posix.sh@39 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:26:04.070 21:48:24 -- dd/posix.sh@40 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:26:04.070 21:48:24 -- dd/posix.sh@42 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio 
--if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:26:04.070 21:48:24 -- common/autotest_common.sh@650 -- # local es=0 00:26:04.070 21:48:24 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:26:04.070 21:48:24 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:26:04.070 21:48:24 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:26:04.070 21:48:24 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:26:04.070 21:48:24 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:26:04.070 21:48:24 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:26:04.070 21:48:24 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:26:04.070 21:48:24 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:26:04.070 21:48:24 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:26:04.070 21:48:24 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:26:04.070 [2024-12-06 21:48:24.387759] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:26:04.070 [2024-12-06 21:48:24.387882] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89107 ] 00:26:04.070 [2024-12-06 21:48:24.540831] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:04.330 [2024-12-06 21:48:24.695709] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:04.589 [2024-12-06 21:48:24.918600] spdk_dd.c: 893:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:26:04.589 [2024-12-06 21:48:24.918679] spdk_dd.c:1067:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:26:04.589 [2024-12-06 21:48:24.918698] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:26:05.155 [2024-12-06 21:48:25.463534] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:26:05.414 21:48:25 -- common/autotest_common.sh@653 -- # es=216 00:26:05.414 21:48:25 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:26:05.414 21:48:25 -- common/autotest_common.sh@662 -- # es=88 00:26:05.414 21:48:25 -- common/autotest_common.sh@663 -- # case "$es" in 00:26:05.414 21:48:25 -- common/autotest_common.sh@670 -- # es=1 00:26:05.414 21:48:25 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:26:05.414 21:48:25 -- dd/posix.sh@43 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:26:05.414 21:48:25 -- common/autotest_common.sh@650 -- # local es=0 00:26:05.414 21:48:25 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio 
--if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:26:05.414 21:48:25 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:26:05.414 21:48:25 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:26:05.414 21:48:25 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:26:05.414 21:48:25 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:26:05.414 21:48:25 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:26:05.414 21:48:25 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:26:05.414 21:48:25 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:26:05.414 21:48:25 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:26:05.414 21:48:25 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:26:05.414 [2024-12-06 21:48:25.861073] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:26:05.414 [2024-12-06 21:48:25.861215] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89133 ] 00:26:05.672 [2024-12-06 21:48:26.005967] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:05.672 [2024-12-06 21:48:26.154119] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:05.930 [2024-12-06 21:48:26.371636] spdk_dd.c: 893:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:26:05.930 [2024-12-06 21:48:26.371712] spdk_dd.c:1116:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:26:05.930 [2024-12-06 21:48:26.371732] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:26:06.498 [2024-12-06 21:48:26.921412] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:26:07.110 21:48:27 -- common/autotest_common.sh@653 -- # es=216 00:26:07.110 21:48:27 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:26:07.110 21:48:27 -- common/autotest_common.sh@662 -- # es=88 00:26:07.110 21:48:27 -- common/autotest_common.sh@663 -- # case "$es" in 00:26:07.110 21:48:27 -- common/autotest_common.sh@670 -- # es=1 00:26:07.110 21:48:27 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:26:07.110 21:48:27 -- dd/posix.sh@46 -- # gen_bytes 512 00:26:07.110 21:48:27 -- dd/common.sh@98 -- # xtrace_disable 00:26:07.110 21:48:27 -- common/autotest_common.sh@10 -- # set +x 00:26:07.110 21:48:27 -- dd/posix.sh@48 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:26:07.110 [2024-12-06 21:48:27.351400] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
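The es= lines repeated through these negative tests normalize spdk_dd's exit status: anything above 128 is folded down by 128 (216 -> 88, 236 -> 108), and any remaining nonzero value collapses to 1, so the final (( !es == 0 )) holds exactly when the command failed, which is what NOT expects. Roughly, as inferred from the trace rather than the helpers' source:

    es=0
    false || es=$?                         # "false" stands in for the failing spdk_dd run
    (( es > 128 )) && es=$(( es - 128 ))   # fold signal-style statuses
    case "$es" in 0) ;; *) es=1 ;; esac    # collapse any failure to 1
    (( !es == 0 )) && echo "failed as expected"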
00:26:07.110 [2024-12-06 21:48:27.351578] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89148 ] 00:26:07.110 [2024-12-06 21:48:27.521035] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:07.383 [2024-12-06 21:48:27.675990] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:07.655  [2024-12-06T21:48:29.087Z] Copying: 512/512 [B] (average 500 kBps) 00:26:08.590 00:26:08.590 21:48:28 -- dd/posix.sh@49 -- # [[ 4kya1udtv8hf61xxa1ajku4t2j66zjcukmxf54lr78lg2xetc97rcznqlq56193atv8jv59jl37w62bp4nzpi2mukir7vpnqo7li0pnfpyw7w69cain0pxn7uoarzo4ab9uxzpq20oja6aegfadqfjl0l1vivhzh9zsf9vzlxphkb9jbqinqgcbk54svlefifjpc35arqhbyji1lawm847mqzn0wwa05k9klptpprzwlp0x6wud8dl50so3bomgxvbcjftgsnng65ivtwymaq0aju0pjjsq8mcf6zriomsr5bfsbvc1iep39zmu0rto7vm8evf9rt4j9hh2ypp0juq2v2gtbg7ry6lycf8s1ugdlf228q077qgcr0fmfhywvq095xoe1nxrtz3ux613s9h53j3oqp9e12hzeuwez8orl2bvkyt5x7ucer6iqoccy4r8dc8weofj0nzmk12dm7mas90pwob53krjqkysj94aak5qpmf9oj22y219fy3jf == \4\k\y\a\1\u\d\t\v\8\h\f\6\1\x\x\a\1\a\j\k\u\4\t\2\j\6\6\z\j\c\u\k\m\x\f\5\4\l\r\7\8\l\g\2\x\e\t\c\9\7\r\c\z\n\q\l\q\5\6\1\9\3\a\t\v\8\j\v\5\9\j\l\3\7\w\6\2\b\p\4\n\z\p\i\2\m\u\k\i\r\7\v\p\n\q\o\7\l\i\0\p\n\f\p\y\w\7\w\6\9\c\a\i\n\0\p\x\n\7\u\o\a\r\z\o\4\a\b\9\u\x\z\p\q\2\0\o\j\a\6\a\e\g\f\a\d\q\f\j\l\0\l\1\v\i\v\h\z\h\9\z\s\f\9\v\z\l\x\p\h\k\b\9\j\b\q\i\n\q\g\c\b\k\5\4\s\v\l\e\f\i\f\j\p\c\3\5\a\r\q\h\b\y\j\i\1\l\a\w\m\8\4\7\m\q\z\n\0\w\w\a\0\5\k\9\k\l\p\t\p\p\r\z\w\l\p\0\x\6\w\u\d\8\d\l\5\0\s\o\3\b\o\m\g\x\v\b\c\j\f\t\g\s\n\n\g\6\5\i\v\t\w\y\m\a\q\0\a\j\u\0\p\j\j\s\q\8\m\c\f\6\z\r\i\o\m\s\r\5\b\f\s\b\v\c\1\i\e\p\3\9\z\m\u\0\r\t\o\7\v\m\8\e\v\f\9\r\t\4\j\9\h\h\2\y\p\p\0\j\u\q\2\v\2\g\t\b\g\7\r\y\6\l\y\c\f\8\s\1\u\g\d\l\f\2\2\8\q\0\7\7\q\g\c\r\0\f\m\f\h\y\w\v\q\0\9\5\x\o\e\1\n\x\r\t\z\3\u\x\6\1\3\s\9\h\5\3\j\3\o\q\p\9\e\1\2\h\z\e\u\w\e\z\8\o\r\l\2\b\v\k\y\t\5\x\7\u\c\e\r\6\i\q\o\c\c\y\4\r\8\d\c\8\w\e\o\f\j\0\n\z\m\k\1\2\d\m\7\m\a\s\9\0\p\w\o\b\5\3\k\r\j\q\k\y\s\j\9\4\a\a\k\5\q\p\m\f\9\o\j\2\2\y\2\1\9\f\y\3\j\f ]] 00:26:08.590 00:26:08.590 real 0m4.482s 00:26:08.590 user 0m3.617s 00:26:08.590 sys 0m0.551s 00:26:08.590 21:48:28 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:26:08.590 21:48:28 -- common/autotest_common.sh@10 -- # set +x 00:26:08.591 ************************************ 00:26:08.591 END TEST dd_flag_nofollow_forced_aio 00:26:08.591 ************************************ 00:26:08.591 21:48:28 -- dd/posix.sh@117 -- # run_test dd_flag_noatime_forced_aio noatime 00:26:08.591 21:48:28 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:26:08.591 21:48:28 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:26:08.591 21:48:28 -- common/autotest_common.sh@10 -- # set +x 00:26:08.591 ************************************ 00:26:08.591 START TEST dd_flag_noatime_forced_aio 00:26:08.591 ************************************ 00:26:08.591 21:48:28 -- common/autotest_common.sh@1114 -- # noatime 00:26:08.591 21:48:28 -- dd/posix.sh@53 -- # local atime_if 00:26:08.591 21:48:28 -- dd/posix.sh@54 -- # local atime_of 00:26:08.591 21:48:28 -- dd/posix.sh@58 -- # gen_bytes 512 00:26:08.591 21:48:28 -- dd/common.sh@98 -- # xtrace_disable 00:26:08.591 21:48:28 -- common/autotest_common.sh@10 -- # set +x 00:26:08.591 21:48:28 -- dd/posix.sh@60 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:26:08.591 21:48:28 -- dd/posix.sh@60 -- 
# atime_if=1733521707 00:26:08.591 21:48:28 -- dd/posix.sh@61 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:26:08.591 21:48:28 -- dd/posix.sh@61 -- # atime_of=1733521708 00:26:08.591 21:48:28 -- dd/posix.sh@66 -- # sleep 1 00:26:09.528 21:48:29 -- dd/posix.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=noatime --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:26:09.528 [2024-12-06 21:48:29.953672] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:26:09.528 [2024-12-06 21:48:29.953838] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89201 ] 00:26:09.787 [2024-12-06 21:48:30.122871] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:09.787 [2024-12-06 21:48:30.271851] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:10.046  [2024-12-06T21:48:31.479Z] Copying: 512/512 [B] (average 500 kBps) 00:26:10.982 00:26:10.982 21:48:31 -- dd/posix.sh@69 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:26:10.982 21:48:31 -- dd/posix.sh@69 -- # (( atime_if == 1733521707 )) 00:26:10.982 21:48:31 -- dd/posix.sh@70 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:26:10.982 21:48:31 -- dd/posix.sh@70 -- # (( atime_of == 1733521708 )) 00:26:10.982 21:48:31 -- dd/posix.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:26:10.982 [2024-12-06 21:48:31.456192] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:26:10.982 [2024-12-06 21:48:31.456320] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89219 ] 00:26:11.242 [2024-12-06 21:48:31.608476] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:11.500 [2024-12-06 21:48:31.756997] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:11.500  [2024-12-06T21:48:32.931Z] Copying: 512/512 [B] (average 500 kBps) 00:26:12.434 00:26:12.434 21:48:32 -- dd/posix.sh@73 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:26:12.434 21:48:32 -- dd/posix.sh@73 -- # (( atime_if < 1733521711 )) 00:26:12.434 00:26:12.434 real 0m4.024s 00:26:12.434 user 0m2.411s 00:26:12.434 sys 0m0.386s 00:26:12.434 21:48:32 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:26:12.434 21:48:32 -- common/autotest_common.sh@10 -- # set +x 00:26:12.434 ************************************ 00:26:12.434 END TEST dd_flag_noatime_forced_aio 00:26:12.434 ************************************ 00:26:12.694 21:48:32 -- dd/posix.sh@118 -- # run_test dd_flags_misc_forced_aio io 00:26:12.694 21:48:32 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:26:12.694 21:48:32 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:26:12.694 21:48:32 -- common/autotest_common.sh@10 -- # set +x 00:26:12.694 ************************************ 00:26:12.694 START TEST dd_flags_misc_forced_aio 00:26:12.694 ************************************ 00:26:12.694 21:48:32 -- common/autotest_common.sh@1114 -- # io 00:26:12.694 21:48:32 -- dd/posix.sh@77 -- # local flags_ro flags_rw flag_ro flag_rw 00:26:12.694 21:48:32 -- dd/posix.sh@81 -- # flags_ro=(direct nonblock) 00:26:12.694 21:48:32 -- dd/posix.sh@82 -- # flags_rw=("${flags_ro[@]}" sync dsync) 00:26:12.694 21:48:32 -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:26:12.694 21:48:32 -- dd/posix.sh@86 -- # gen_bytes 512 00:26:12.694 21:48:32 -- dd/common.sh@98 -- # xtrace_disable 00:26:12.694 21:48:32 -- common/autotest_common.sh@10 -- # set +x 00:26:12.694 21:48:32 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:26:12.694 21:48:32 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:26:12.694 [2024-12-06 21:48:32.995188] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
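From the "Second test run" marker onward, every invocation carries --aio (the DD_APP+=("--aio") seen in the trace), disabling liburing and forcing the POSIX AIO backend; the forced-AIO flags_misc pass starting here repeats the same flag matrix under that backend. Assembled the way the suite does, with a hypothetical binary path:

    DD_APP=(/home/user/spdk/build/bin/spdk_dd)   # hypothetical install path
    DD_APP+=("--aio")                            # disable liburing, force AIO
    "${DD_APP[@]}" --if=dd.dump0 --iflag=direct --of=dd.dump1 --oflag=direct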
00:26:12.694 [2024-12-06 21:48:32.995320] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89258 ] 00:26:12.694 [2024-12-06 21:48:33.144647] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:12.953 [2024-12-06 21:48:33.298365] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:13.211  [2024-12-06T21:48:34.645Z] Copying: 512/512 [B] (average 500 kBps) 00:26:14.148 00:26:14.148 21:48:34 -- dd/posix.sh@93 -- # [[ 56qxep8s52lpc4ytieknrsujh4l3j0i9dst8ol252p8pfgejicmtlx2zwuxyov71ldet06hmyyvvqeswwioq6roh2xi42tz5dfgnofn1h8l6h3tizce5c9euhorevkqffw1lyg4f5l2c2isad04obuh394jqmdakbspvlum6ohmzf8f6wlwmgo5volrefohy2d3fcyxphf1td2p9osm0yr8umnh2ne3fga1t2o5jzdgw91mswc4h4s56u95gydqekmc5cpbs91c9z1q3yrkr4wp48esqjw0jh77518et4sep9imm6ijlou8p7e4xf671i3b7bhxlam4npzjitld69g8jp8a9mfho7f1urxdl20gcklfs19itebodu8rzqp7a1hm18wofm4w7gt3eohjtjfcxko17lqb5xmfgodjokyl09dgxdf4y224pvf2iiwmhwylu3y8xsadod8lnvoyshboogfta2bp3cg17da62mbn6h3wk3xye2dls92djp3wo == \5\6\q\x\e\p\8\s\5\2\l\p\c\4\y\t\i\e\k\n\r\s\u\j\h\4\l\3\j\0\i\9\d\s\t\8\o\l\2\5\2\p\8\p\f\g\e\j\i\c\m\t\l\x\2\z\w\u\x\y\o\v\7\1\l\d\e\t\0\6\h\m\y\y\v\v\q\e\s\w\w\i\o\q\6\r\o\h\2\x\i\4\2\t\z\5\d\f\g\n\o\f\n\1\h\8\l\6\h\3\t\i\z\c\e\5\c\9\e\u\h\o\r\e\v\k\q\f\f\w\1\l\y\g\4\f\5\l\2\c\2\i\s\a\d\0\4\o\b\u\h\3\9\4\j\q\m\d\a\k\b\s\p\v\l\u\m\6\o\h\m\z\f\8\f\6\w\l\w\m\g\o\5\v\o\l\r\e\f\o\h\y\2\d\3\f\c\y\x\p\h\f\1\t\d\2\p\9\o\s\m\0\y\r\8\u\m\n\h\2\n\e\3\f\g\a\1\t\2\o\5\j\z\d\g\w\9\1\m\s\w\c\4\h\4\s\5\6\u\9\5\g\y\d\q\e\k\m\c\5\c\p\b\s\9\1\c\9\z\1\q\3\y\r\k\r\4\w\p\4\8\e\s\q\j\w\0\j\h\7\7\5\1\8\e\t\4\s\e\p\9\i\m\m\6\i\j\l\o\u\8\p\7\e\4\x\f\6\7\1\i\3\b\7\b\h\x\l\a\m\4\n\p\z\j\i\t\l\d\6\9\g\8\j\p\8\a\9\m\f\h\o\7\f\1\u\r\x\d\l\2\0\g\c\k\l\f\s\1\9\i\t\e\b\o\d\u\8\r\z\q\p\7\a\1\h\m\1\8\w\o\f\m\4\w\7\g\t\3\e\o\h\j\t\j\f\c\x\k\o\1\7\l\q\b\5\x\m\f\g\o\d\j\o\k\y\l\0\9\d\g\x\d\f\4\y\2\2\4\p\v\f\2\i\i\w\m\h\w\y\l\u\3\y\8\x\s\a\d\o\d\8\l\n\v\o\y\s\h\b\o\o\g\f\t\a\2\b\p\3\c\g\1\7\d\a\6\2\m\b\n\6\h\3\w\k\3\x\y\e\2\d\l\s\9\2\d\j\p\3\w\o ]] 00:26:14.148 21:48:34 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:26:14.148 21:48:34 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:26:14.148 [2024-12-06 21:48:34.486843] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:26:14.148 [2024-12-06 21:48:34.487015] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89276 ] 00:26:14.408 [2024-12-06 21:48:34.654741] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:14.408 [2024-12-06 21:48:34.801131] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:14.667  [2024-12-06T21:48:36.098Z] Copying: 512/512 [B] (average 500 kBps) 00:26:15.601 00:26:15.601 21:48:35 -- dd/posix.sh@93 -- # [[ 56qxep8s52lpc4ytieknrsujh4l3j0i9dst8ol252p8pfgejicmtlx2zwuxyov71ldet06hmyyvvqeswwioq6roh2xi42tz5dfgnofn1h8l6h3tizce5c9euhorevkqffw1lyg4f5l2c2isad04obuh394jqmdakbspvlum6ohmzf8f6wlwmgo5volrefohy2d3fcyxphf1td2p9osm0yr8umnh2ne3fga1t2o5jzdgw91mswc4h4s56u95gydqekmc5cpbs91c9z1q3yrkr4wp48esqjw0jh77518et4sep9imm6ijlou8p7e4xf671i3b7bhxlam4npzjitld69g8jp8a9mfho7f1urxdl20gcklfs19itebodu8rzqp7a1hm18wofm4w7gt3eohjtjfcxko17lqb5xmfgodjokyl09dgxdf4y224pvf2iiwmhwylu3y8xsadod8lnvoyshboogfta2bp3cg17da62mbn6h3wk3xye2dls92djp3wo == \5\6\q\x\e\p\8\s\5\2\l\p\c\4\y\t\i\e\k\n\r\s\u\j\h\4\l\3\j\0\i\9\d\s\t\8\o\l\2\5\2\p\8\p\f\g\e\j\i\c\m\t\l\x\2\z\w\u\x\y\o\v\7\1\l\d\e\t\0\6\h\m\y\y\v\v\q\e\s\w\w\i\o\q\6\r\o\h\2\x\i\4\2\t\z\5\d\f\g\n\o\f\n\1\h\8\l\6\h\3\t\i\z\c\e\5\c\9\e\u\h\o\r\e\v\k\q\f\f\w\1\l\y\g\4\f\5\l\2\c\2\i\s\a\d\0\4\o\b\u\h\3\9\4\j\q\m\d\a\k\b\s\p\v\l\u\m\6\o\h\m\z\f\8\f\6\w\l\w\m\g\o\5\v\o\l\r\e\f\o\h\y\2\d\3\f\c\y\x\p\h\f\1\t\d\2\p\9\o\s\m\0\y\r\8\u\m\n\h\2\n\e\3\f\g\a\1\t\2\o\5\j\z\d\g\w\9\1\m\s\w\c\4\h\4\s\5\6\u\9\5\g\y\d\q\e\k\m\c\5\c\p\b\s\9\1\c\9\z\1\q\3\y\r\k\r\4\w\p\4\8\e\s\q\j\w\0\j\h\7\7\5\1\8\e\t\4\s\e\p\9\i\m\m\6\i\j\l\o\u\8\p\7\e\4\x\f\6\7\1\i\3\b\7\b\h\x\l\a\m\4\n\p\z\j\i\t\l\d\6\9\g\8\j\p\8\a\9\m\f\h\o\7\f\1\u\r\x\d\l\2\0\g\c\k\l\f\s\1\9\i\t\e\b\o\d\u\8\r\z\q\p\7\a\1\h\m\1\8\w\o\f\m\4\w\7\g\t\3\e\o\h\j\t\j\f\c\x\k\o\1\7\l\q\b\5\x\m\f\g\o\d\j\o\k\y\l\0\9\d\g\x\d\f\4\y\2\2\4\p\v\f\2\i\i\w\m\h\w\y\l\u\3\y\8\x\s\a\d\o\d\8\l\n\v\o\y\s\h\b\o\o\g\f\t\a\2\b\p\3\c\g\1\7\d\a\6\2\m\b\n\6\h\3\w\k\3\x\y\e\2\d\l\s\9\2\d\j\p\3\w\o ]] 00:26:15.601 21:48:35 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:26:15.601 21:48:35 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:26:15.601 [2024-12-06 21:48:35.994711] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:26:15.601 [2024-12-06 21:48:35.994865] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89291 ] 00:26:15.860 [2024-12-06 21:48:36.163469] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:15.860 [2024-12-06 21:48:36.314888] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:16.119  [2024-12-06T21:48:37.551Z] Copying: 512/512 [B] (average 100 kBps) 00:26:17.054 00:26:17.054 21:48:37 -- dd/posix.sh@93 -- # [[ 56qxep8s52lpc4ytieknrsujh4l3j0i9dst8ol252p8pfgejicmtlx2zwuxyov71ldet06hmyyvvqeswwioq6roh2xi42tz5dfgnofn1h8l6h3tizce5c9euhorevkqffw1lyg4f5l2c2isad04obuh394jqmdakbspvlum6ohmzf8f6wlwmgo5volrefohy2d3fcyxphf1td2p9osm0yr8umnh2ne3fga1t2o5jzdgw91mswc4h4s56u95gydqekmc5cpbs91c9z1q3yrkr4wp48esqjw0jh77518et4sep9imm6ijlou8p7e4xf671i3b7bhxlam4npzjitld69g8jp8a9mfho7f1urxdl20gcklfs19itebodu8rzqp7a1hm18wofm4w7gt3eohjtjfcxko17lqb5xmfgodjokyl09dgxdf4y224pvf2iiwmhwylu3y8xsadod8lnvoyshboogfta2bp3cg17da62mbn6h3wk3xye2dls92djp3wo == \5\6\q\x\e\p\8\s\5\2\l\p\c\4\y\t\i\e\k\n\r\s\u\j\h\4\l\3\j\0\i\9\d\s\t\8\o\l\2\5\2\p\8\p\f\g\e\j\i\c\m\t\l\x\2\z\w\u\x\y\o\v\7\1\l\d\e\t\0\6\h\m\y\y\v\v\q\e\s\w\w\i\o\q\6\r\o\h\2\x\i\4\2\t\z\5\d\f\g\n\o\f\n\1\h\8\l\6\h\3\t\i\z\c\e\5\c\9\e\u\h\o\r\e\v\k\q\f\f\w\1\l\y\g\4\f\5\l\2\c\2\i\s\a\d\0\4\o\b\u\h\3\9\4\j\q\m\d\a\k\b\s\p\v\l\u\m\6\o\h\m\z\f\8\f\6\w\l\w\m\g\o\5\v\o\l\r\e\f\o\h\y\2\d\3\f\c\y\x\p\h\f\1\t\d\2\p\9\o\s\m\0\y\r\8\u\m\n\h\2\n\e\3\f\g\a\1\t\2\o\5\j\z\d\g\w\9\1\m\s\w\c\4\h\4\s\5\6\u\9\5\g\y\d\q\e\k\m\c\5\c\p\b\s\9\1\c\9\z\1\q\3\y\r\k\r\4\w\p\4\8\e\s\q\j\w\0\j\h\7\7\5\1\8\e\t\4\s\e\p\9\i\m\m\6\i\j\l\o\u\8\p\7\e\4\x\f\6\7\1\i\3\b\7\b\h\x\l\a\m\4\n\p\z\j\i\t\l\d\6\9\g\8\j\p\8\a\9\m\f\h\o\7\f\1\u\r\x\d\l\2\0\g\c\k\l\f\s\1\9\i\t\e\b\o\d\u\8\r\z\q\p\7\a\1\h\m\1\8\w\o\f\m\4\w\7\g\t\3\e\o\h\j\t\j\f\c\x\k\o\1\7\l\q\b\5\x\m\f\g\o\d\j\o\k\y\l\0\9\d\g\x\d\f\4\y\2\2\4\p\v\f\2\i\i\w\m\h\w\y\l\u\3\y\8\x\s\a\d\o\d\8\l\n\v\o\y\s\h\b\o\o\g\f\t\a\2\b\p\3\c\g\1\7\d\a\6\2\m\b\n\6\h\3\w\k\3\x\y\e\2\d\l\s\9\2\d\j\p\3\w\o ]] 00:26:17.054 21:48:37 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:26:17.054 21:48:37 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:26:17.054 [2024-12-06 21:48:37.519911] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:26:17.054 [2024-12-06 21:48:37.520069] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89311 ] 00:26:17.312 [2024-12-06 21:48:37.688957] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:17.570 [2024-12-06 21:48:37.838906] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:17.570  [2024-12-06T21:48:39.007Z] Copying: 512/512 [B] (average 100 kBps) 00:26:18.510 00:26:18.510 21:48:38 -- dd/posix.sh@93 -- # [[ 56qxep8s52lpc4ytieknrsujh4l3j0i9dst8ol252p8pfgejicmtlx2zwuxyov71ldet06hmyyvvqeswwioq6roh2xi42tz5dfgnofn1h8l6h3tizce5c9euhorevkqffw1lyg4f5l2c2isad04obuh394jqmdakbspvlum6ohmzf8f6wlwmgo5volrefohy2d3fcyxphf1td2p9osm0yr8umnh2ne3fga1t2o5jzdgw91mswc4h4s56u95gydqekmc5cpbs91c9z1q3yrkr4wp48esqjw0jh77518et4sep9imm6ijlou8p7e4xf671i3b7bhxlam4npzjitld69g8jp8a9mfho7f1urxdl20gcklfs19itebodu8rzqp7a1hm18wofm4w7gt3eohjtjfcxko17lqb5xmfgodjokyl09dgxdf4y224pvf2iiwmhwylu3y8xsadod8lnvoyshboogfta2bp3cg17da62mbn6h3wk3xye2dls92djp3wo == \5\6\q\x\e\p\8\s\5\2\l\p\c\4\y\t\i\e\k\n\r\s\u\j\h\4\l\3\j\0\i\9\d\s\t\8\o\l\2\5\2\p\8\p\f\g\e\j\i\c\m\t\l\x\2\z\w\u\x\y\o\v\7\1\l\d\e\t\0\6\h\m\y\y\v\v\q\e\s\w\w\i\o\q\6\r\o\h\2\x\i\4\2\t\z\5\d\f\g\n\o\f\n\1\h\8\l\6\h\3\t\i\z\c\e\5\c\9\e\u\h\o\r\e\v\k\q\f\f\w\1\l\y\g\4\f\5\l\2\c\2\i\s\a\d\0\4\o\b\u\h\3\9\4\j\q\m\d\a\k\b\s\p\v\l\u\m\6\o\h\m\z\f\8\f\6\w\l\w\m\g\o\5\v\o\l\r\e\f\o\h\y\2\d\3\f\c\y\x\p\h\f\1\t\d\2\p\9\o\s\m\0\y\r\8\u\m\n\h\2\n\e\3\f\g\a\1\t\2\o\5\j\z\d\g\w\9\1\m\s\w\c\4\h\4\s\5\6\u\9\5\g\y\d\q\e\k\m\c\5\c\p\b\s\9\1\c\9\z\1\q\3\y\r\k\r\4\w\p\4\8\e\s\q\j\w\0\j\h\7\7\5\1\8\e\t\4\s\e\p\9\i\m\m\6\i\j\l\o\u\8\p\7\e\4\x\f\6\7\1\i\3\b\7\b\h\x\l\a\m\4\n\p\z\j\i\t\l\d\6\9\g\8\j\p\8\a\9\m\f\h\o\7\f\1\u\r\x\d\l\2\0\g\c\k\l\f\s\1\9\i\t\e\b\o\d\u\8\r\z\q\p\7\a\1\h\m\1\8\w\o\f\m\4\w\7\g\t\3\e\o\h\j\t\j\f\c\x\k\o\1\7\l\q\b\5\x\m\f\g\o\d\j\o\k\y\l\0\9\d\g\x\d\f\4\y\2\2\4\p\v\f\2\i\i\w\m\h\w\y\l\u\3\y\8\x\s\a\d\o\d\8\l\n\v\o\y\s\h\b\o\o\g\f\t\a\2\b\p\3\c\g\1\7\d\a\6\2\m\b\n\6\h\3\w\k\3\x\y\e\2\d\l\s\9\2\d\j\p\3\w\o ]] 00:26:18.510 21:48:38 -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:26:18.510 21:48:38 -- dd/posix.sh@86 -- # gen_bytes 512 00:26:18.510 21:48:38 -- dd/common.sh@98 -- # xtrace_disable 00:26:18.510 21:48:38 -- common/autotest_common.sh@10 -- # set +x 00:26:18.510 21:48:38 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:26:18.510 21:48:38 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:26:18.770 [2024-12-06 21:48:39.048547] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:26:18.770 [2024-12-06 21:48:39.048722] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89325 ] 00:26:18.770 [2024-12-06 21:48:39.216519] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:19.028 [2024-12-06 21:48:39.373587] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:19.286  [2024-12-06T21:48:40.716Z] Copying: 512/512 [B] (average 500 kBps) 00:26:20.219 00:26:20.219 21:48:40 -- dd/posix.sh@93 -- # [[ uyajcupzkuncs9z12vsbt00ttb435yctsr0w72iks6uqpkdpw82w8arcpmjb7thi2jisc9dvhho6cij24fmz8ne01syb3koc2qcwulgqqh8oq78daj6pg6lppdvwnomtxqpuh6zrfv97dsn67rnau5lbhwom9vzgnlz7y948rynz2w81nuqwvvgepeac9241384yifu0h2qsvqhcqxzlzbr2nkdpbxrda9ecf4zby71tguqiyjb5mkvnkvyi2w7n66qgm4gllwf40g10kaegempuipahwl7jsw3k7r0j94we4r0i163ajtbt6eht9ouzjkiajycjukd3ud5nitjglj7un2dpfcv8dsm8zm9joftm9eycc2u4rgbfcusudhku73jjphjg0fgyz87tc2xfz2qfdb8th9ncreupqiz1kcej4zi96p43ee3jpkwcmd66h39b4fx4dwvda7fpjm5yawnggdkw8fxj0er9up7k3u3wvrs5pcsyjom2dqq3wc8v == \u\y\a\j\c\u\p\z\k\u\n\c\s\9\z\1\2\v\s\b\t\0\0\t\t\b\4\3\5\y\c\t\s\r\0\w\7\2\i\k\s\6\u\q\p\k\d\p\w\8\2\w\8\a\r\c\p\m\j\b\7\t\h\i\2\j\i\s\c\9\d\v\h\h\o\6\c\i\j\2\4\f\m\z\8\n\e\0\1\s\y\b\3\k\o\c\2\q\c\w\u\l\g\q\q\h\8\o\q\7\8\d\a\j\6\p\g\6\l\p\p\d\v\w\n\o\m\t\x\q\p\u\h\6\z\r\f\v\9\7\d\s\n\6\7\r\n\a\u\5\l\b\h\w\o\m\9\v\z\g\n\l\z\7\y\9\4\8\r\y\n\z\2\w\8\1\n\u\q\w\v\v\g\e\p\e\a\c\9\2\4\1\3\8\4\y\i\f\u\0\h\2\q\s\v\q\h\c\q\x\z\l\z\b\r\2\n\k\d\p\b\x\r\d\a\9\e\c\f\4\z\b\y\7\1\t\g\u\q\i\y\j\b\5\m\k\v\n\k\v\y\i\2\w\7\n\6\6\q\g\m\4\g\l\l\w\f\4\0\g\1\0\k\a\e\g\e\m\p\u\i\p\a\h\w\l\7\j\s\w\3\k\7\r\0\j\9\4\w\e\4\r\0\i\1\6\3\a\j\t\b\t\6\e\h\t\9\o\u\z\j\k\i\a\j\y\c\j\u\k\d\3\u\d\5\n\i\t\j\g\l\j\7\u\n\2\d\p\f\c\v\8\d\s\m\8\z\m\9\j\o\f\t\m\9\e\y\c\c\2\u\4\r\g\b\f\c\u\s\u\d\h\k\u\7\3\j\j\p\h\j\g\0\f\g\y\z\8\7\t\c\2\x\f\z\2\q\f\d\b\8\t\h\9\n\c\r\e\u\p\q\i\z\1\k\c\e\j\4\z\i\9\6\p\4\3\e\e\3\j\p\k\w\c\m\d\6\6\h\3\9\b\4\f\x\4\d\w\v\d\a\7\f\p\j\m\5\y\a\w\n\g\g\d\k\w\8\f\x\j\0\e\r\9\u\p\7\k\3\u\3\w\v\r\s\5\p\c\s\y\j\o\m\2\d\q\q\3\w\c\8\v ]] 00:26:20.219 21:48:40 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:26:20.219 21:48:40 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:26:20.219 [2024-12-06 21:48:40.564313] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:26:20.219 [2024-12-06 21:48:40.564497] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89349 ] 00:26:20.477 [2024-12-06 21:48:40.734383] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:20.477 [2024-12-06 21:48:40.883969] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:20.735  [2024-12-06T21:48:42.167Z] Copying: 512/512 [B] (average 500 kBps) 00:26:21.670 00:26:21.670 21:48:42 -- dd/posix.sh@93 -- # [[ uyajcupzkuncs9z12vsbt00ttb435yctsr0w72iks6uqpkdpw82w8arcpmjb7thi2jisc9dvhho6cij24fmz8ne01syb3koc2qcwulgqqh8oq78daj6pg6lppdvwnomtxqpuh6zrfv97dsn67rnau5lbhwom9vzgnlz7y948rynz2w81nuqwvvgepeac9241384yifu0h2qsvqhcqxzlzbr2nkdpbxrda9ecf4zby71tguqiyjb5mkvnkvyi2w7n66qgm4gllwf40g10kaegempuipahwl7jsw3k7r0j94we4r0i163ajtbt6eht9ouzjkiajycjukd3ud5nitjglj7un2dpfcv8dsm8zm9joftm9eycc2u4rgbfcusudhku73jjphjg0fgyz87tc2xfz2qfdb8th9ncreupqiz1kcej4zi96p43ee3jpkwcmd66h39b4fx4dwvda7fpjm5yawnggdkw8fxj0er9up7k3u3wvrs5pcsyjom2dqq3wc8v == \u\y\a\j\c\u\p\z\k\u\n\c\s\9\z\1\2\v\s\b\t\0\0\t\t\b\4\3\5\y\c\t\s\r\0\w\7\2\i\k\s\6\u\q\p\k\d\p\w\8\2\w\8\a\r\c\p\m\j\b\7\t\h\i\2\j\i\s\c\9\d\v\h\h\o\6\c\i\j\2\4\f\m\z\8\n\e\0\1\s\y\b\3\k\o\c\2\q\c\w\u\l\g\q\q\h\8\o\q\7\8\d\a\j\6\p\g\6\l\p\p\d\v\w\n\o\m\t\x\q\p\u\h\6\z\r\f\v\9\7\d\s\n\6\7\r\n\a\u\5\l\b\h\w\o\m\9\v\z\g\n\l\z\7\y\9\4\8\r\y\n\z\2\w\8\1\n\u\q\w\v\v\g\e\p\e\a\c\9\2\4\1\3\8\4\y\i\f\u\0\h\2\q\s\v\q\h\c\q\x\z\l\z\b\r\2\n\k\d\p\b\x\r\d\a\9\e\c\f\4\z\b\y\7\1\t\g\u\q\i\y\j\b\5\m\k\v\n\k\v\y\i\2\w\7\n\6\6\q\g\m\4\g\l\l\w\f\4\0\g\1\0\k\a\e\g\e\m\p\u\i\p\a\h\w\l\7\j\s\w\3\k\7\r\0\j\9\4\w\e\4\r\0\i\1\6\3\a\j\t\b\t\6\e\h\t\9\o\u\z\j\k\i\a\j\y\c\j\u\k\d\3\u\d\5\n\i\t\j\g\l\j\7\u\n\2\d\p\f\c\v\8\d\s\m\8\z\m\9\j\o\f\t\m\9\e\y\c\c\2\u\4\r\g\b\f\c\u\s\u\d\h\k\u\7\3\j\j\p\h\j\g\0\f\g\y\z\8\7\t\c\2\x\f\z\2\q\f\d\b\8\t\h\9\n\c\r\e\u\p\q\i\z\1\k\c\e\j\4\z\i\9\6\p\4\3\e\e\3\j\p\k\w\c\m\d\6\6\h\3\9\b\4\f\x\4\d\w\v\d\a\7\f\p\j\m\5\y\a\w\n\g\g\d\k\w\8\f\x\j\0\e\r\9\u\p\7\k\3\u\3\w\v\r\s\5\p\c\s\y\j\o\m\2\d\q\q\3\w\c\8\v ]] 00:26:21.670 21:48:42 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:26:21.670 21:48:42 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:26:21.670 [2024-12-06 21:48:42.075805] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:26:21.670 [2024-12-06 21:48:42.075955] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89364 ] 00:26:21.929 [2024-12-06 21:48:42.246190] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:21.929 [2024-12-06 21:48:42.410612] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:22.188  [2024-12-06T21:48:43.620Z] Copying: 512/512 [B] (average 100 kBps) 00:26:23.123 00:26:23.123 21:48:43 -- dd/posix.sh@93 -- # [[ uyajcupzkuncs9z12vsbt00ttb435yctsr0w72iks6uqpkdpw82w8arcpmjb7thi2jisc9dvhho6cij24fmz8ne01syb3koc2qcwulgqqh8oq78daj6pg6lppdvwnomtxqpuh6zrfv97dsn67rnau5lbhwom9vzgnlz7y948rynz2w81nuqwvvgepeac9241384yifu0h2qsvqhcqxzlzbr2nkdpbxrda9ecf4zby71tguqiyjb5mkvnkvyi2w7n66qgm4gllwf40g10kaegempuipahwl7jsw3k7r0j94we4r0i163ajtbt6eht9ouzjkiajycjukd3ud5nitjglj7un2dpfcv8dsm8zm9joftm9eycc2u4rgbfcusudhku73jjphjg0fgyz87tc2xfz2qfdb8th9ncreupqiz1kcej4zi96p43ee3jpkwcmd66h39b4fx4dwvda7fpjm5yawnggdkw8fxj0er9up7k3u3wvrs5pcsyjom2dqq3wc8v == \u\y\a\j\c\u\p\z\k\u\n\c\s\9\z\1\2\v\s\b\t\0\0\t\t\b\4\3\5\y\c\t\s\r\0\w\7\2\i\k\s\6\u\q\p\k\d\p\w\8\2\w\8\a\r\c\p\m\j\b\7\t\h\i\2\j\i\s\c\9\d\v\h\h\o\6\c\i\j\2\4\f\m\z\8\n\e\0\1\s\y\b\3\k\o\c\2\q\c\w\u\l\g\q\q\h\8\o\q\7\8\d\a\j\6\p\g\6\l\p\p\d\v\w\n\o\m\t\x\q\p\u\h\6\z\r\f\v\9\7\d\s\n\6\7\r\n\a\u\5\l\b\h\w\o\m\9\v\z\g\n\l\z\7\y\9\4\8\r\y\n\z\2\w\8\1\n\u\q\w\v\v\g\e\p\e\a\c\9\2\4\1\3\8\4\y\i\f\u\0\h\2\q\s\v\q\h\c\q\x\z\l\z\b\r\2\n\k\d\p\b\x\r\d\a\9\e\c\f\4\z\b\y\7\1\t\g\u\q\i\y\j\b\5\m\k\v\n\k\v\y\i\2\w\7\n\6\6\q\g\m\4\g\l\l\w\f\4\0\g\1\0\k\a\e\g\e\m\p\u\i\p\a\h\w\l\7\j\s\w\3\k\7\r\0\j\9\4\w\e\4\r\0\i\1\6\3\a\j\t\b\t\6\e\h\t\9\o\u\z\j\k\i\a\j\y\c\j\u\k\d\3\u\d\5\n\i\t\j\g\l\j\7\u\n\2\d\p\f\c\v\8\d\s\m\8\z\m\9\j\o\f\t\m\9\e\y\c\c\2\u\4\r\g\b\f\c\u\s\u\d\h\k\u\7\3\j\j\p\h\j\g\0\f\g\y\z\8\7\t\c\2\x\f\z\2\q\f\d\b\8\t\h\9\n\c\r\e\u\p\q\i\z\1\k\c\e\j\4\z\i\9\6\p\4\3\e\e\3\j\p\k\w\c\m\d\6\6\h\3\9\b\4\f\x\4\d\w\v\d\a\7\f\p\j\m\5\y\a\w\n\g\g\d\k\w\8\f\x\j\0\e\r\9\u\p\7\k\3\u\3\w\v\r\s\5\p\c\s\y\j\o\m\2\d\q\q\3\w\c\8\v ]] 00:26:23.123 21:48:43 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:26:23.123 21:48:43 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:26:23.123 [2024-12-06 21:48:43.612896] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:26:23.123 [2024-12-06 21:48:43.613058] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89384 ] 00:26:23.382 [2024-12-06 21:48:43.783561] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:23.641 [2024-12-06 21:48:43.933321] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:23.899  [2024-12-06T21:48:45.342Z] Copying: 512/512 [B] (average 125 kBps) 00:26:24.845 00:26:24.845 ************************************ 00:26:24.845 END TEST dd_flags_misc_forced_aio 00:26:24.845 ************************************ 00:26:24.845 21:48:45 -- dd/posix.sh@93 -- # [[ uyajcupzkuncs9z12vsbt00ttb435yctsr0w72iks6uqpkdpw82w8arcpmjb7thi2jisc9dvhho6cij24fmz8ne01syb3koc2qcwulgqqh8oq78daj6pg6lppdvwnomtxqpuh6zrfv97dsn67rnau5lbhwom9vzgnlz7y948rynz2w81nuqwvvgepeac9241384yifu0h2qsvqhcqxzlzbr2nkdpbxrda9ecf4zby71tguqiyjb5mkvnkvyi2w7n66qgm4gllwf40g10kaegempuipahwl7jsw3k7r0j94we4r0i163ajtbt6eht9ouzjkiajycjukd3ud5nitjglj7un2dpfcv8dsm8zm9joftm9eycc2u4rgbfcusudhku73jjphjg0fgyz87tc2xfz2qfdb8th9ncreupqiz1kcej4zi96p43ee3jpkwcmd66h39b4fx4dwvda7fpjm5yawnggdkw8fxj0er9up7k3u3wvrs5pcsyjom2dqq3wc8v == \u\y\a\j\c\u\p\z\k\u\n\c\s\9\z\1\2\v\s\b\t\0\0\t\t\b\4\3\5\y\c\t\s\r\0\w\7\2\i\k\s\6\u\q\p\k\d\p\w\8\2\w\8\a\r\c\p\m\j\b\7\t\h\i\2\j\i\s\c\9\d\v\h\h\o\6\c\i\j\2\4\f\m\z\8\n\e\0\1\s\y\b\3\k\o\c\2\q\c\w\u\l\g\q\q\h\8\o\q\7\8\d\a\j\6\p\g\6\l\p\p\d\v\w\n\o\m\t\x\q\p\u\h\6\z\r\f\v\9\7\d\s\n\6\7\r\n\a\u\5\l\b\h\w\o\m\9\v\z\g\n\l\z\7\y\9\4\8\r\y\n\z\2\w\8\1\n\u\q\w\v\v\g\e\p\e\a\c\9\2\4\1\3\8\4\y\i\f\u\0\h\2\q\s\v\q\h\c\q\x\z\l\z\b\r\2\n\k\d\p\b\x\r\d\a\9\e\c\f\4\z\b\y\7\1\t\g\u\q\i\y\j\b\5\m\k\v\n\k\v\y\i\2\w\7\n\6\6\q\g\m\4\g\l\l\w\f\4\0\g\1\0\k\a\e\g\e\m\p\u\i\p\a\h\w\l\7\j\s\w\3\k\7\r\0\j\9\4\w\e\4\r\0\i\1\6\3\a\j\t\b\t\6\e\h\t\9\o\u\z\j\k\i\a\j\y\c\j\u\k\d\3\u\d\5\n\i\t\j\g\l\j\7\u\n\2\d\p\f\c\v\8\d\s\m\8\z\m\9\j\o\f\t\m\9\e\y\c\c\2\u\4\r\g\b\f\c\u\s\u\d\h\k\u\7\3\j\j\p\h\j\g\0\f\g\y\z\8\7\t\c\2\x\f\z\2\q\f\d\b\8\t\h\9\n\c\r\e\u\p\q\i\z\1\k\c\e\j\4\z\i\9\6\p\4\3\e\e\3\j\p\k\w\c\m\d\6\6\h\3\9\b\4\f\x\4\d\w\v\d\a\7\f\p\j\m\5\y\a\w\n\g\g\d\k\w\8\f\x\j\0\e\r\9\u\p\7\k\3\u\3\w\v\r\s\5\p\c\s\y\j\o\m\2\d\q\q\3\w\c\8\v ]] 00:26:24.845 00:26:24.846 real 0m12.122s 00:26:24.846 user 0m9.695s 00:26:24.846 sys 0m1.493s 00:26:24.846 21:48:45 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:26:24.846 21:48:45 -- common/autotest_common.sh@10 -- # set +x 00:26:24.846 21:48:45 -- dd/posix.sh@1 -- # cleanup 00:26:24.846 21:48:45 -- dd/posix.sh@11 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:26:24.846 21:48:45 -- dd/posix.sh@12 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:26:24.846 ************************************ 00:26:24.846 END TEST spdk_dd_posix 00:26:24.846 ************************************ 00:26:24.846 00:26:24.846 real 0m51.187s 00:26:24.846 user 0m39.108s 00:26:24.846 sys 0m6.461s 00:26:24.846 21:48:45 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:26:24.846 21:48:45 -- common/autotest_common.sh@10 -- # set +x 00:26:24.846 21:48:45 -- dd/dd.sh@22 -- # run_test spdk_dd_malloc /home/vagrant/spdk_repo/spdk/test/dd/malloc.sh 00:26:24.846 21:48:45 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:26:24.846 21:48:45 -- common/autotest_common.sh@1093 -- # xtrace_disable 
00:26:24.846 21:48:45 -- common/autotest_common.sh@10 -- # set +x 00:26:24.846 ************************************ 00:26:24.846 START TEST spdk_dd_malloc 00:26:24.846 ************************************ 00:26:24.846 21:48:45 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/dd/malloc.sh 00:26:24.846 * Looking for test storage... 00:26:24.846 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:26:24.846 21:48:45 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:26:24.846 21:48:45 -- common/autotest_common.sh@1690 -- # lcov --version 00:26:24.846 21:48:45 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:26:24.846 21:48:45 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:26:24.846 21:48:45 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:26:24.846 21:48:45 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:26:24.846 21:48:45 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:26:24.846 21:48:45 -- scripts/common.sh@335 -- # IFS=.-: 00:26:24.846 21:48:45 -- scripts/common.sh@335 -- # read -ra ver1 00:26:24.846 21:48:45 -- scripts/common.sh@336 -- # IFS=.-: 00:26:24.846 21:48:45 -- scripts/common.sh@336 -- # read -ra ver2 00:26:24.846 21:48:45 -- scripts/common.sh@337 -- # local 'op=<' 00:26:24.846 21:48:45 -- scripts/common.sh@339 -- # ver1_l=2 00:26:24.846 21:48:45 -- scripts/common.sh@340 -- # ver2_l=1 00:26:24.846 21:48:45 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:26:24.846 21:48:45 -- scripts/common.sh@343 -- # case "$op" in 00:26:24.846 21:48:45 -- scripts/common.sh@344 -- # : 1 00:26:24.846 21:48:45 -- scripts/common.sh@363 -- # (( v = 0 )) 00:26:24.846 21:48:45 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:26:24.846 21:48:45 -- scripts/common.sh@364 -- # decimal 1 00:26:24.846 21:48:45 -- scripts/common.sh@352 -- # local d=1 00:26:24.846 21:48:45 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:24.846 21:48:45 -- scripts/common.sh@354 -- # echo 1 00:26:24.846 21:48:45 -- scripts/common.sh@364 -- # ver1[v]=1 00:26:24.846 21:48:45 -- scripts/common.sh@365 -- # decimal 2 00:26:24.846 21:48:45 -- scripts/common.sh@352 -- # local d=2 00:26:24.846 21:48:45 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:24.846 21:48:45 -- scripts/common.sh@354 -- # echo 2 00:26:24.846 21:48:45 -- scripts/common.sh@365 -- # ver2[v]=2 00:26:24.846 21:48:45 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:26:24.846 21:48:45 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:26:24.846 21:48:45 -- scripts/common.sh@367 -- # return 0 00:26:24.846 21:48:45 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:24.846 21:48:45 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:26:24.846 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:24.846 --rc genhtml_branch_coverage=1 00:26:24.846 --rc genhtml_function_coverage=1 00:26:24.846 --rc genhtml_legend=1 00:26:24.846 --rc geninfo_all_blocks=1 00:26:24.846 --rc geninfo_unexecuted_blocks=1 00:26:24.846 00:26:24.846 ' 00:26:24.846 21:48:45 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:26:24.846 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:24.846 --rc genhtml_branch_coverage=1 00:26:24.846 --rc genhtml_function_coverage=1 00:26:24.846 --rc genhtml_legend=1 00:26:24.846 --rc geninfo_all_blocks=1 00:26:24.846 --rc geninfo_unexecuted_blocks=1 00:26:24.846 00:26:24.846 ' 00:26:24.846 21:48:45 -- common/autotest_common.sh@1704 -- 
# export 'LCOV=lcov 00:26:24.846 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:24.846 --rc genhtml_branch_coverage=1 00:26:24.846 --rc genhtml_function_coverage=1 00:26:24.846 --rc genhtml_legend=1 00:26:24.846 --rc geninfo_all_blocks=1 00:26:24.846 --rc geninfo_unexecuted_blocks=1 00:26:24.846 00:26:24.846 ' 00:26:24.846 21:48:45 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:26:24.846 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:24.846 --rc genhtml_branch_coverage=1 00:26:24.846 --rc genhtml_function_coverage=1 00:26:24.846 --rc genhtml_legend=1 00:26:24.846 --rc geninfo_all_blocks=1 00:26:24.846 --rc geninfo_unexecuted_blocks=1 00:26:24.846 00:26:24.846 ' 00:26:24.846 21:48:45 -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:26:25.104 21:48:45 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:25.104 21:48:45 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:25.104 21:48:45 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:25.105 21:48:45 -- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:26:25.105 21:48:45 -- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:26:25.105 21:48:45 -- paths/export.sh@4 -- # PATH=/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:26:25.105 21:48:45 -- paths/export.sh@5 -- # 
PATH=/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:26:25.105 21:48:45 -- paths/export.sh@6 -- # export PATH 00:26:25.105 21:48:45 -- paths/export.sh@7 -- # echo /opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:26:25.105 21:48:45 -- dd/malloc.sh@38 -- # run_test dd_malloc_copy malloc_copy 00:26:25.105 21:48:45 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:26:25.105 21:48:45 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:26:25.105 21:48:45 -- common/autotest_common.sh@10 -- # set +x 00:26:25.105 ************************************ 00:26:25.105 START TEST dd_malloc_copy 00:26:25.105 ************************************ 00:26:25.105 21:48:45 -- common/autotest_common.sh@1114 -- # malloc_copy 00:26:25.105 21:48:45 -- dd/malloc.sh@12 -- # local mbdev0=malloc0 mbdev0_b=1048576 mbdev0_bs=512 00:26:25.105 21:48:45 -- dd/malloc.sh@13 -- # local mbdev1=malloc1 mbdev1_b=1048576 mbdev1_bs=512 00:26:25.105 21:48:45 -- dd/malloc.sh@15 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='1048576' ['block_size']='512') 00:26:25.105 21:48:45 -- dd/malloc.sh@15 -- # local -A method_bdev_malloc_create_0 00:26:25.105 21:48:45 -- dd/malloc.sh@21 -- # method_bdev_malloc_create_1=(['name']='malloc1' ['num_blocks']='1048576' ['block_size']='512') 00:26:25.105 21:48:45 -- dd/malloc.sh@21 -- # local -A method_bdev_malloc_create_1 00:26:25.105 21:48:45 -- dd/malloc.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --json /dev/fd/62 00:26:25.105 21:48:45 -- dd/malloc.sh@28 -- # gen_conf 00:26:25.105 21:48:45 -- dd/common.sh@31 -- # xtrace_disable 00:26:25.105 21:48:45 -- common/autotest_common.sh@10 -- # set +x 00:26:25.105 { 00:26:25.105 "subsystems": [ 00:26:25.105 { 00:26:25.105 "subsystem": "bdev", 00:26:25.105 "config": [ 00:26:25.105 { 00:26:25.105 "params": { 00:26:25.105 "block_size": 512, 00:26:25.105 "num_blocks": 1048576, 00:26:25.105 "name": "malloc0" 00:26:25.105 }, 00:26:25.105 "method": "bdev_malloc_create" 00:26:25.105 }, 00:26:25.105 { 00:26:25.105 "params": { 00:26:25.105 "block_size": 512, 00:26:25.105 "num_blocks": 1048576, 00:26:25.105 "name": "malloc1" 00:26:25.105 }, 00:26:25.105 "method": "bdev_malloc_create" 
00:26:25.105 }, 00:26:25.105 { 00:26:25.105 "method": "bdev_wait_for_examine" 00:26:25.105 } 00:26:25.105 ] 00:26:25.105 } 00:26:25.105 ] 00:26:25.105 } 00:26:25.105 [2024-12-06 21:48:45.411855] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:26:25.105 [2024-12-06 21:48:45.412016] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89472 ] 00:26:25.105 [2024-12-06 21:48:45.582017] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:25.363 [2024-12-06 21:48:45.776899] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:27.264  [2024-12-06T21:48:49.136Z] Copying: 212/512 [MB] (212 MBps) [2024-12-06T21:48:49.395Z] Copying: 425/512 [MB] (212 MBps) [2024-12-06T21:48:52.680Z] Copying: 512/512 [MB] (average 213 MBps) 00:26:32.183 00:26:32.183 21:48:51 -- dd/malloc.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc1 --ob=malloc0 --json /dev/fd/62 00:26:32.183 21:48:51 -- dd/malloc.sh@33 -- # gen_conf 00:26:32.183 21:48:51 -- dd/common.sh@31 -- # xtrace_disable 00:26:32.183 21:48:51 -- common/autotest_common.sh@10 -- # set +x 00:26:32.183 { 00:26:32.183 "subsystems": [ 00:26:32.183 { 00:26:32.183 "subsystem": "bdev", 00:26:32.183 "config": [ 00:26:32.183 { 00:26:32.183 "params": { 00:26:32.183 "block_size": 512, 00:26:32.183 "num_blocks": 1048576, 00:26:32.183 "name": "malloc0" 00:26:32.183 }, 00:26:32.183 "method": "bdev_malloc_create" 00:26:32.183 }, 00:26:32.183 { 00:26:32.183 "params": { 00:26:32.183 "block_size": 512, 00:26:32.183 "num_blocks": 1048576, 00:26:32.183 "name": "malloc1" 00:26:32.183 }, 00:26:32.183 "method": "bdev_malloc_create" 00:26:32.183 }, 00:26:32.183 { 00:26:32.183 "method": "bdev_wait_for_examine" 00:26:32.183 } 00:26:32.183 ] 00:26:32.183 } 00:26:32.183 ] 00:26:32.183 } 00:26:32.183 [2024-12-06 21:48:52.039743] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
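Both dd_malloc_copy rounds above run spdk_dd against no files at all: the JSON printed to /dev/fd/62 declares two RAM-backed bdevs of 1048576 blocks x 512 B = 512 MiB each (matching the 512 MB totals in the copy progress), and --ib/--ob select them as input and output. A sketch of an equivalent standalone invocation, assuming the same config written to a temp file instead of fd 62:

    cat > malloc.json <<'EOF'
    {"subsystems":[{"subsystem":"bdev","config":[
      {"params":{"block_size":512,"num_blocks":1048576,"name":"malloc0"},"method":"bdev_malloc_create"},
      {"params":{"block_size":512,"num_blocks":1048576,"name":"malloc1"},"method":"bdev_malloc_create"},
      {"method":"bdev_wait_for_examine"}]}]}
    EOF
    spdk_dd --ib=malloc0 --ob=malloc1 --json malloc.json   # then malloc1 -> malloc0 for the return trip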
00:26:32.183 [2024-12-06 21:48:52.039887] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89548 ] 00:26:32.183 [2024-12-06 21:48:52.206814] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:32.183 [2024-12-06 21:48:52.362042] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:34.086  [2024-12-06T21:48:55.518Z] Copying: 214/512 [MB] (214 MBps) [2024-12-06T21:48:55.776Z] Copying: 430/512 [MB] (216 MBps) [2024-12-06T21:48:59.058Z] Copying: 512/512 [MB] (average 214 MBps) 00:26:38.561 00:26:38.561 00:26:38.561 real 0m13.184s 00:26:38.561 user 0m12.014s 00:26:38.561 sys 0m0.965s 00:26:38.561 21:48:58 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:26:38.561 21:48:58 -- common/autotest_common.sh@10 -- # set +x 00:26:38.561 ************************************ 00:26:38.561 END TEST dd_malloc_copy 00:26:38.561 ************************************ 00:26:38.561 ************************************ 00:26:38.561 END TEST spdk_dd_malloc 00:26:38.561 ************************************ 00:26:38.561 00:26:38.561 real 0m13.416s 00:26:38.561 user 0m12.145s 00:26:38.561 sys 0m1.076s 00:26:38.561 21:48:58 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:26:38.561 21:48:58 -- common/autotest_common.sh@10 -- # set +x 00:26:38.561 21:48:58 -- dd/dd.sh@23 -- # run_test spdk_dd_bdev_to_bdev /home/vagrant/spdk_repo/spdk/test/dd/bdev_to_bdev.sh 0000:00:06.0 00:26:38.561 21:48:58 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:26:38.561 21:48:58 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:26:38.561 21:48:58 -- common/autotest_common.sh@10 -- # set +x 00:26:38.561 ************************************ 00:26:38.561 START TEST spdk_dd_bdev_to_bdev 00:26:38.561 ************************************ 00:26:38.561 21:48:58 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/dd/bdev_to_bdev.sh 0000:00:06.0 00:26:38.561 * Looking for test storage... 00:26:38.561 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:26:38.561 21:48:58 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:26:38.561 21:48:58 -- common/autotest_common.sh@1690 -- # lcov --version 00:26:38.561 21:48:58 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:26:38.561 21:48:58 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:26:38.561 21:48:58 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:26:38.561 21:48:58 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:26:38.561 21:48:58 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:26:38.561 21:48:58 -- scripts/common.sh@335 -- # IFS=.-: 00:26:38.561 21:48:58 -- scripts/common.sh@335 -- # read -ra ver1 00:26:38.561 21:48:58 -- scripts/common.sh@336 -- # IFS=.-: 00:26:38.561 21:48:58 -- scripts/common.sh@336 -- # read -ra ver2 00:26:38.561 21:48:58 -- scripts/common.sh@337 -- # local 'op=<' 00:26:38.561 21:48:58 -- scripts/common.sh@339 -- # ver1_l=2 00:26:38.561 21:48:58 -- scripts/common.sh@340 -- # ver2_l=1 00:26:38.561 21:48:58 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:26:38.561 21:48:58 -- scripts/common.sh@343 -- # case "$op" in 00:26:38.561 21:48:58 -- scripts/common.sh@344 -- # : 1 00:26:38.561 21:48:58 -- scripts/common.sh@363 -- # (( v = 0 )) 00:26:38.561 21:48:58 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:26:38.561 21:48:58 -- scripts/common.sh@364 -- # decimal 1 00:26:38.561 21:48:58 -- scripts/common.sh@352 -- # local d=1 00:26:38.561 21:48:58 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:38.561 21:48:58 -- scripts/common.sh@354 -- # echo 1 00:26:38.561 21:48:58 -- scripts/common.sh@364 -- # ver1[v]=1 00:26:38.561 21:48:58 -- scripts/common.sh@365 -- # decimal 2 00:26:38.561 21:48:58 -- scripts/common.sh@352 -- # local d=2 00:26:38.561 21:48:58 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:38.561 21:48:58 -- scripts/common.sh@354 -- # echo 2 00:26:38.561 21:48:58 -- scripts/common.sh@365 -- # ver2[v]=2 00:26:38.561 21:48:58 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:26:38.561 21:48:58 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:26:38.561 21:48:58 -- scripts/common.sh@367 -- # return 0 00:26:38.561 21:48:58 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:38.561 21:48:58 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:26:38.561 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:38.561 --rc genhtml_branch_coverage=1 00:26:38.561 --rc genhtml_function_coverage=1 00:26:38.561 --rc genhtml_legend=1 00:26:38.561 --rc geninfo_all_blocks=1 00:26:38.561 --rc geninfo_unexecuted_blocks=1 00:26:38.561 00:26:38.561 ' 00:26:38.561 21:48:58 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:26:38.561 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:38.561 --rc genhtml_branch_coverage=1 00:26:38.561 --rc genhtml_function_coverage=1 00:26:38.561 --rc genhtml_legend=1 00:26:38.561 --rc geninfo_all_blocks=1 00:26:38.561 --rc geninfo_unexecuted_blocks=1 00:26:38.561 00:26:38.561 ' 00:26:38.561 21:48:58 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:26:38.561 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:38.561 --rc genhtml_branch_coverage=1 00:26:38.561 --rc genhtml_function_coverage=1 00:26:38.561 --rc genhtml_legend=1 00:26:38.561 --rc geninfo_all_blocks=1 00:26:38.561 --rc geninfo_unexecuted_blocks=1 00:26:38.561 00:26:38.561 ' 00:26:38.561 21:48:58 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:26:38.561 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:38.561 --rc genhtml_branch_coverage=1 00:26:38.561 --rc genhtml_function_coverage=1 00:26:38.561 --rc genhtml_legend=1 00:26:38.561 --rc geninfo_all_blocks=1 00:26:38.561 --rc geninfo_unexecuted_blocks=1 00:26:38.561 00:26:38.561 ' 00:26:38.561 21:48:58 -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:26:38.561 21:48:58 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:38.561 21:48:58 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:38.561 21:48:58 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:38.561 21:48:58 -- paths/export.sh@2 -- # 
PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:26:38.561 21:48:58 -- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:26:38.561 21:48:58 -- paths/export.sh@4 -- # PATH=/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:26:38.561 21:48:58 -- paths/export.sh@5 -- # PATH=/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:26:38.561 21:48:58 -- paths/export.sh@6 -- # export PATH 00:26:38.561 21:48:58 -- paths/export.sh@7 -- # echo 
/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:26:38.561 21:48:58 -- dd/bdev_to_bdev.sh@10 -- # nvmes=("$@") 00:26:38.561 21:48:58 -- dd/bdev_to_bdev.sh@47 -- # trap cleanup EXIT 00:26:38.561 21:48:58 -- dd/bdev_to_bdev.sh@49 -- # bs=1048576 00:26:38.562 21:48:58 -- dd/bdev_to_bdev.sh@51 -- # (( 1 > 1 )) 00:26:38.562 21:48:58 -- dd/bdev_to_bdev.sh@67 -- # nvme0=Nvme0 00:26:38.562 21:48:58 -- dd/bdev_to_bdev.sh@67 -- # bdev0=Nvme0n1 00:26:38.562 21:48:58 -- dd/bdev_to_bdev.sh@67 -- # nvme0_pci=0000:00:06.0 00:26:38.562 21:48:58 -- dd/bdev_to_bdev.sh@68 -- # aio1=/home/vagrant/spdk_repo/spdk/test/dd/aio1 00:26:38.562 21:48:58 -- dd/bdev_to_bdev.sh@68 -- # bdev1=aio1 00:26:38.562 21:48:58 -- dd/bdev_to_bdev.sh@70 -- # method_bdev_nvme_attach_controller_1=(['name']='Nvme0' ['traddr']='0000:00:06.0' ['trtype']='pcie') 00:26:38.562 21:48:58 -- dd/bdev_to_bdev.sh@70 -- # declare -A method_bdev_nvme_attach_controller_1 00:26:38.562 21:48:58 -- dd/bdev_to_bdev.sh@75 -- # method_bdev_aio_create_0=(['name']='aio1' ['filename']='/home/vagrant/spdk_repo/spdk/test/dd/aio1' ['block_size']='4096') 00:26:38.562 21:48:58 -- dd/bdev_to_bdev.sh@75 -- # declare -A method_bdev_aio_create_0 00:26:38.562 21:48:58 -- dd/bdev_to_bdev.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/aio1 --bs=1048576 --count=256 00:26:38.562 [2024-12-06 21:48:58.872660] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:26:38.562 [2024-12-06 21:48:58.872809] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89687 ] 00:26:38.562 [2024-12-06 21:48:59.033131] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:38.822 [2024-12-06 21:48:59.189036] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:39.097  [2024-12-06T21:49:00.568Z] Copying: 256/256 [MB] (average 1910 MBps) 00:26:40.071 00:26:40.071 21:49:00 -- dd/bdev_to_bdev.sh@89 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:26:40.071 21:49:00 -- dd/bdev_to_bdev.sh@90 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:26:40.071 21:49:00 -- dd/bdev_to_bdev.sh@92 -- # magic='This Is Our Magic, find it' 00:26:40.071 21:49:00 -- dd/bdev_to_bdev.sh@93 -- # echo 'This Is Our Magic, find it' 00:26:40.071 21:49:00 -- dd/bdev_to_bdev.sh@96 -- # run_test dd_inflate_file /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=append --bs=1048576 --count=64 00:26:40.071 21:49:00 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:26:40.071 21:49:00 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:26:40.071 21:49:00 -- common/autotest_common.sh@10 -- # set +x 00:26:40.071 ************************************ 00:26:40.071 START TEST dd_inflate_file 00:26:40.071 ************************************ 00:26:40.071 21:49:00 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=append --bs=1048576 --count=64 00:26:40.071 [2024-12-06 21:49:00.551748] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:26:40.071 [2024-12-06 21:49:00.552079] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89713 ] 00:26:40.330 [2024-12-06 21:49:00.722164] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:40.588 [2024-12-06 21:49:00.869927] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:40.847  [2024-12-06T21:49:02.280Z] Copying: 64/64 [MB] (average 1684 MBps) 00:26:41.783 00:26:41.783 00:26:41.783 ************************************ 00:26:41.783 END TEST dd_inflate_file 00:26:41.783 ************************************ 00:26:41.783 real 0m1.552s 00:26:41.783 user 0m1.193s 00:26:41.783 sys 0m0.238s 00:26:41.783 21:49:02 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:26:41.783 21:49:02 -- common/autotest_common.sh@10 -- # set +x 00:26:41.783 21:49:02 -- dd/bdev_to_bdev.sh@104 -- # wc -c 00:26:41.783 21:49:02 -- dd/bdev_to_bdev.sh@104 -- # test_file0_size=67108891 00:26:41.783 21:49:02 -- dd/bdev_to_bdev.sh@107 -- # run_test dd_copy_to_out_bdev /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --json /dev/fd/62 00:26:41.783 21:49:02 -- dd/bdev_to_bdev.sh@107 -- # gen_conf 00:26:41.783 21:49:02 -- dd/common.sh@31 -- # xtrace_disable 00:26:41.783 21:49:02 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:26:41.783 21:49:02 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:26:41.783 21:49:02 -- common/autotest_common.sh@10 -- # set +x 00:26:41.783 21:49:02 -- common/autotest_common.sh@10 -- # set +x 00:26:41.783 ************************************ 00:26:41.783 START TEST dd_copy_to_out_bdev 00:26:41.783 ************************************ 00:26:41.783 21:49:02 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --json /dev/fd/62 00:26:41.783 { 00:26:41.783 "subsystems": [ 00:26:41.783 { 00:26:41.783 "subsystem": "bdev", 00:26:41.783 "config": [ 00:26:41.783 { 00:26:41.783 "params": { 00:26:41.783 "block_size": 4096, 00:26:41.783 "filename": "/home/vagrant/spdk_repo/spdk/test/dd/aio1", 00:26:41.783 "name": "aio1" 00:26:41.783 }, 00:26:41.783 "method": "bdev_aio_create" 00:26:41.783 }, 00:26:41.783 { 00:26:41.783 "params": { 00:26:41.783 "trtype": "pcie", 00:26:41.783 "traddr": "0000:00:06.0", 00:26:41.783 "name": "Nvme0" 00:26:41.783 }, 00:26:41.783 "method": "bdev_nvme_attach_controller" 00:26:41.783 }, 00:26:41.783 { 00:26:41.783 "method": "bdev_wait_for_examine" 00:26:41.783 } 00:26:41.783 ] 00:26:41.783 } 00:26:41.783 ] 00:26:41.783 } 00:26:41.783 [2024-12-06 21:49:02.166100] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
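The wc -c figure above is exact arithmetic rather than an arbitrary size: dd.dump0 starts as the magic line 'This Is Our Magic, find it' (26 characters plus a newline), and dd_inflate_file then appended 64 one-MiB blocks with --oflag=append --bs=1048576 --count=64, so

    test_file0_size = 64 * 1048576 + 27 = 67108864 + 27 = 67108891

which is the 67108891 recorded by the test before it streams the file out to Nvme0n1 below.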
00:26:41.783 [2024-12-06 21:49:02.166258] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89752 ] 00:26:42.042 [2024-12-06 21:49:02.335320] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:42.042 [2024-12-06 21:49:02.490442] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:43.417  [2024-12-06T21:49:04.490Z] Copying: 38/64 [MB] (38 MBps) [2024-12-06T21:49:05.424Z] Copying: 64/64 [MB] (average 39 MBps) 00:26:44.927 00:26:44.927 00:26:44.927 real 0m3.186s 00:26:44.927 user 0m2.818s 00:26:44.927 sys 0m0.262s 00:26:44.927 21:49:05 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:26:44.927 21:49:05 -- common/autotest_common.sh@10 -- # set +x 00:26:44.927 ************************************ 00:26:44.927 END TEST dd_copy_to_out_bdev 00:26:44.927 ************************************ 00:26:44.927 21:49:05 -- dd/bdev_to_bdev.sh@113 -- # count=65 00:26:44.927 21:49:05 -- dd/bdev_to_bdev.sh@115 -- # run_test dd_offset_magic offset_magic 00:26:44.927 21:49:05 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:26:44.927 21:49:05 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:26:44.927 21:49:05 -- common/autotest_common.sh@10 -- # set +x 00:26:44.927 ************************************ 00:26:44.927 START TEST dd_offset_magic 00:26:44.927 ************************************ 00:26:44.927 21:49:05 -- common/autotest_common.sh@1114 -- # offset_magic 00:26:44.927 21:49:05 -- dd/bdev_to_bdev.sh@13 -- # local magic_check 00:26:44.927 21:49:05 -- dd/bdev_to_bdev.sh@14 -- # local offsets offset 00:26:44.927 21:49:05 -- dd/bdev_to_bdev.sh@16 -- # offsets=(16 64) 00:26:44.927 21:49:05 -- dd/bdev_to_bdev.sh@18 -- # for offset in "${offsets[@]}" 00:26:44.927 21:49:05 -- dd/bdev_to_bdev.sh@20 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --ob=aio1 --count=65 --seek=16 --bs=1048576 --json /dev/fd/62 00:26:44.927 21:49:05 -- dd/bdev_to_bdev.sh@20 -- # gen_conf 00:26:44.927 21:49:05 -- dd/common.sh@31 -- # xtrace_disable 00:26:44.927 21:49:05 -- common/autotest_common.sh@10 -- # set +x 00:26:44.927 { 00:26:44.927 "subsystems": [ 00:26:44.927 { 00:26:44.927 "subsystem": "bdev", 00:26:44.927 "config": [ 00:26:44.927 { 00:26:44.927 "params": { 00:26:44.927 "block_size": 4096, 00:26:44.927 "filename": "/home/vagrant/spdk_repo/spdk/test/dd/aio1", 00:26:44.927 "name": "aio1" 00:26:44.927 }, 00:26:44.927 "method": "bdev_aio_create" 00:26:44.927 }, 00:26:44.927 { 00:26:44.927 "params": { 00:26:44.927 "trtype": "pcie", 00:26:44.927 "traddr": "0000:00:06.0", 00:26:44.927 "name": "Nvme0" 00:26:44.927 }, 00:26:44.927 "method": "bdev_nvme_attach_controller" 00:26:44.927 }, 00:26:44.927 { 00:26:44.927 "method": "bdev_wait_for_examine" 00:26:44.927 } 00:26:44.927 ] 00:26:44.927 } 00:26:44.927 ] 00:26:44.927 } 00:26:44.927 [2024-12-06 21:49:05.410121] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:26:44.927 [2024-12-06 21:49:05.410283] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89803 ] 00:26:45.186 [2024-12-06 21:49:05.579679] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:45.443 [2024-12-06 21:49:05.732414] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:46.378  [2024-12-06T21:49:07.811Z] Copying: 65/65 [MB] (average 130 MBps) 00:26:47.314 00:26:47.314 21:49:07 -- dd/bdev_to_bdev.sh@28 -- # gen_conf 00:26:47.314 21:49:07 -- dd/bdev_to_bdev.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=aio1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=1 --skip=16 --bs=1048576 --json /dev/fd/62 00:26:47.314 21:49:07 -- dd/common.sh@31 -- # xtrace_disable 00:26:47.314 21:49:07 -- common/autotest_common.sh@10 -- # set +x 00:26:47.314 { 00:26:47.314 "subsystems": [ 00:26:47.314 { 00:26:47.314 "subsystem": "bdev", 00:26:47.314 "config": [ 00:26:47.314 { 00:26:47.314 "params": { 00:26:47.314 "block_size": 4096, 00:26:47.314 "filename": "/home/vagrant/spdk_repo/spdk/test/dd/aio1", 00:26:47.314 "name": "aio1" 00:26:47.314 }, 00:26:47.314 "method": "bdev_aio_create" 00:26:47.314 }, 00:26:47.314 { 00:26:47.314 "params": { 00:26:47.314 "trtype": "pcie", 00:26:47.314 "traddr": "0000:00:06.0", 00:26:47.314 "name": "Nvme0" 00:26:47.314 }, 00:26:47.314 "method": "bdev_nvme_attach_controller" 00:26:47.314 }, 00:26:47.314 { 00:26:47.314 "method": "bdev_wait_for_examine" 00:26:47.314 } 00:26:47.314 ] 00:26:47.314 } 00:26:47.314 ] 00:26:47.314 } 00:26:47.314 [2024-12-06 21:49:07.650884] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:26:47.314 [2024-12-06 21:49:07.651034] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89841 ] 00:26:47.573 [2024-12-06 21:49:07.822760] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:47.573 [2024-12-06 21:49:08.025951] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:48.140  [2024-12-06T21:49:09.575Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:26:49.078 00:26:49.078 21:49:09 -- dd/bdev_to_bdev.sh@35 -- # read -rn26 magic_check 00:26:49.078 21:49:09 -- dd/bdev_to_bdev.sh@36 -- # [[ This Is Our Magic, find it == \T\h\i\s\ \I\s\ \O\u\r\ \M\a\g\i\c\,\ \f\i\n\d\ \i\t ]] 00:26:49.078 21:49:09 -- dd/bdev_to_bdev.sh@18 -- # for offset in "${offsets[@]}" 00:26:49.078 21:49:09 -- dd/bdev_to_bdev.sh@20 -- # gen_conf 00:26:49.078 21:49:09 -- dd/bdev_to_bdev.sh@20 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --ob=aio1 --count=65 --seek=64 --bs=1048576 --json /dev/fd/62 00:26:49.078 21:49:09 -- dd/common.sh@31 -- # xtrace_disable 00:26:49.078 21:49:09 -- common/autotest_common.sh@10 -- # set +x 00:26:49.078 { 00:26:49.078 "subsystems": [ 00:26:49.078 { 00:26:49.078 "subsystem": "bdev", 00:26:49.078 "config": [ 00:26:49.078 { 00:26:49.078 "params": { 00:26:49.078 "block_size": 4096, 00:26:49.078 "filename": "/home/vagrant/spdk_repo/spdk/test/dd/aio1", 00:26:49.078 "name": "aio1" 00:26:49.078 }, 00:26:49.078 "method": "bdev_aio_create" 00:26:49.078 }, 00:26:49.078 { 00:26:49.078 "params": { 00:26:49.078 "trtype": "pcie", 00:26:49.078 "traddr": "0000:00:06.0", 00:26:49.078 "name": "Nvme0" 00:26:49.078 }, 00:26:49.078 "method": "bdev_nvme_attach_controller" 00:26:49.078 }, 00:26:49.078 { 00:26:49.078 "method": "bdev_wait_for_examine" 00:26:49.078 } 00:26:49.078 ] 00:26:49.078 } 00:26:49.078 ] 00:26:49.078 } 00:26:49.078 [2024-12-06 21:49:09.284023] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
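Each offset_magic round follows the same write/read-back probe: the magic line sits at the start of Nvme0n1 (planted by dd_copy_to_out_bdev earlier), 65 one-MiB blocks of Nvme0n1 are copied into aio1 at the given block offset, then the single block at that offset is read back and its first 26 bytes must reproduce the magic. The --seek=16 round has just verified; the --seek=64 round is under way below. A sketch of one round, with bdevs.json and readback.bin as hypothetical stand-ins for the fd-62 config and dd.dump1:

    # write: Nvme0n1 -> aio1, landing at 1 MiB-block offset 16
    spdk_dd --ib=Nvme0n1 --ob=aio1 --bs=1048576 --count=65 --seek=16 --json bdevs.json
    # read the one block back from the same offset and check its first 26 bytes
    spdk_dd --ib=aio1 --of=readback.bin --bs=1048576 --count=1 --skip=16 --json bdevs.json
    read -rn26 magic_check < readback.bin
    [[ $magic_check == 'This Is Our Magic, find it' ]]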
00:26:49.078 [2024-12-06 21:49:09.284166] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89866 ] 00:26:49.078 [2024-12-06 21:49:09.434481] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:49.337 [2024-12-06 21:49:09.584844] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:49.596  [2024-12-06T21:49:11.030Z] Copying: 65/65 [MB] (average 1101 MBps) 00:26:50.533 00:26:50.533 21:49:10 -- dd/bdev_to_bdev.sh@28 -- # gen_conf 00:26:50.533 21:49:10 -- dd/bdev_to_bdev.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=aio1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=1 --skip=64 --bs=1048576 --json /dev/fd/62 00:26:50.533 21:49:10 -- dd/common.sh@31 -- # xtrace_disable 00:26:50.533 21:49:10 -- common/autotest_common.sh@10 -- # set +x 00:26:50.533 { 00:26:50.533 "subsystems": [ 00:26:50.533 { 00:26:50.533 "subsystem": "bdev", 00:26:50.533 "config": [ 00:26:50.533 { 00:26:50.533 "params": { 00:26:50.533 "block_size": 4096, 00:26:50.533 "filename": "/home/vagrant/spdk_repo/spdk/test/dd/aio1", 00:26:50.533 "name": "aio1" 00:26:50.533 }, 00:26:50.533 "method": "bdev_aio_create" 00:26:50.533 }, 00:26:50.533 { 00:26:50.533 "params": { 00:26:50.533 "trtype": "pcie", 00:26:50.533 "traddr": "0000:00:06.0", 00:26:50.533 "name": "Nvme0" 00:26:50.533 }, 00:26:50.533 "method": "bdev_nvme_attach_controller" 00:26:50.533 }, 00:26:50.533 { 00:26:50.533 "method": "bdev_wait_for_examine" 00:26:50.533 } 00:26:50.533 ] 00:26:50.533 } 00:26:50.533 ] 00:26:50.533 } 00:26:50.533 [2024-12-06 21:49:10.943777] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:26:50.534 [2024-12-06 21:49:10.943930] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89894 ] 00:26:50.793 [2024-12-06 21:49:11.113898] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:50.793 [2024-12-06 21:49:11.264494] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:51.361  [2024-12-06T21:49:12.797Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:26:52.300 00:26:52.300 21:49:12 -- dd/bdev_to_bdev.sh@35 -- # read -rn26 magic_check 00:26:52.300 21:49:12 -- dd/bdev_to_bdev.sh@36 -- # [[ This Is Our Magic, find it == \T\h\i\s\ \I\s\ \O\u\r\ \M\a\g\i\c\,\ \f\i\n\d\ \i\t ]] 00:26:52.300 00:26:52.300 real 0m7.099s 00:26:52.300 user 0m5.337s 00:26:52.300 sys 0m0.910s 00:26:52.300 21:49:12 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:26:52.300 ************************************ 00:26:52.300 END TEST dd_offset_magic 00:26:52.300 21:49:12 -- common/autotest_common.sh@10 -- # set +x 00:26:52.300 ************************************ 00:26:52.300 21:49:12 -- dd/bdev_to_bdev.sh@1 -- # cleanup 00:26:52.300 21:49:12 -- dd/bdev_to_bdev.sh@42 -- # clear_nvme Nvme0n1 '' 4194330 00:26:52.300 21:49:12 -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:26:52.300 21:49:12 -- dd/common.sh@11 -- # local nvme_ref= 00:26:52.300 21:49:12 -- dd/common.sh@12 -- # local size=4194330 00:26:52.300 21:49:12 -- dd/common.sh@14 -- # local bs=1048576 00:26:52.300 21:49:12 -- dd/common.sh@15 -- # local count=5 00:26:52.300 21:49:12 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=5 --json /dev/fd/62 00:26:52.300 21:49:12 -- dd/common.sh@18 -- # gen_conf 00:26:52.301 21:49:12 -- dd/common.sh@31 -- # xtrace_disable 00:26:52.301 21:49:12 -- common/autotest_common.sh@10 -- # set +x 00:26:52.301 { 00:26:52.301 "subsystems": [ 00:26:52.301 { 00:26:52.301 "subsystem": "bdev", 00:26:52.301 "config": [ 00:26:52.301 { 00:26:52.301 "params": { 00:26:52.301 "block_size": 4096, 00:26:52.301 "filename": "/home/vagrant/spdk_repo/spdk/test/dd/aio1", 00:26:52.301 "name": "aio1" 00:26:52.301 }, 00:26:52.301 "method": "bdev_aio_create" 00:26:52.301 }, 00:26:52.301 { 00:26:52.301 "params": { 00:26:52.301 "trtype": "pcie", 00:26:52.301 "traddr": "0000:00:06.0", 00:26:52.301 "name": "Nvme0" 00:26:52.301 }, 00:26:52.301 "method": "bdev_nvme_attach_controller" 00:26:52.301 }, 00:26:52.301 { 00:26:52.301 "method": "bdev_wait_for_examine" 00:26:52.301 } 00:26:52.301 ] 00:26:52.301 } 00:26:52.301 ] 00:26:52.301 } 00:26:52.301 [2024-12-06 21:49:12.552595] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:26:52.301 [2024-12-06 21:49:12.552756] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89930 ] 00:26:52.301 [2024-12-06 21:49:12.719062] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:52.559 [2024-12-06 21:49:12.869916] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:52.818  [2024-12-06T21:49:14.253Z] Copying: 5120/5120 [kB] (average 1666 MBps) 00:26:53.756 00:26:53.756 21:49:14 -- dd/bdev_to_bdev.sh@43 -- # clear_nvme aio1 '' 4194330 00:26:53.756 21:49:14 -- dd/common.sh@10 -- # local bdev=aio1 00:26:53.756 21:49:14 -- dd/common.sh@11 -- # local nvme_ref= 00:26:53.756 21:49:14 -- dd/common.sh@12 -- # local size=4194330 00:26:53.756 21:49:14 -- dd/common.sh@14 -- # local bs=1048576 00:26:53.756 21:49:14 -- dd/common.sh@15 -- # local count=5 00:26:53.756 21:49:14 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=aio1 --count=5 --json /dev/fd/62 00:26:53.756 21:49:14 -- dd/common.sh@18 -- # gen_conf 00:26:53.756 21:49:14 -- dd/common.sh@31 -- # xtrace_disable 00:26:53.756 21:49:14 -- common/autotest_common.sh@10 -- # set +x 00:26:53.756 { 00:26:53.756 "subsystems": [ 00:26:53.756 { 00:26:53.756 "subsystem": "bdev", 00:26:53.756 "config": [ 00:26:53.756 { 00:26:53.756 "params": { 00:26:53.756 "block_size": 4096, 00:26:53.756 "filename": "/home/vagrant/spdk_repo/spdk/test/dd/aio1", 00:26:53.756 "name": "aio1" 00:26:53.756 }, 00:26:53.756 "method": "bdev_aio_create" 00:26:53.756 }, 00:26:53.756 { 00:26:53.756 "params": { 00:26:53.756 "trtype": "pcie", 00:26:53.756 "traddr": "0000:00:06.0", 00:26:53.756 "name": "Nvme0" 00:26:53.756 }, 00:26:53.756 "method": "bdev_nvme_attach_controller" 00:26:53.756 }, 00:26:53.756 { 00:26:53.756 "method": "bdev_wait_for_examine" 00:26:53.756 } 00:26:53.756 ] 00:26:53.756 } 00:26:53.756 ] 00:26:53.756 } 00:26:53.756 [2024-12-06 21:49:14.143585] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:26:53.756 [2024-12-06 21:49:14.143721] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89957 ] 00:26:54.016 [2024-12-06 21:49:14.298600] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:54.016 [2024-12-06 21:49:14.448128] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:54.274  [2024-12-06T21:49:15.707Z] Copying: 5120/5120 [kB] (average 1250 MBps) 00:26:55.210 00:26:55.210 21:49:15 -- dd/bdev_to_bdev.sh@44 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/aio1 00:26:55.210 00:26:55.210 real 0m17.033s 00:26:55.210 user 0m13.250s 00:26:55.210 sys 0m2.363s 00:26:55.210 21:49:15 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:26:55.210 21:49:15 -- common/autotest_common.sh@10 -- # set +x 00:26:55.210 ************************************ 00:26:55.210 END TEST spdk_dd_bdev_to_bdev 00:26:55.210 ************************************ 00:26:55.210 21:49:15 -- dd/dd.sh@24 -- # (( SPDK_TEST_URING == 1 )) 00:26:55.210 21:49:15 -- dd/dd.sh@27 -- # run_test spdk_dd_sparse /home/vagrant/spdk_repo/spdk/test/dd/sparse.sh 00:26:55.210 21:49:15 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:26:55.210 21:49:15 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:26:55.210 21:49:15 -- common/autotest_common.sh@10 -- # set +x 00:26:55.469 ************************************ 00:26:55.469 START TEST spdk_dd_sparse 00:26:55.469 ************************************ 00:26:55.469 21:49:15 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/dd/sparse.sh 00:26:55.469 * Looking for test storage... 00:26:55.469 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:26:55.469 21:49:15 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:26:55.469 21:49:15 -- common/autotest_common.sh@1690 -- # lcov --version 00:26:55.469 21:49:15 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:26:55.469 21:49:15 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:26:55.469 21:49:15 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:26:55.469 21:49:15 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:26:55.469 21:49:15 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:26:55.469 21:49:15 -- scripts/common.sh@335 -- # IFS=.-: 00:26:55.469 21:49:15 -- scripts/common.sh@335 -- # read -ra ver1 00:26:55.469 21:49:15 -- scripts/common.sh@336 -- # IFS=.-: 00:26:55.469 21:49:15 -- scripts/common.sh@336 -- # read -ra ver2 00:26:55.469 21:49:15 -- scripts/common.sh@337 -- # local 'op=<' 00:26:55.469 21:49:15 -- scripts/common.sh@339 -- # ver1_l=2 00:26:55.469 21:49:15 -- scripts/common.sh@340 -- # ver2_l=1 00:26:55.469 21:49:15 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:26:55.469 21:49:15 -- scripts/common.sh@343 -- # case "$op" in 00:26:55.469 21:49:15 -- scripts/common.sh@344 -- # : 1 00:26:55.469 21:49:15 -- scripts/common.sh@363 -- # (( v = 0 )) 00:26:55.469 21:49:15 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:26:55.469 21:49:15 -- scripts/common.sh@364 -- # decimal 1 00:26:55.469 21:49:15 -- scripts/common.sh@352 -- # local d=1 00:26:55.469 21:49:15 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:55.469 21:49:15 -- scripts/common.sh@354 -- # echo 1 00:26:55.469 21:49:15 -- scripts/common.sh@364 -- # ver1[v]=1 00:26:55.469 21:49:15 -- scripts/common.sh@365 -- # decimal 2 00:26:55.469 21:49:15 -- scripts/common.sh@352 -- # local d=2 00:26:55.469 21:49:15 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:55.469 21:49:15 -- scripts/common.sh@354 -- # echo 2 00:26:55.469 21:49:15 -- scripts/common.sh@365 -- # ver2[v]=2 00:26:55.469 21:49:15 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:26:55.469 21:49:15 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:26:55.470 21:49:15 -- scripts/common.sh@367 -- # return 0 00:26:55.470 21:49:15 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:55.470 21:49:15 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:26:55.470 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:55.470 --rc genhtml_branch_coverage=1 00:26:55.470 --rc genhtml_function_coverage=1 00:26:55.470 --rc genhtml_legend=1 00:26:55.470 --rc geninfo_all_blocks=1 00:26:55.470 --rc geninfo_unexecuted_blocks=1 00:26:55.470 00:26:55.470 ' 00:26:55.470 21:49:15 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:26:55.470 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:55.470 --rc genhtml_branch_coverage=1 00:26:55.470 --rc genhtml_function_coverage=1 00:26:55.470 --rc genhtml_legend=1 00:26:55.470 --rc geninfo_all_blocks=1 00:26:55.470 --rc geninfo_unexecuted_blocks=1 00:26:55.470 00:26:55.470 ' 00:26:55.470 21:49:15 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:26:55.470 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:55.470 --rc genhtml_branch_coverage=1 00:26:55.470 --rc genhtml_function_coverage=1 00:26:55.470 --rc genhtml_legend=1 00:26:55.470 --rc geninfo_all_blocks=1 00:26:55.470 --rc geninfo_unexecuted_blocks=1 00:26:55.470 00:26:55.470 ' 00:26:55.470 21:49:15 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:26:55.470 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:55.470 --rc genhtml_branch_coverage=1 00:26:55.470 --rc genhtml_function_coverage=1 00:26:55.470 --rc genhtml_legend=1 00:26:55.470 --rc geninfo_all_blocks=1 00:26:55.470 --rc geninfo_unexecuted_blocks=1 00:26:55.470 00:26:55.470 ' 00:26:55.470 21:49:15 -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:26:55.470 21:49:15 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:55.470 21:49:15 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:55.470 21:49:15 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:55.470 21:49:15 -- paths/export.sh@2 -- # 
PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:26:55.470 21:49:15 -- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:26:55.470 21:49:15 -- paths/export.sh@4 -- # PATH=/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:26:55.470 21:49:15 -- paths/export.sh@5 -- # PATH=/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:26:55.470 21:49:15 -- paths/export.sh@6 -- # export PATH 00:26:55.470 21:49:15 -- paths/export.sh@7 -- # echo 
/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:26:55.470 21:49:15 -- dd/sparse.sh@108 -- # aio_disk=dd_sparse_aio_disk 00:26:55.470 21:49:15 -- dd/sparse.sh@109 -- # aio_bdev=dd_aio 00:26:55.470 21:49:15 -- dd/sparse.sh@110 -- # file1=file_zero1 00:26:55.470 21:49:15 -- dd/sparse.sh@111 -- # file2=file_zero2 00:26:55.470 21:49:15 -- dd/sparse.sh@112 -- # file3=file_zero3 00:26:55.470 21:49:15 -- dd/sparse.sh@113 -- # lvstore=dd_lvstore 00:26:55.470 21:49:15 -- dd/sparse.sh@114 -- # lvol=dd_lvol 00:26:55.470 21:49:15 -- dd/sparse.sh@116 -- # trap cleanup EXIT 00:26:55.470 21:49:15 -- dd/sparse.sh@118 -- # prepare 00:26:55.470 21:49:15 -- dd/sparse.sh@18 -- # truncate dd_sparse_aio_disk --size 104857600 00:26:55.470 21:49:15 -- dd/sparse.sh@20 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 00:26:55.470 1+0 records in 00:26:55.470 1+0 records out 00:26:55.470 4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.00567751 s, 739 MB/s 00:26:55.470 21:49:15 -- dd/sparse.sh@21 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=4 00:26:55.470 1+0 records in 00:26:55.470 1+0 records out 00:26:55.470 4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.00517151 s, 811 MB/s 00:26:55.470 21:49:15 -- dd/sparse.sh@22 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=8 00:26:55.470 1+0 records in 00:26:55.470 1+0 records out 00:26:55.470 4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.00485829 s, 863 MB/s 00:26:55.470 21:49:15 -- dd/sparse.sh@120 -- # run_test dd_sparse_file_to_file file_to_file 00:26:55.470 21:49:15 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:26:55.470 21:49:15 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:26:55.470 21:49:15 -- common/autotest_common.sh@10 -- # set +x 00:26:55.470 ************************************ 00:26:55.470 START TEST dd_sparse_file_to_file 00:26:55.470 ************************************ 00:26:55.470 21:49:15 -- common/autotest_common.sh@1114 -- # file_to_file 00:26:55.470 21:49:15 -- dd/sparse.sh@26 -- # local stat1_s stat1_b 00:26:55.470 21:49:15 -- dd/sparse.sh@27 -- # local stat2_s stat2_b 00:26:55.470 21:49:15 -- dd/sparse.sh@29 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 00:26:55.470 21:49:15 -- dd/sparse.sh@29 -- # local -A method_bdev_aio_create_0 00:26:55.470 21:49:15 -- dd/sparse.sh@35 -- # method_bdev_lvol_create_lvstore_1=(['bdev_name']='dd_aio' ['lvs_name']='dd_lvstore') 00:26:55.470 21:49:15 -- dd/sparse.sh@35 -- # local -A method_bdev_lvol_create_lvstore_1 00:26:55.470 21:49:15 -- dd/sparse.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=file_zero1 --of=file_zero2 --bs=12582912 --sparse --json /dev/fd/62 00:26:55.470 21:49:15 -- dd/sparse.sh@41 -- # gen_conf 00:26:55.470 21:49:15 -- dd/common.sh@31 -- # xtrace_disable 00:26:55.470 21:49:15 -- common/autotest_common.sh@10 -- # set +x 00:26:55.470 { 00:26:55.470 
"subsystems": [ 00:26:55.470 { 00:26:55.470 "subsystem": "bdev", 00:26:55.470 "config": [ 00:26:55.470 { 00:26:55.470 "params": { 00:26:55.470 "block_size": 4096, 00:26:55.470 "filename": "dd_sparse_aio_disk", 00:26:55.470 "name": "dd_aio" 00:26:55.470 }, 00:26:55.470 "method": "bdev_aio_create" 00:26:55.470 }, 00:26:55.470 { 00:26:55.470 "params": { 00:26:55.470 "lvs_name": "dd_lvstore", 00:26:55.470 "bdev_name": "dd_aio" 00:26:55.470 }, 00:26:55.470 "method": "bdev_lvol_create_lvstore" 00:26:55.470 }, 00:26:55.470 { 00:26:55.470 "method": "bdev_wait_for_examine" 00:26:55.470 } 00:26:55.470 ] 00:26:55.470 } 00:26:55.470 ] 00:26:55.470 } 00:26:55.729 [2024-12-06 21:49:15.996230] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:26:55.730 [2024-12-06 21:49:15.996374] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90044 ] 00:26:55.730 [2024-12-06 21:49:16.165651] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:55.989 [2024-12-06 21:49:16.315986] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:56.248  [2024-12-06T21:49:17.683Z] Copying: 12/36 [MB] (average 1500 MBps) 00:26:57.186 00:26:57.186 21:49:17 -- dd/sparse.sh@47 -- # stat --printf=%s file_zero1 00:26:57.186 21:49:17 -- dd/sparse.sh@47 -- # stat1_s=37748736 00:26:57.186 21:49:17 -- dd/sparse.sh@48 -- # stat --printf=%s file_zero2 00:26:57.186 21:49:17 -- dd/sparse.sh@48 -- # stat2_s=37748736 00:26:57.186 21:49:17 -- dd/sparse.sh@50 -- # [[ 37748736 == \3\7\7\4\8\7\3\6 ]] 00:26:57.186 21:49:17 -- dd/sparse.sh@52 -- # stat --printf=%b file_zero1 00:26:57.186 21:49:17 -- dd/sparse.sh@52 -- # stat1_b=24576 00:26:57.186 21:49:17 -- dd/sparse.sh@53 -- # stat --printf=%b file_zero2 00:26:57.186 21:49:17 -- dd/sparse.sh@53 -- # stat2_b=24576 00:26:57.186 21:49:17 -- dd/sparse.sh@55 -- # [[ 24576 == \2\4\5\7\6 ]] 00:26:57.186 00:26:57.186 real 0m1.656s 00:26:57.186 user 0m1.303s 00:26:57.186 sys 0m0.239s 00:26:57.186 21:49:17 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:26:57.186 21:49:17 -- common/autotest_common.sh@10 -- # set +x 00:26:57.186 ************************************ 00:26:57.186 END TEST dd_sparse_file_to_file 00:26:57.186 ************************************ 00:26:57.186 21:49:17 -- dd/sparse.sh@121 -- # run_test dd_sparse_file_to_bdev file_to_bdev 00:26:57.186 21:49:17 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:26:57.186 21:49:17 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:26:57.186 21:49:17 -- common/autotest_common.sh@10 -- # set +x 00:26:57.186 ************************************ 00:26:57.186 START TEST dd_sparse_file_to_bdev 00:26:57.186 ************************************ 00:26:57.186 21:49:17 -- common/autotest_common.sh@1114 -- # file_to_bdev 00:26:57.186 21:49:17 -- dd/sparse.sh@59 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 00:26:57.186 21:49:17 -- dd/sparse.sh@59 -- # local -A method_bdev_aio_create_0 00:26:57.186 21:49:17 -- dd/sparse.sh@65 -- # method_bdev_lvol_create_1=(['lvs_name']='dd_lvstore' ['lvol_name']='dd_lvol' ['size']='37748736' ['thin_provision']='true') 00:26:57.186 21:49:17 -- dd/sparse.sh@65 -- # local -A method_bdev_lvol_create_1 00:26:57.186 21:49:17 -- dd/sparse.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=file_zero2 
--ob=dd_lvstore/dd_lvol --bs=12582912 --sparse --json /dev/fd/62 00:26:57.186 21:49:17 -- dd/sparse.sh@73 -- # gen_conf 00:26:57.186 21:49:17 -- dd/common.sh@31 -- # xtrace_disable 00:26:57.186 21:49:17 -- common/autotest_common.sh@10 -- # set +x 00:26:57.186 { 00:26:57.186 "subsystems": [ 00:26:57.186 { 00:26:57.186 "subsystem": "bdev", 00:26:57.186 "config": [ 00:26:57.186 { 00:26:57.186 "params": { 00:26:57.186 "block_size": 4096, 00:26:57.186 "filename": "dd_sparse_aio_disk", 00:26:57.186 "name": "dd_aio" 00:26:57.186 }, 00:26:57.186 "method": "bdev_aio_create" 00:26:57.186 }, 00:26:57.186 { 00:26:57.186 "params": { 00:26:57.186 "lvs_name": "dd_lvstore", 00:26:57.186 "lvol_name": "dd_lvol", 00:26:57.186 "size": 37748736, 00:26:57.186 "thin_provision": true 00:26:57.186 }, 00:26:57.186 "method": "bdev_lvol_create" 00:26:57.186 }, 00:26:57.186 { 00:26:57.186 "method": "bdev_wait_for_examine" 00:26:57.186 } 00:26:57.186 ] 00:26:57.186 } 00:26:57.186 ] 00:26:57.186 } 00:26:57.445 [2024-12-06 21:49:17.704401] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:26:57.445 [2024-12-06 21:49:17.704569] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90090 ] 00:26:57.445 [2024-12-06 21:49:17.874716] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:57.703 [2024-12-06 21:49:18.033741] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:57.962 [2024-12-06 21:49:18.258505] vbdev_lvol_rpc.c: 347:rpc_bdev_lvol_create: *WARNING*: vbdev_lvol_rpc_req_size: deprecated feature rpc_bdev_lvol_create/resize req.size to be removed in v23.09 00:26:57.962  [2024-12-06T21:49:18.459Z] Copying: 12/36 [MB] (average 545 MBps)[2024-12-06 21:49:18.310619] app.c: 883:log_deprecation_hits: *WARNING*: vbdev_lvol_rpc_req_size: deprecation 'rpc_bdev_lvol_create/resize req.size' scheduled for removal in v23.09 hit 1 times 00:26:58.900 00:26:58.901 00:26:58.901 00:26:58.901 real 0m1.651s 00:26:58.901 user 0m1.327s 00:26:58.901 sys 0m0.212s 00:26:58.901 ************************************ 00:26:58.901 END TEST dd_sparse_file_to_bdev 00:26:58.901 ************************************ 00:26:58.901 21:49:19 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:26:58.901 21:49:19 -- common/autotest_common.sh@10 -- # set +x 00:26:58.901 21:49:19 -- dd/sparse.sh@122 -- # run_test dd_sparse_bdev_to_file bdev_to_file 00:26:58.901 21:49:19 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:26:58.901 21:49:19 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:26:58.901 21:49:19 -- common/autotest_common.sh@10 -- # set +x 00:26:58.901 ************************************ 00:26:58.901 START TEST dd_sparse_bdev_to_file 00:26:58.901 ************************************ 00:26:58.901 21:49:19 -- common/autotest_common.sh@1114 -- # bdev_to_file 00:26:58.901 21:49:19 -- dd/sparse.sh@81 -- # local stat2_s stat2_b 00:26:58.901 21:49:19 -- dd/sparse.sh@82 -- # local stat3_s stat3_b 00:26:58.901 21:49:19 -- dd/sparse.sh@84 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 00:26:58.901 21:49:19 -- dd/sparse.sh@84 -- # local -A method_bdev_aio_create_0 00:26:58.901 21:49:19 -- dd/sparse.sh@91 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=dd_lvstore/dd_lvol --of=file_zero3 --bs=12582912 --sparse --json /dev/fd/62 
00:26:58.901 21:49:19 -- dd/sparse.sh@91 -- # gen_conf 00:26:58.901 21:49:19 -- dd/common.sh@31 -- # xtrace_disable 00:26:58.901 21:49:19 -- common/autotest_common.sh@10 -- # set +x 00:26:58.901 { 00:26:58.901 "subsystems": [ 00:26:58.901 { 00:26:58.901 "subsystem": "bdev", 00:26:58.901 "config": [ 00:26:58.901 { 00:26:58.901 "params": { 00:26:58.901 "block_size": 4096, 00:26:58.901 "filename": "dd_sparse_aio_disk", 00:26:58.901 "name": "dd_aio" 00:26:58.901 }, 00:26:58.901 "method": "bdev_aio_create" 00:26:58.901 }, 00:26:58.901 { 00:26:58.901 "method": "bdev_wait_for_examine" 00:26:58.901 } 00:26:58.901 ] 00:26:58.901 } 00:26:58.901 ] 00:26:58.901 } 00:26:59.160 [2024-12-06 21:49:19.409410] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:26:59.160 [2024-12-06 21:49:19.409575] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90138 ] 00:26:59.160 [2024-12-06 21:49:19.578830] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:59.421 [2024-12-06 21:49:19.728048] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:59.679  [2024-12-06T21:49:21.112Z] Copying: 12/36 [MB] (average 1333 MBps) 00:27:00.615 00:27:00.615 21:49:20 -- dd/sparse.sh@97 -- # stat --printf=%s file_zero2 00:27:00.615 21:49:20 -- dd/sparse.sh@97 -- # stat2_s=37748736 00:27:00.615 21:49:20 -- dd/sparse.sh@98 -- # stat --printf=%s file_zero3 00:27:00.615 21:49:20 -- dd/sparse.sh@98 -- # stat3_s=37748736 00:27:00.615 21:49:20 -- dd/sparse.sh@100 -- # [[ 37748736 == \3\7\7\4\8\7\3\6 ]] 00:27:00.615 21:49:20 -- dd/sparse.sh@102 -- # stat --printf=%b file_zero2 00:27:00.615 21:49:20 -- dd/sparse.sh@102 -- # stat2_b=24576 00:27:00.615 21:49:20 -- dd/sparse.sh@103 -- # stat --printf=%b file_zero3 00:27:00.615 21:49:20 -- dd/sparse.sh@103 -- # stat3_b=24576 00:27:00.615 21:49:20 -- dd/sparse.sh@105 -- # [[ 24576 == \2\4\5\7\6 ]] 00:27:00.615 00:27:00.615 real 0m1.646s 00:27:00.615 user 0m1.323s 00:27:00.615 sys 0m0.217s 00:27:00.615 21:49:20 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:27:00.615 21:49:20 -- common/autotest_common.sh@10 -- # set +x 00:27:00.615 ************************************ 00:27:00.615 END TEST dd_sparse_bdev_to_file 00:27:00.615 ************************************ 00:27:00.615 21:49:21 -- dd/sparse.sh@1 -- # cleanup 00:27:00.615 21:49:21 -- dd/sparse.sh@11 -- # rm dd_sparse_aio_disk 00:27:00.615 21:49:21 -- dd/sparse.sh@12 -- # rm file_zero1 00:27:00.615 21:49:21 -- dd/sparse.sh@13 -- # rm file_zero2 00:27:00.615 21:49:21 -- dd/sparse.sh@14 -- # rm file_zero3 00:27:00.615 00:27:00.615 real 0m5.340s 00:27:00.615 user 0m4.114s 00:27:00.615 sys 0m0.891s 00:27:00.615 21:49:21 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:27:00.615 21:49:21 -- common/autotest_common.sh@10 -- # set +x 00:27:00.615 ************************************ 00:27:00.615 END TEST spdk_dd_sparse 00:27:00.615 ************************************ 00:27:00.615 21:49:21 -- dd/dd.sh@28 -- # run_test spdk_dd_negative /home/vagrant/spdk_repo/spdk/test/dd/negative_dd.sh 00:27:00.615 21:49:21 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:27:00.615 21:49:21 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:27:00.615 21:49:21 -- common/autotest_common.sh@10 -- # set +x 00:27:00.615 ************************************ 00:27:00.615 START TEST 
spdk_dd_negative 00:27:00.615 ************************************ 00:27:00.615 21:49:21 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/dd/negative_dd.sh 00:27:00.874 * Looking for test storage... 00:27:00.874 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:27:00.874 21:49:21 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:27:00.874 21:49:21 -- common/autotest_common.sh@1690 -- # lcov --version 00:27:00.874 21:49:21 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:27:00.874 21:49:21 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:27:00.874 21:49:21 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:27:00.874 21:49:21 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:27:00.874 21:49:21 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:27:00.874 21:49:21 -- scripts/common.sh@335 -- # IFS=.-: 00:27:00.874 21:49:21 -- scripts/common.sh@335 -- # read -ra ver1 00:27:00.874 21:49:21 -- scripts/common.sh@336 -- # IFS=.-: 00:27:00.874 21:49:21 -- scripts/common.sh@336 -- # read -ra ver2 00:27:00.874 21:49:21 -- scripts/common.sh@337 -- # local 'op=<' 00:27:00.874 21:49:21 -- scripts/common.sh@339 -- # ver1_l=2 00:27:00.874 21:49:21 -- scripts/common.sh@340 -- # ver2_l=1 00:27:00.874 21:49:21 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:27:00.874 21:49:21 -- scripts/common.sh@343 -- # case "$op" in 00:27:00.874 21:49:21 -- scripts/common.sh@344 -- # : 1 00:27:00.874 21:49:21 -- scripts/common.sh@363 -- # (( v = 0 )) 00:27:00.874 21:49:21 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:27:00.874 21:49:21 -- scripts/common.sh@364 -- # decimal 1 00:27:00.874 21:49:21 -- scripts/common.sh@352 -- # local d=1 00:27:00.874 21:49:21 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:00.874 21:49:21 -- scripts/common.sh@354 -- # echo 1 00:27:00.874 21:49:21 -- scripts/common.sh@364 -- # ver1[v]=1 00:27:00.874 21:49:21 -- scripts/common.sh@365 -- # decimal 2 00:27:00.874 21:49:21 -- scripts/common.sh@352 -- # local d=2 00:27:00.874 21:49:21 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:00.874 21:49:21 -- scripts/common.sh@354 -- # echo 2 00:27:00.874 21:49:21 -- scripts/common.sh@365 -- # ver2[v]=2 00:27:00.874 21:49:21 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:27:00.874 21:49:21 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:27:00.874 21:49:21 -- scripts/common.sh@367 -- # return 0 00:27:00.874 21:49:21 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:00.874 21:49:21 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:27:00.874 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:00.874 --rc genhtml_branch_coverage=1 00:27:00.874 --rc genhtml_function_coverage=1 00:27:00.874 --rc genhtml_legend=1 00:27:00.874 --rc geninfo_all_blocks=1 00:27:00.874 --rc geninfo_unexecuted_blocks=1 00:27:00.874 00:27:00.874 ' 00:27:00.874 21:49:21 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:27:00.874 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:00.874 --rc genhtml_branch_coverage=1 00:27:00.874 --rc genhtml_function_coverage=1 00:27:00.874 --rc genhtml_legend=1 00:27:00.874 --rc geninfo_all_blocks=1 00:27:00.874 --rc geninfo_unexecuted_blocks=1 00:27:00.874 00:27:00.874 ' 00:27:00.874 21:49:21 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:27:00.874 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:00.874 --rc 
genhtml_branch_coverage=1 00:27:00.874 --rc genhtml_function_coverage=1 00:27:00.874 --rc genhtml_legend=1 00:27:00.874 --rc geninfo_all_blocks=1 00:27:00.874 --rc geninfo_unexecuted_blocks=1 00:27:00.874 00:27:00.874 ' 00:27:00.874 21:49:21 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:27:00.874 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:00.874 --rc genhtml_branch_coverage=1 00:27:00.874 --rc genhtml_function_coverage=1 00:27:00.874 --rc genhtml_legend=1 00:27:00.874 --rc geninfo_all_blocks=1 00:27:00.874 --rc geninfo_unexecuted_blocks=1 00:27:00.874 00:27:00.874 ' 00:27:00.874 21:49:21 -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:27:00.874 21:49:21 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:00.874 21:49:21 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:00.874 21:49:21 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:00.874 21:49:21 -- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:27:00.875 21:49:21 -- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:27:00.875 21:49:21 -- paths/export.sh@4 -- # PATH=/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:27:00.875 21:49:21 -- paths/export.sh@5 -- # 
PATH=/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:27:00.875 21:49:21 -- paths/export.sh@6 -- # export PATH 00:27:00.875 21:49:21 -- paths/export.sh@7 -- # echo /opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:27:00.875 21:49:21 -- dd/negative_dd.sh@101 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:27:00.875 21:49:21 -- dd/negative_dd.sh@102 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:27:00.875 21:49:21 -- dd/negative_dd.sh@104 -- # touch /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:27:00.875 21:49:21 -- dd/negative_dd.sh@105 -- # touch /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:27:00.875 21:49:21 -- dd/negative_dd.sh@107 -- # run_test dd_invalid_arguments invalid_arguments 00:27:00.875 21:49:21 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:27:00.875 21:49:21 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:27:00.875 21:49:21 -- common/autotest_common.sh@10 -- # set +x 00:27:00.875 ************************************ 00:27:00.875 START TEST dd_invalid_arguments 00:27:00.875 ************************************ 00:27:00.875 21:49:21 -- common/autotest_common.sh@1114 -- # invalid_arguments 00:27:00.875 21:49:21 -- dd/negative_dd.sh@12 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:27:00.875 21:49:21 -- common/autotest_common.sh@650 -- # local es=0 00:27:00.875 21:49:21 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:27:00.875 21:49:21 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:00.875 21:49:21 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:00.875 21:49:21 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:00.875 21:49:21 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:00.875 21:49:21 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:00.875 21:49:21 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:00.875 21:49:21 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 
00:27:00.875 21:49:21 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:27:00.875 21:49:21 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:27:00.875 /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd: unrecognized option '--ii=' 00:27:00.875 /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd [options] 00:27:00.875 options: 00:27:00.875 -c, --config JSON config file (default none) 00:27:00.875 --json JSON config file (default none) 00:27:00.875 --json-ignore-init-errors 00:27:00.875 don't exit on invalid config entry 00:27:00.875 -d, --limit-coredump do not set max coredump size to RLIM_INFINITY 00:27:00.875 -g, --single-file-segments 00:27:00.875 force creating just one hugetlbfs file 00:27:00.875 -h, --help show this usage 00:27:00.875 -i, --shm-id shared memory ID (optional) 00:27:00.875 -m, --cpumask core mask (like 0xF) or core list of '[]' embraced (like [0,1,10]) for DPDK 00:27:00.875 --lcores lcore to CPU mapping list. The list is in the format: 00:27:00.875 [<,lcores[@CPUs]>...] 00:27:00.875 lcores and cpus list are grouped by '(' and ')', e.g '--lcores "(5-7)@(10-12)"' 00:27:00.875 Within the group, '-' is used for range separator, 00:27:00.875 ',' is used for single number separator. 00:27:00.875 '( )' can be omitted for single element group, 00:27:00.875 '@' can be omitted if cpus and lcores have the same value 00:27:00.875 -n, --mem-channels channel number of memory channels used for DPDK 00:27:00.875 -p, --main-core main (primary) core for DPDK 00:27:00.875 -r, --rpc-socket RPC listen address (default /var/tmp/spdk.sock) 00:27:00.875 -s, --mem-size memory size in MB for DPDK (default: 0MB) 00:27:00.875 --disable-cpumask-locks Disable CPU core lock files. 00:27:00.875 --silence-noticelog disable notice level logging to stderr 00:27:00.875 --msg-mempool-size global message memory pool size in count (default: 262143) 00:27:00.875 -u, --no-pci disable PCI access 00:27:00.875 --wait-for-rpc wait for RPCs to initialize subsystems 00:27:00.875 --max-delay maximum reactor delay (in microseconds) 00:27:00.875 -B, --pci-blocked pci addr to block (can be used more than once) 00:27:00.875 -A, --pci-allowed pci addr to allow (-B and -A cannot be used at the same time) 00:27:00.875 -R, --huge-unlink unlink huge files after initialization 00:27:00.875 -v, --version print SPDK version 00:27:00.875 --huge-dir use a specific hugetlbfs mount to reserve memory from 00:27:00.875 --iova-mode set IOVA mode ('pa' for IOVA_PA and 'va' for IOVA_VA) 00:27:00.875 --base-virtaddr the base virtual address for DPDK (default: 0x200000000000) 00:27:00.875 --num-trace-entries number of trace entries for each core, must be power of 2, setting 0 to disable trace (default 32768) 00:27:00.875 Tracepoints vary in size and can use more than one trace entry. 
00:27:00.875 --rpcs-allowed comma-separated list of permitted RPCS 00:27:00.875 --env-context Opaque context for use of the env implementation 00:27:00.875 --vfio-vf-token VF token (UUID) shared between SR-IOV PF and VFs for vfio_pci driver 00:27:00.875 --no-huge run without using hugepages 00:27:00.875 -L, --logflag enable log flag (all, accel, accel_dsa, accel_iaa, accel_ioat, aio, app_config, app_rpc, bdev, bdev_concat, bdev_ftl, bdev_malloc, bdev_null, bdev_nvme, bdev_raid, bdev_raid0, bdev_raid1, bdev_raid5f, bdev_raid_sb, blob, blob_esnap, blob_rw, blobfs, blobfs_bdev, blobfs_bdev_rpc, blobfs_rw, ftl_core, ftl_init, gpt_parse, idxd, ioat, iscsi_init, json_util, log, log_rpc, lvol, lvol_rpc, notify_rpc, nvme, nvme_cuse, nvme_vfio, opal, reactor, rpc, rpc_client, sock, sock_posix, thread, trace, vbdev_delay, vbdev_gpt, vbdev_lvol, vbdev_opal, vbdev_passthru, vbdev_split, vbdev_zone_block, vfio_pci, vfio_user, virtio, virtio_blk, virtio_dev, virtio_pci, virtio_user, virtio_vfio_user, vmd) 00:27:00.875 -e, --tpoint-group [:] 00:27:00.875 group_name - tracepoint group name for spdk trace buffers (bdev, ftl, blobfs, dsa, thread, nvme_pcie, iaa, nvme_tcp, bdev_nvme, all) 00:27:00.875 tpoint_mask - tracepoint mask for enabling individual tpoints inside a tracepoint group. First tpoint inside a group can be enabled by setting tpoint_mask to 1 (e.g. bdev:0x1). 00:27:00.875 Groups and masks can be combined (e.g. thread,bdev:0x1). [2024-12-06 21:49:21.344221] spdk_dd.c:1460:main: *ERROR*: Invalid arguments 00:27:01.135 All available tpoints can be found in /include/spdk_internal/trace_defs.h 00:27:01.135 --interrupt-mode set app to interrupt mode (Warning: CPU usage will be reduced only if all pollers in the app support interrupt mode) 00:27:01.135 [--------- DD Options ---------] 00:27:01.135 --if Input file. Must specify either --if or --ib. 00:27:01.135 --ib Input bdev. Must specify either --if or --ib 00:27:01.135 --of Output file. Must specify either --of or --ob. 00:27:01.135 --ob Output bdev. Must specify either --of or --ob. 00:27:01.135 --iflag Input file flags. 00:27:01.135 --oflag Output file flags. 00:27:01.135 --bs I/O unit size (default: 4096) 00:27:01.135 --qd Queue depth (default: 2) 00:27:01.135 --count I/O unit count. The number of I/O units to copy. (default: all) 00:27:01.135 --skip Skip this many I/O units at start of input. (default: 0) 00:27:01.135 --seek Skip this many I/O units at start of output. (default: 0) 00:27:01.135 --aio Force usage of AIO.
(by default io_uring is used if available) 00:27:01.135 --sparse Enable hole skipping in input target 00:27:01.135 Available iflag and oflag values: 00:27:01.135 append - append mode 00:27:01.135 direct - use direct I/O for data 00:27:01.135 directory - fail unless a directory 00:27:01.135 dsync - use synchronized I/O for data 00:27:01.135 noatime - do not update access time 00:27:01.135 noctty - do not assign controlling terminal from file 00:27:01.135 nofollow - do not follow symlinks 00:27:01.135 nonblock - use non-blocking I/O 00:27:01.135 sync - use synchronized I/O for data and metadata 00:27:01.135 21:49:21 -- common/autotest_common.sh@653 -- # es=2 00:27:01.135 21:49:21 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:27:01.135 21:49:21 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:27:01.135 21:49:21 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:27:01.135 00:27:01.135 real 0m0.112s 00:27:01.135 user 0m0.068s 00:27:01.135 sys 0m0.045s 00:27:01.135 21:49:21 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:27:01.135 21:49:21 -- common/autotest_common.sh@10 -- # set +x 00:27:01.135 ************************************ 00:27:01.135 END TEST dd_invalid_arguments 00:27:01.135 ************************************ 00:27:01.135 21:49:21 -- dd/negative_dd.sh@108 -- # run_test dd_double_input double_input 00:27:01.135 21:49:21 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:27:01.135 21:49:21 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:27:01.135 21:49:21 -- common/autotest_common.sh@10 -- # set +x 00:27:01.135 ************************************ 00:27:01.135 START TEST dd_double_input 00:27:01.135 ************************************ 00:27:01.135 21:49:21 -- common/autotest_common.sh@1114 -- # double_input 00:27:01.136 21:49:21 -- dd/negative_dd.sh@19 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:27:01.136 21:49:21 -- common/autotest_common.sh@650 -- # local es=0 00:27:01.136 21:49:21 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:27:01.136 21:49:21 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:01.136 21:49:21 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:01.136 21:49:21 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:01.136 21:49:21 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:01.136 21:49:21 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:01.136 21:49:21 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:01.136 21:49:21 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:01.136 21:49:21 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:27:01.136 21:49:21 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:27:01.136 [2024-12-06 21:49:21.494199] spdk_dd.c:1467:main: *ERROR*: You may specify either --if or --ib, but not both. 
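The dd_invalid_arguments and dd_double_input runs above exercise spdk_dd's argument validation: the suite invokes the binary with an unrecognized or mutually exclusive flag and treats a non-zero exit as a pass. A minimal stand-alone sketch of that pattern follows, reusing the binary path traced in this log; expect_failure is a hypothetical, simplified stand-in for the suite's NOT helper (which additionally distinguishes crashes, exit status > 128, from ordinary errors).

#!/usr/bin/env bash
# Sketch of the negative-test pattern (assumption: expect_failure is a
# simplified stand-in for the suite's NOT/xtrace machinery, not a real helper).
SPDK_DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd

expect_failure() {
    # Run the command; succeed only if it exits non-zero.
    if "$@"; then
        echo "FAIL: expected non-zero exit from: $*" >&2
        return 1
    fi
    echo "OK: rejected as expected: $*"
}

# An unrecognized option is rejected (cf. the --ii= run traced above).
expect_failure "$SPDK_DD" --ii= --ob=
# --if and --ib are mutually exclusive, as the error captured above shows.
expect_failure "$SPDK_DD" --if=/tmp/dd.dump0 --ib=Nvme0n1 --ob=aio1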
00:27:01.136 21:49:21 -- common/autotest_common.sh@653 -- # es=22 00:27:01.136 21:49:21 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:27:01.136 21:49:21 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:27:01.136 21:49:21 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:27:01.136 00:27:01.136 real 0m0.093s 00:27:01.136 user 0m0.048s 00:27:01.136 sys 0m0.046s 00:27:01.136 21:49:21 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:27:01.136 21:49:21 -- common/autotest_common.sh@10 -- # set +x 00:27:01.136 ************************************ 00:27:01.136 END TEST dd_double_input 00:27:01.136 ************************************ 00:27:01.136 21:49:21 -- dd/negative_dd.sh@109 -- # run_test dd_double_output double_output 00:27:01.136 21:49:21 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:27:01.136 21:49:21 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:27:01.136 21:49:21 -- common/autotest_common.sh@10 -- # set +x 00:27:01.136 ************************************ 00:27:01.136 START TEST dd_double_output 00:27:01.136 ************************************ 00:27:01.136 21:49:21 -- common/autotest_common.sh@1114 -- # double_output 00:27:01.136 21:49:21 -- dd/negative_dd.sh@27 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:27:01.136 21:49:21 -- common/autotest_common.sh@650 -- # local es=0 00:27:01.136 21:49:21 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:27:01.136 21:49:21 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:01.136 21:49:21 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:01.136 21:49:21 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:01.136 21:49:21 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:01.136 21:49:21 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:01.136 21:49:21 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:01.136 21:49:21 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:01.136 21:49:21 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:27:01.136 21:49:21 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:27:01.418 [2024-12-06 21:49:21.642085] spdk_dd.c:1473:main: *ERROR*: You may specify either --of or --ob, but not both. 
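Every spdk_dd invocation in the bdev-to-bdev and sparse runs above receives its bdev configuration as JSON on an anonymous descriptor (--json /dev/fd/62) generated by the suite's gen_conf helper, rather than from a file on disk. A hedged sketch of that plumbing using bash process substitution follows; the backing-file path and copy parameters are illustrative, and the inline config is a pared-down version of the aio1 definitions printed in the log, not the suite's exact generator.

#!/usr/bin/env bash
# Sketch: pipe a generated bdev config into spdk_dd over /dev/fd/N via process
# substitution (assumption: simplified gen_conf stand-in; paths illustrative).
SPDK_DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
truncate --size 4M /tmp/aio_backing_file   # backing file for the AIO bdev

gen_conf() {
cat <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "params": { "block_size": 4096, "filename": "/tmp/aio_backing_file", "name": "aio1" },
          "method": "bdev_aio_create"
        },
        { "method": "bdev_wait_for_examine" }
      ]
    }
  ]
}
EOF
}

# <(gen_conf) expands to a /dev/fd path, matching the --json /dev/fd/62 traces.
"$SPDK_DD" --ib=aio1 --of=/tmp/dd.dump1 --bs=1048576 --count=1 --json <(gen_conf)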
00:27:01.418 21:49:21 -- common/autotest_common.sh@653 -- # es=22 00:27:01.418 21:49:21 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:27:01.418 21:49:21 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:27:01.418 21:49:21 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:27:01.418 00:27:01.418 real 0m0.111s 00:27:01.418 user 0m0.061s 00:27:01.418 sys 0m0.051s 00:27:01.418 21:49:21 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:27:01.418 21:49:21 -- common/autotest_common.sh@10 -- # set +x 00:27:01.418 ************************************ 00:27:01.418 END TEST dd_double_output 00:27:01.418 ************************************ 00:27:01.418 21:49:21 -- dd/negative_dd.sh@110 -- # run_test dd_no_input no_input 00:27:01.418 21:49:21 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:27:01.418 21:49:21 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:27:01.418 21:49:21 -- common/autotest_common.sh@10 -- # set +x 00:27:01.418 ************************************ 00:27:01.418 START TEST dd_no_input 00:27:01.418 ************************************ 00:27:01.418 21:49:21 -- common/autotest_common.sh@1114 -- # no_input 00:27:01.418 21:49:21 -- dd/negative_dd.sh@35 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:27:01.418 21:49:21 -- common/autotest_common.sh@650 -- # local es=0 00:27:01.418 21:49:21 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:27:01.418 21:49:21 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:01.418 21:49:21 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:01.418 21:49:21 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:01.418 21:49:21 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:01.418 21:49:21 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:01.418 21:49:21 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:01.418 21:49:21 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:01.418 21:49:21 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:27:01.418 21:49:21 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:27:01.418 [2024-12-06 21:49:21.790091] spdk_dd.c:1479:main: *ERROR*: You must specify either --if or --ib 00:27:01.418 21:49:21 -- common/autotest_common.sh@653 -- # es=22 00:27:01.418 21:49:21 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:27:01.418 21:49:21 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:27:01.418 21:49:21 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:27:01.418 00:27:01.418 real 0m0.092s 00:27:01.418 user 0m0.048s 00:27:01.418 sys 0m0.044s 00:27:01.418 21:49:21 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:27:01.418 ************************************ 00:27:01.418 END TEST dd_no_input 00:27:01.418 ************************************ 00:27:01.418 21:49:21 -- common/autotest_common.sh@10 -- # set +x 00:27:01.418 21:49:21 -- dd/negative_dd.sh@111 -- # run_test dd_no_output no_output 00:27:01.418 21:49:21 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:27:01.418 21:49:21 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:27:01.418 21:49:21 -- common/autotest_common.sh@10 -- # set +x 00:27:01.418 ************************************ 
00:27:01.418 START TEST dd_no_output 00:27:01.418 ************************************ 00:27:01.418 21:49:21 -- common/autotest_common.sh@1114 -- # no_output 00:27:01.418 21:49:21 -- dd/negative_dd.sh@41 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:27:01.418 21:49:21 -- common/autotest_common.sh@650 -- # local es=0 00:27:01.418 21:49:21 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:27:01.418 21:49:21 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:01.418 21:49:21 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:01.418 21:49:21 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:01.418 21:49:21 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:01.418 21:49:21 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:01.418 21:49:21 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:01.418 21:49:21 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:01.418 21:49:21 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:27:01.418 21:49:21 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:27:01.678 [2024-12-06 21:49:21.940932] spdk_dd.c:1485:main: *ERROR*: You must specify either --of or --ob 00:27:01.678 21:49:21 -- common/autotest_common.sh@653 -- # es=22 00:27:01.678 21:49:21 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:27:01.678 21:49:21 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:27:01.678 21:49:21 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:27:01.678 00:27:01.678 real 0m0.112s 00:27:01.678 user 0m0.059s 00:27:01.678 sys 0m0.053s 00:27:01.678 21:49:21 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:27:01.678 21:49:21 -- common/autotest_common.sh@10 -- # set +x 00:27:01.678 ************************************ 00:27:01.678 END TEST dd_no_output 00:27:01.678 ************************************ 00:27:01.678 21:49:22 -- dd/negative_dd.sh@112 -- # run_test dd_wrong_blocksize wrong_blocksize 00:27:01.678 21:49:22 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:27:01.678 21:49:22 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:27:01.678 21:49:22 -- common/autotest_common.sh@10 -- # set +x 00:27:01.678 ************************************ 00:27:01.678 START TEST dd_wrong_blocksize 00:27:01.678 ************************************ 00:27:01.678 21:49:22 -- common/autotest_common.sh@1114 -- # wrong_blocksize 00:27:01.678 21:49:22 -- dd/negative_dd.sh@47 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:27:01.678 21:49:22 -- common/autotest_common.sh@650 -- # local es=0 00:27:01.678 21:49:22 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:27:01.678 21:49:22 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:01.678 21:49:22 -- common/autotest_common.sh@642 -- # case 
"$(type -t "$arg")" in 00:27:01.678 21:49:22 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:01.678 21:49:22 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:01.678 21:49:22 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:01.678 21:49:22 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:01.678 21:49:22 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:01.678 21:49:22 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:27:01.678 21:49:22 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:27:01.678 [2024-12-06 21:49:22.090280] spdk_dd.c:1491:main: *ERROR*: Invalid --bs value 00:27:01.678 21:49:22 -- common/autotest_common.sh@653 -- # es=22 00:27:01.678 21:49:22 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:27:01.678 21:49:22 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:27:01.678 21:49:22 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:27:01.678 00:27:01.678 real 0m0.080s 00:27:01.678 user 0m0.047s 00:27:01.678 sys 0m0.034s 00:27:01.678 21:49:22 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:27:01.678 ************************************ 00:27:01.678 END TEST dd_wrong_blocksize 00:27:01.678 21:49:22 -- common/autotest_common.sh@10 -- # set +x 00:27:01.678 ************************************ 00:27:01.678 21:49:22 -- dd/negative_dd.sh@113 -- # run_test dd_smaller_blocksize smaller_blocksize 00:27:01.678 21:49:22 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:27:01.678 21:49:22 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:27:01.678 21:49:22 -- common/autotest_common.sh@10 -- # set +x 00:27:01.938 ************************************ 00:27:01.938 START TEST dd_smaller_blocksize 00:27:01.938 ************************************ 00:27:01.938 21:49:22 -- common/autotest_common.sh@1114 -- # smaller_blocksize 00:27:01.938 21:49:22 -- dd/negative_dd.sh@55 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:27:01.938 21:49:22 -- common/autotest_common.sh@650 -- # local es=0 00:27:01.938 21:49:22 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:27:01.938 21:49:22 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:01.938 21:49:22 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:01.938 21:49:22 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:01.938 21:49:22 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:01.938 21:49:22 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:01.938 21:49:22 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:01.938 21:49:22 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:01.938 21:49:22 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 
00:27:01.938 21:49:22 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:27:01.938 [2024-12-06 21:49:22.245235] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:27:01.938 [2024-12-06 21:49:22.245392] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90381 ] 00:27:01.938 [2024-12-06 21:49:22.414697] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:02.198 [2024-12-06 21:49:22.564406] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:02.765 EAL: eal_memalloc_alloc_seg_bulk(): couldn't find suitable memseg_list 00:27:02.765 [2024-12-06 21:49:23.000519] spdk_dd.c:1168:dd_run: *ERROR*: Cannot allocate memory - try smaller block size value 00:27:02.765 [2024-12-06 21:49:23.000593] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:27:03.334 [2024-12-06 21:49:23.553887] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:27:03.593 21:49:23 -- common/autotest_common.sh@653 -- # es=244 00:27:03.593 21:49:23 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:27:03.593 21:49:23 -- common/autotest_common.sh@662 -- # es=116 00:27:03.593 21:49:23 -- common/autotest_common.sh@663 -- # case "$es" in 00:27:03.593 21:49:23 -- common/autotest_common.sh@670 -- # es=1 00:27:03.593 21:49:23 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:27:03.593 00:27:03.593 real 0m1.716s 00:27:03.593 user 0m1.255s 00:27:03.593 sys 0m0.359s 00:27:03.593 21:49:23 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:27:03.593 ************************************ 00:27:03.593 END TEST dd_smaller_blocksize 00:27:03.593 ************************************ 00:27:03.593 21:49:23 -- common/autotest_common.sh@10 -- # set +x 00:27:03.593 21:49:23 -- dd/negative_dd.sh@114 -- # run_test dd_invalid_count invalid_count 00:27:03.593 21:49:23 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:27:03.593 21:49:23 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:27:03.593 21:49:23 -- common/autotest_common.sh@10 -- # set +x 00:27:03.593 ************************************ 00:27:03.593 START TEST dd_invalid_count 00:27:03.593 ************************************ 00:27:03.593 21:49:23 -- common/autotest_common.sh@1114 -- # invalid_count 00:27:03.593 21:49:23 -- dd/negative_dd.sh@63 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:27:03.593 21:49:23 -- common/autotest_common.sh@650 -- # local es=0 00:27:03.593 21:49:23 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:27:03.593 21:49:23 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:03.593 21:49:23 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:03.593 21:49:23 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:03.593 21:49:23 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:03.593 21:49:23 
-- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:03.593 21:49:23 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:03.593 21:49:23 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:03.593 21:49:23 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:27:03.593 21:49:23 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:27:03.593 [2024-12-06 21:49:23.990562] spdk_dd.c:1497:main: *ERROR*: Invalid --count value 00:27:03.593 21:49:24 -- common/autotest_common.sh@653 -- # es=22 00:27:03.593 21:49:24 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:27:03.593 21:49:24 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:27:03.593 21:49:24 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:27:03.593 00:27:03.593 real 0m0.089s 00:27:03.593 user 0m0.051s 00:27:03.593 sys 0m0.038s 00:27:03.593 21:49:24 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:27:03.593 21:49:24 -- common/autotest_common.sh@10 -- # set +x 00:27:03.593 ************************************ 00:27:03.593 END TEST dd_invalid_count 00:27:03.593 ************************************ 00:27:03.593 21:49:24 -- dd/negative_dd.sh@115 -- # run_test dd_invalid_oflag invalid_oflag 00:27:03.593 21:49:24 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:27:03.593 21:49:24 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:27:03.593 21:49:24 -- common/autotest_common.sh@10 -- # set +x 00:27:03.593 ************************************ 00:27:03.593 START TEST dd_invalid_oflag 00:27:03.593 ************************************ 00:27:03.593 21:49:24 -- common/autotest_common.sh@1114 -- # invalid_oflag 00:27:03.593 21:49:24 -- dd/negative_dd.sh@71 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:27:03.593 21:49:24 -- common/autotest_common.sh@650 -- # local es=0 00:27:03.593 21:49:24 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:27:03.593 21:49:24 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:03.593 21:49:24 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:03.593 21:49:24 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:03.593 21:49:24 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:03.593 21:49:24 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:03.593 21:49:24 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:03.593 21:49:24 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:03.593 21:49:24 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:27:03.593 21:49:24 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:27:03.852 [2024-12-06 21:49:24.126930] spdk_dd.c:1503:main: *ERROR*: --oflags may be used only with --of 00:27:03.852 21:49:24 -- common/autotest_common.sh@653 -- # es=22 00:27:03.852 21:49:24 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:27:03.852 21:49:24 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:27:03.852 
21:49:24 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:27:03.852 00:27:03.852 real 0m0.075s 00:27:03.852 user 0m0.030s 00:27:03.852 sys 0m0.045s 00:27:03.852 21:49:24 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:27:03.852 ************************************ 00:27:03.852 END TEST dd_invalid_oflag 00:27:03.852 ************************************ 00:27:03.852 21:49:24 -- common/autotest_common.sh@10 -- # set +x 00:27:03.852 21:49:24 -- dd/negative_dd.sh@116 -- # run_test dd_invalid_iflag invalid_iflag 00:27:03.852 21:49:24 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:27:03.852 21:49:24 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:27:03.852 21:49:24 -- common/autotest_common.sh@10 -- # set +x 00:27:03.852 ************************************ 00:27:03.852 START TEST dd_invalid_iflag 00:27:03.852 ************************************ 00:27:03.852 21:49:24 -- common/autotest_common.sh@1114 -- # invalid_iflag 00:27:03.852 21:49:24 -- dd/negative_dd.sh@79 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:27:03.852 21:49:24 -- common/autotest_common.sh@650 -- # local es=0 00:27:03.852 21:49:24 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:27:03.852 21:49:24 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:03.852 21:49:24 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:03.852 21:49:24 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:03.852 21:49:24 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:03.852 21:49:24 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:03.852 21:49:24 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:03.852 21:49:24 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:03.852 21:49:24 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:27:03.852 21:49:24 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:27:03.852 [2024-12-06 21:49:24.262893] spdk_dd.c:1509:main: *ERROR*: --iflags may be used only with --if 00:27:03.852 21:49:24 -- common/autotest_common.sh@653 -- # es=22 00:27:03.852 21:49:24 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:27:03.852 21:49:24 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:27:03.852 21:49:24 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:27:03.852 00:27:03.852 real 0m0.109s 00:27:03.852 user 0m0.063s 00:27:03.852 sys 0m0.046s 00:27:03.852 21:49:24 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:27:03.852 ************************************ 00:27:03.852 END TEST dd_invalid_iflag 00:27:03.852 ************************************ 00:27:03.853 21:49:24 -- common/autotest_common.sh@10 -- # set +x 00:27:04.111 21:49:24 -- dd/negative_dd.sh@117 -- # run_test dd_unknown_flag unknown_flag 00:27:04.111 21:49:24 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:27:04.111 21:49:24 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:27:04.111 21:49:24 -- common/autotest_common.sh@10 -- # set +x 00:27:04.111 ************************************ 00:27:04.111 START TEST dd_unknown_flag 00:27:04.111 ************************************ 00:27:04.111 21:49:24 -- common/autotest_common.sh@1114 -- # 
unknown_flag 00:27:04.111 21:49:24 -- dd/negative_dd.sh@87 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:27:04.111 21:49:24 -- common/autotest_common.sh@650 -- # local es=0 00:27:04.111 21:49:24 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:27:04.111 21:49:24 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:04.111 21:49:24 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:04.111 21:49:24 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:04.111 21:49:24 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:04.111 21:49:24 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:04.111 21:49:24 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:04.111 21:49:24 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:04.111 21:49:24 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:27:04.111 21:49:24 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:27:04.111 [2024-12-06 21:49:24.431155] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:27:04.111 [2024-12-06 21:49:24.431315] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90487 ] 00:27:04.111 [2024-12-06 21:49:24.598836] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:04.371 [2024-12-06 21:49:24.746650] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:04.631 [2024-12-06 21:49:24.965689] spdk_dd.c: 985:parse_flags: *ERROR*: Unknown file flag: -1 00:27:04.631 [2024-12-06 21:49:24.965760] spdk_dd.c: 893:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1: Not a directory 00:27:04.631 [2024-12-06 21:49:24.965776] spdk_dd.c:1116:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1: Not a directory 00:27:04.631 [2024-12-06 21:49:24.965793] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:27:05.198 [2024-12-06 21:49:25.510116] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:27:05.456 21:49:25 -- common/autotest_common.sh@653 -- # es=236 00:27:05.456 21:49:25 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:27:05.456 21:49:25 -- common/autotest_common.sh@662 -- # es=108 00:27:05.456 21:49:25 -- common/autotest_common.sh@663 -- # case "$es" in 00:27:05.456 21:49:25 -- common/autotest_common.sh@670 -- # es=1 00:27:05.456 21:49:25 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:27:05.456 00:27:05.456 real 0m1.493s 00:27:05.456 user 0m1.206s 00:27:05.456 sys 0m0.185s 00:27:05.456 21:49:25 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:27:05.456 ************************************ 00:27:05.456 END TEST dd_unknown_flag 00:27:05.456 
************************************ 00:27:05.456 21:49:25 -- common/autotest_common.sh@10 -- # set +x 00:27:05.456 21:49:25 -- dd/negative_dd.sh@118 -- # run_test dd_invalid_json invalid_json 00:27:05.456 21:49:25 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:27:05.456 21:49:25 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:27:05.456 21:49:25 -- common/autotest_common.sh@10 -- # set +x 00:27:05.456 ************************************ 00:27:05.456 START TEST dd_invalid_json 00:27:05.456 ************************************ 00:27:05.456 21:49:25 -- common/autotest_common.sh@1114 -- # invalid_json 00:27:05.456 21:49:25 -- dd/negative_dd.sh@95 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:27:05.456 21:49:25 -- common/autotest_common.sh@650 -- # local es=0 00:27:05.456 21:49:25 -- dd/negative_dd.sh@95 -- # : 00:27:05.456 21:49:25 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:27:05.456 21:49:25 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:05.456 21:49:25 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:05.456 21:49:25 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:05.456 21:49:25 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:05.456 21:49:25 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:05.456 21:49:25 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:05.456 21:49:25 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:05.456 21:49:25 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:27:05.456 21:49:25 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:27:05.715 [2024-12-06 21:49:25.975358] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:27:05.715 [2024-12-06 21:49:25.975536] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90522 ] 00:27:05.715 [2024-12-06 21:49:26.143286] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:05.973 [2024-12-06 21:49:26.295152] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:05.973 [2024-12-06 21:49:26.295338] json_config.c: 529:app_json_config_read: *ERROR*: Parsing JSON configuration failed (-2) 00:27:05.973 [2024-12-06 21:49:26.295369] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:27:05.973 [2024-12-06 21:49:26.295430] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:27:06.231 21:49:26 -- common/autotest_common.sh@653 -- # es=234 00:27:06.231 21:49:26 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:27:06.231 21:49:26 -- common/autotest_common.sh@662 -- # es=106 00:27:06.231 21:49:26 -- common/autotest_common.sh@663 -- # case "$es" in 00:27:06.231 21:49:26 -- common/autotest_common.sh@670 -- # es=1 00:27:06.231 21:49:26 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:27:06.231 00:27:06.231 real 0m0.717s 00:27:06.231 user 0m0.503s 00:27:06.231 sys 0m0.115s 00:27:06.231 21:49:26 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:27:06.231 21:49:26 -- common/autotest_common.sh@10 -- # set +x 00:27:06.231 ************************************ 00:27:06.231 END TEST dd_invalid_json 00:27:06.231 ************************************ 00:27:06.231 00:27:06.231 real 0m5.565s 00:27:06.231 user 0m3.701s 00:27:06.231 sys 0m1.525s 00:27:06.231 21:49:26 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:27:06.231 21:49:26 -- common/autotest_common.sh@10 -- # set +x 00:27:06.231 ************************************ 00:27:06.231 END TEST spdk_dd_negative 00:27:06.231 ************************************ 00:27:06.231 00:27:06.231 real 2m10.838s 00:27:06.231 user 1m42.591s 00:27:06.231 sys 0m18.234s 00:27:06.231 21:49:26 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:27:06.231 ************************************ 00:27:06.231 END TEST spdk_dd 00:27:06.231 21:49:26 -- common/autotest_common.sh@10 -- # set +x 00:27:06.231 ************************************ 00:27:06.489 21:49:26 -- spdk/autotest.sh@204 -- # '[' 1 -eq 1 ']' 00:27:06.489 21:49:26 -- spdk/autotest.sh@205 -- # run_test blockdev_nvme /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:27:06.489 21:49:26 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:27:06.489 21:49:26 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:27:06.489 21:49:26 -- common/autotest_common.sh@10 -- # set +x 00:27:06.489 ************************************ 00:27:06.489 START TEST blockdev_nvme 00:27:06.489 ************************************ 00:27:06.489 21:49:26 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:27:06.489 * Looking for test storage... 
00:27:06.489 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:27:06.489 21:49:26 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:27:06.489 21:49:26 -- common/autotest_common.sh@1690 -- # lcov --version 00:27:06.489 21:49:26 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:27:06.489 21:49:26 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:27:06.489 21:49:26 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:27:06.489 21:49:26 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:27:06.489 21:49:26 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:27:06.489 21:49:26 -- scripts/common.sh@335 -- # IFS=.-: 00:27:06.489 21:49:26 -- scripts/common.sh@335 -- # read -ra ver1 00:27:06.489 21:49:26 -- scripts/common.sh@336 -- # IFS=.-: 00:27:06.489 21:49:26 -- scripts/common.sh@336 -- # read -ra ver2 00:27:06.489 21:49:26 -- scripts/common.sh@337 -- # local 'op=<' 00:27:06.489 21:49:26 -- scripts/common.sh@339 -- # ver1_l=2 00:27:06.489 21:49:26 -- scripts/common.sh@340 -- # ver2_l=1 00:27:06.489 21:49:26 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:27:06.489 21:49:26 -- scripts/common.sh@343 -- # case "$op" in 00:27:06.489 21:49:26 -- scripts/common.sh@344 -- # : 1 00:27:06.489 21:49:26 -- scripts/common.sh@363 -- # (( v = 0 )) 00:27:06.489 21:49:26 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:27:06.489 21:49:26 -- scripts/common.sh@364 -- # decimal 1 00:27:06.489 21:49:26 -- scripts/common.sh@352 -- # local d=1 00:27:06.489 21:49:26 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:06.489 21:49:26 -- scripts/common.sh@354 -- # echo 1 00:27:06.489 21:49:26 -- scripts/common.sh@364 -- # ver1[v]=1 00:27:06.489 21:49:26 -- scripts/common.sh@365 -- # decimal 2 00:27:06.489 21:49:26 -- scripts/common.sh@352 -- # local d=2 00:27:06.489 21:49:26 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:06.489 21:49:26 -- scripts/common.sh@354 -- # echo 2 00:27:06.489 21:49:26 -- scripts/common.sh@365 -- # ver2[v]=2 00:27:06.489 21:49:26 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:27:06.489 21:49:26 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:27:06.489 21:49:26 -- scripts/common.sh@367 -- # return 0 00:27:06.489 21:49:26 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:06.489 21:49:26 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:27:06.489 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:06.489 --rc genhtml_branch_coverage=1 00:27:06.489 --rc genhtml_function_coverage=1 00:27:06.489 --rc genhtml_legend=1 00:27:06.489 --rc geninfo_all_blocks=1 00:27:06.489 --rc geninfo_unexecuted_blocks=1 00:27:06.489 00:27:06.489 ' 00:27:06.489 21:49:26 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:27:06.489 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:06.489 --rc genhtml_branch_coverage=1 00:27:06.489 --rc genhtml_function_coverage=1 00:27:06.489 --rc genhtml_legend=1 00:27:06.489 --rc geninfo_all_blocks=1 00:27:06.489 --rc geninfo_unexecuted_blocks=1 00:27:06.489 00:27:06.489 ' 00:27:06.489 21:49:26 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:27:06.489 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:06.489 --rc genhtml_branch_coverage=1 00:27:06.489 --rc genhtml_function_coverage=1 00:27:06.489 --rc genhtml_legend=1 00:27:06.489 --rc geninfo_all_blocks=1 00:27:06.489 --rc geninfo_unexecuted_blocks=1 00:27:06.489 00:27:06.489 ' 00:27:06.489 21:49:26 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:27:06.489 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:06.489 --rc genhtml_branch_coverage=1 00:27:06.489 --rc genhtml_function_coverage=1 00:27:06.489 --rc genhtml_legend=1 00:27:06.489 --rc geninfo_all_blocks=1 00:27:06.489 --rc geninfo_unexecuted_blocks=1 00:27:06.489 00:27:06.489 ' 00:27:06.489 21:49:26 -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:27:06.489 21:49:26 -- bdev/nbd_common.sh@6 -- # set -e 00:27:06.489 21:49:26 -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:27:06.489 21:49:26 -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:27:06.489 21:49:26 -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:27:06.489 21:49:26 -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:27:06.489 21:49:26 -- bdev/blockdev.sh@18 -- # : 00:27:06.489 21:49:26 -- bdev/blockdev.sh@668 -- # QOS_DEV_1=Malloc_0 00:27:06.489 21:49:26 -- bdev/blockdev.sh@669 -- # QOS_DEV_2=Null_1 00:27:06.489 21:49:26 -- bdev/blockdev.sh@670 -- # QOS_RUN_TIME=5 00:27:06.489 21:49:26 -- bdev/blockdev.sh@672 -- # uname -s 00:27:06.489 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:06.489 21:49:26 -- bdev/blockdev.sh@672 -- # '[' Linux = Linux ']' 00:27:06.489 21:49:26 -- bdev/blockdev.sh@674 -- # PRE_RESERVED_MEM=0 00:27:06.489 21:49:26 -- bdev/blockdev.sh@680 -- # test_type=nvme 00:27:06.489 21:49:26 -- bdev/blockdev.sh@681 -- # crypto_device= 00:27:06.489 21:49:26 -- bdev/blockdev.sh@682 -- # dek= 00:27:06.489 21:49:26 -- bdev/blockdev.sh@683 -- # env_ctx= 00:27:06.489 21:49:26 -- bdev/blockdev.sh@684 -- # wait_for_rpc= 00:27:06.489 21:49:26 -- bdev/blockdev.sh@685 -- # '[' -n '' ']' 00:27:06.489 21:49:26 -- bdev/blockdev.sh@688 -- # [[ nvme == bdev ]] 00:27:06.489 21:49:26 -- bdev/blockdev.sh@688 -- # [[ nvme == crypto_* ]] 00:27:06.489 21:49:26 -- bdev/blockdev.sh@691 -- # start_spdk_tgt 00:27:06.489 21:49:26 -- bdev/blockdev.sh@45 -- # spdk_tgt_pid=90615 00:27:06.489 21:49:26 -- bdev/blockdev.sh@46 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:27:06.489 21:49:26 -- bdev/blockdev.sh@47 -- # waitforlisten 90615 00:27:06.489 21:49:26 -- common/autotest_common.sh@829 -- # '[' -z 90615 ']' 00:27:06.489 21:49:26 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:06.489 21:49:26 -- bdev/blockdev.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:27:06.489 21:49:26 -- common/autotest_common.sh@834 -- # local max_retries=100 00:27:06.489 21:49:26 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:06.489 21:49:26 -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:06.489 21:49:26 -- common/autotest_common.sh@10 -- # set +x 00:27:06.489 [2024-12-06 21:49:26.985298] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:27:06.489 [2024-12-06 21:49:26.985508] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90615 ] 00:27:06.781 [2024-12-06 21:49:27.155049] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:07.053 [2024-12-06 21:49:27.313345] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:27:07.053 [2024-12-06 21:49:27.313545] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:07.634 21:49:27 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:07.634 21:49:27 -- common/autotest_common.sh@862 -- # return 0 00:27:07.634 21:49:27 -- bdev/blockdev.sh@692 -- # case "$test_type" in 00:27:07.634 21:49:27 -- bdev/blockdev.sh@697 -- # setup_nvme_conf 00:27:07.634 21:49:27 -- bdev/blockdev.sh@79 -- # local json 00:27:07.634 21:49:27 -- bdev/blockdev.sh@80 -- # mapfile -t json 00:27:07.634 21:49:27 -- bdev/blockdev.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:27:07.634 21:49:27 -- bdev/blockdev.sh@81 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:06.0" } } ] }'\''' 00:27:07.634 21:49:27 -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:07.634 21:49:27 -- common/autotest_common.sh@10 -- # set +x 00:27:07.634 21:49:28 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:07.634 21:49:28 -- bdev/blockdev.sh@735 -- # rpc_cmd bdev_wait_for_examine 00:27:07.634 21:49:28 -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:07.634 21:49:28 -- common/autotest_common.sh@10 -- # set +x 00:27:07.634 21:49:28 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:07.634 21:49:28 -- bdev/blockdev.sh@738 -- # cat 00:27:07.634 21:49:28 -- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n accel 00:27:07.634 21:49:28 -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:07.634 21:49:28 -- common/autotest_common.sh@10 -- # set +x 00:27:07.634 21:49:28 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:07.634 21:49:28 -- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n bdev 00:27:07.634 21:49:28 -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:07.634 21:49:28 -- common/autotest_common.sh@10 -- # set +x 00:27:07.634 21:49:28 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:07.634 21:49:28 -- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n iobuf 00:27:07.634 21:49:28 -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:07.634 21:49:28 -- common/autotest_common.sh@10 -- # set +x 00:27:07.634 21:49:28 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:07.634 21:49:28 -- bdev/blockdev.sh@746 -- # mapfile -t bdevs 00:27:07.634 21:49:28 -- bdev/blockdev.sh@746 -- # jq -r '.[] | select(.claimed == false)' 00:27:07.634 21:49:28 -- bdev/blockdev.sh@746 -- # rpc_cmd bdev_get_bdevs 00:27:07.634 21:49:28 -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:07.634 21:49:28 -- common/autotest_common.sh@10 -- # set +x 00:27:07.894 21:49:28 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:07.894 21:49:28 -- bdev/blockdev.sh@747 -- # mapfile -t bdevs_name 00:27:07.894 21:49:28 -- bdev/blockdev.sh@747 -- # jq -r .name 00:27:07.894 21:49:28 -- bdev/blockdev.sh@747 -- # printf '%s\n' '{' ' "name": "Nvme0n1",' ' 
"aliases": [' ' "0cef6764-92c7-40c2-961c-3a0d0b96a6d4"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "0cef6764-92c7-40c2-961c-3a0d0b96a6d4",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": true,' ' "nvme_io": true' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:06.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:06.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12340",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12340",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' 00:27:07.894 21:49:28 -- bdev/blockdev.sh@748 -- # bdev_list=("${bdevs_name[@]}") 00:27:07.894 21:49:28 -- bdev/blockdev.sh@750 -- # hello_world_bdev=Nvme0n1 00:27:07.894 21:49:28 -- bdev/blockdev.sh@751 -- # trap - SIGINT SIGTERM EXIT 00:27:07.894 21:49:28 -- bdev/blockdev.sh@752 -- # killprocess 90615 00:27:07.894 21:49:28 -- common/autotest_common.sh@936 -- # '[' -z 90615 ']' 00:27:07.894 21:49:28 -- common/autotest_common.sh@940 -- # kill -0 90615 00:27:07.894 21:49:28 -- common/autotest_common.sh@941 -- # uname 00:27:07.894 21:49:28 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:27:07.894 21:49:28 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 90615 00:27:07.894 21:49:28 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:27:07.894 killing process with pid 90615 00:27:07.894 21:49:28 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:27:07.894 21:49:28 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 90615' 00:27:07.894 21:49:28 -- common/autotest_common.sh@955 -- # kill 90615 00:27:07.894 21:49:28 -- common/autotest_common.sh@960 -- # wait 90615 00:27:09.798 21:49:29 -- bdev/blockdev.sh@756 -- # trap cleanup SIGINT SIGTERM EXIT 00:27:09.798 21:49:29 -- bdev/blockdev.sh@758 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:27:09.798 21:49:29 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:27:09.798 21:49:29 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:27:09.798 21:49:29 -- common/autotest_common.sh@10 -- # set +x 00:27:09.798 ************************************ 00:27:09.798 START TEST bdev_hello_world 00:27:09.798 ************************************ 00:27:09.798 21:49:29 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:27:09.798 [2024-12-06 21:49:29.910933] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:27:09.798 [2024-12-06 21:49:29.911033] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90686 ] 00:27:09.798 [2024-12-06 21:49:30.062388] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:09.798 [2024-12-06 21:49:30.209274] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:10.366 [2024-12-06 21:49:30.556016] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:27:10.367 [2024-12-06 21:49:30.556096] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1 00:27:10.367 [2024-12-06 21:49:30.556120] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:27:10.367 [2024-12-06 21:49:30.559101] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:27:10.367 [2024-12-06 21:49:30.559675] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:27:10.367 [2024-12-06 21:49:30.559715] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:27:10.367 [2024-12-06 21:49:30.559996] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 00:27:10.367 00:27:10.367 [2024-12-06 21:49:30.560035] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:27:11.303 00:27:11.303 real 0m1.585s 00:27:11.303 user 0m1.301s 00:27:11.303 sys 0m0.185s 00:27:11.303 21:49:31 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:27:11.303 21:49:31 -- common/autotest_common.sh@10 -- # set +x 00:27:11.303 ************************************ 00:27:11.303 END TEST bdev_hello_world 00:27:11.303 ************************************ 00:27:11.303 21:49:31 -- bdev/blockdev.sh@759 -- # run_test bdev_bounds bdev_bounds '' 00:27:11.303 21:49:31 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:27:11.303 21:49:31 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:27:11.303 21:49:31 -- common/autotest_common.sh@10 -- # set +x 00:27:11.303 ************************************ 00:27:11.303 START TEST bdev_bounds 00:27:11.303 ************************************ 00:27:11.303 21:49:31 -- common/autotest_common.sh@1114 -- # bdev_bounds '' 00:27:11.303 21:49:31 -- bdev/blockdev.sh@288 -- # bdevio_pid=90723 00:27:11.303 21:49:31 -- bdev/blockdev.sh@289 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:27:11.303 Process bdevio pid: 90723 00:27:11.303 21:49:31 -- bdev/blockdev.sh@290 -- # echo 'Process bdevio pid: 90723' 00:27:11.303 21:49:31 -- bdev/blockdev.sh@287 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:27:11.303 21:49:31 -- bdev/blockdev.sh@291 -- # waitforlisten 90723 00:27:11.303 21:49:31 -- common/autotest_common.sh@829 -- # '[' -z 90723 ']' 00:27:11.303 21:49:31 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:11.303 21:49:31 -- common/autotest_common.sh@834 -- # local max_retries=100 00:27:11.303 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:11.303 21:49:31 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:27:11.304 21:49:31 -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:11.304 21:49:31 -- common/autotest_common.sh@10 -- # set +x 00:27:11.304 [2024-12-06 21:49:31.568002] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:27:11.304 [2024-12-06 21:49:31.568161] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90723 ] 00:27:11.304 [2024-12-06 21:49:31.740281] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:27:11.563 [2024-12-06 21:49:31.898288] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:27:11.563 [2024-12-06 21:49:31.898401] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:11.563 [2024-12-06 21:49:31.898420] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:27:12.130 21:49:32 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:12.130 21:49:32 -- common/autotest_common.sh@862 -- # return 0 00:27:12.130 21:49:32 -- bdev/blockdev.sh@292 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:27:12.130 I/O targets: 00:27:12.130 Nvme0n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:27:12.130 00:27:12.130 00:27:12.130 CUnit - A unit testing framework for C - Version 2.1-3 00:27:12.130 http://cunit.sourceforge.net/ 00:27:12.130 00:27:12.130 00:27:12.130 Suite: bdevio tests on: Nvme0n1 00:27:12.130 Test: blockdev write read block ...passed 00:27:12.130 Test: blockdev write zeroes read block ...passed 00:27:12.130 Test: blockdev write zeroes read no split ...passed 00:27:12.130 Test: blockdev write zeroes read split ...passed 00:27:12.130 Test: blockdev write zeroes read split partial ...passed 00:27:12.130 Test: blockdev reset ...[2024-12-06 21:49:32.626411] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:06.0] resetting controller 00:27:12.389 [2024-12-06 21:49:32.630528] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:27:12.389 passed 00:27:12.389 Test: blockdev write read 8 blocks ...passed 00:27:12.389 Test: blockdev write read size > 128k ...passed 00:27:12.389 Test: blockdev write read invalid size ...passed 00:27:12.389 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:27:12.389 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:27:12.389 Test: blockdev write read max offset ...passed 00:27:12.389 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:27:12.389 Test: blockdev writev readv 8 blocks ...passed 00:27:12.389 Test: blockdev writev readv 30 x 1block ...passed 00:27:12.389 Test: blockdev writev readv block ...passed 00:27:12.389 Test: blockdev writev readv size > 128k ...passed 00:27:12.389 Test: blockdev writev readv size > 128k in two iovs ...passed 00:27:12.389 Test: blockdev comparev and writev ...[2024-12-06 21:49:32.639893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x29680d000 len:0x1000 00:27:12.389 [2024-12-06 21:49:32.639961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:27:12.389 passed 00:27:12.389 Test: blockdev nvme passthru rw ...passed 00:27:12.389 Test: blockdev nvme passthru vendor specific ...[2024-12-06 21:49:32.641240] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:27:12.389 [2024-12-06 21:49:32.641283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:27:12.389 passed 00:27:12.389 Test: blockdev nvme admin passthru ...passed 00:27:12.389 Test: blockdev copy ...passed 00:27:12.389 00:27:12.389 Run Summary: Type Total Ran Passed Failed Inactive 00:27:12.389 suites 1 1 n/a 0 0 00:27:12.389 tests 23 23 23 0 0 00:27:12.389 asserts 152 152 152 0 n/a 00:27:12.389 00:27:12.389 Elapsed time = 0.191 seconds 00:27:12.389 0 00:27:12.389 21:49:32 -- bdev/blockdev.sh@293 -- # killprocess 90723 00:27:12.389 21:49:32 -- common/autotest_common.sh@936 -- # '[' -z 90723 ']' 00:27:12.389 21:49:32 -- common/autotest_common.sh@940 -- # kill -0 90723 00:27:12.389 21:49:32 -- common/autotest_common.sh@941 -- # uname 00:27:12.389 21:49:32 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:27:12.389 21:49:32 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 90723 00:27:12.389 21:49:32 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:27:12.389 killing process with pid 90723 00:27:12.389 21:49:32 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:27:12.389 21:49:32 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 90723' 00:27:12.389 21:49:32 -- common/autotest_common.sh@955 -- # kill 90723 00:27:12.389 21:49:32 -- common/autotest_common.sh@960 -- # wait 90723 00:27:13.326 21:49:33 -- bdev/blockdev.sh@294 -- # trap - SIGINT SIGTERM EXIT 00:27:13.326 00:27:13.326 real 0m2.041s 00:27:13.326 user 0m4.794s 00:27:13.326 sys 0m0.331s 00:27:13.326 ************************************ 00:27:13.326 END TEST bdev_bounds 00:27:13.326 ************************************ 00:27:13.326 21:49:33 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:27:13.326 21:49:33 -- common/autotest_common.sh@10 -- # set +x 00:27:13.326 21:49:33 -- bdev/blockdev.sh@760 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json Nvme0n1 '' 00:27:13.326 
21:49:33 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:27:13.327 21:49:33 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:27:13.327 21:49:33 -- common/autotest_common.sh@10 -- # set +x 00:27:13.327 ************************************ 00:27:13.327 START TEST bdev_nbd 00:27:13.327 ************************************ 00:27:13.327 21:49:33 -- common/autotest_common.sh@1114 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json Nvme0n1 '' 00:27:13.327 21:49:33 -- bdev/blockdev.sh@298 -- # uname -s 00:27:13.327 21:49:33 -- bdev/blockdev.sh@298 -- # [[ Linux == Linux ]] 00:27:13.327 21:49:33 -- bdev/blockdev.sh@300 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:27:13.327 21:49:33 -- bdev/blockdev.sh@301 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:27:13.327 21:49:33 -- bdev/blockdev.sh@302 -- # bdev_all=('Nvme0n1') 00:27:13.327 21:49:33 -- bdev/blockdev.sh@302 -- # local bdev_all 00:27:13.327 21:49:33 -- bdev/blockdev.sh@303 -- # local bdev_num=1 00:27:13.327 21:49:33 -- bdev/blockdev.sh@307 -- # [[ -e /sys/module/nbd ]] 00:27:13.327 21:49:33 -- bdev/blockdev.sh@309 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:27:13.327 21:49:33 -- bdev/blockdev.sh@309 -- # local nbd_all 00:27:13.327 21:49:33 -- bdev/blockdev.sh@310 -- # bdev_num=1 00:27:13.327 21:49:33 -- bdev/blockdev.sh@312 -- # nbd_list=('/dev/nbd0') 00:27:13.327 21:49:33 -- bdev/blockdev.sh@312 -- # local nbd_list 00:27:13.327 21:49:33 -- bdev/blockdev.sh@313 -- # bdev_list=('Nvme0n1') 00:27:13.327 21:49:33 -- bdev/blockdev.sh@313 -- # local bdev_list 00:27:13.327 21:49:33 -- bdev/blockdev.sh@316 -- # nbd_pid=90777 00:27:13.327 21:49:33 -- bdev/blockdev.sh@317 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:27:13.327 21:49:33 -- bdev/blockdev.sh@318 -- # waitforlisten 90777 /var/tmp/spdk-nbd.sock 00:27:13.327 21:49:33 -- bdev/blockdev.sh@315 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:27:13.327 21:49:33 -- common/autotest_common.sh@829 -- # '[' -z 90777 ']' 00:27:13.327 21:49:33 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:27:13.327 21:49:33 -- common/autotest_common.sh@834 -- # local max_retries=100 00:27:13.327 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:27:13.327 21:49:33 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:27:13.327 21:49:33 -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:13.327 21:49:33 -- common/autotest_common.sh@10 -- # set +x 00:27:13.327 [2024-12-06 21:49:33.657654] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:27:13.327 [2024-12-06 21:49:33.657807] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:13.587 [2024-12-06 21:49:33.827856] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:13.587 [2024-12-06 21:49:33.976747] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:14.155 21:49:34 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:14.155 21:49:34 -- common/autotest_common.sh@862 -- # return 0 00:27:14.155 21:49:34 -- bdev/blockdev.sh@320 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock Nvme0n1 00:27:14.155 21:49:34 -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:27:14.155 21:49:34 -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1') 00:27:14.155 21:49:34 -- bdev/nbd_common.sh@114 -- # local bdev_list 00:27:14.155 21:49:34 -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock Nvme0n1 00:27:14.155 21:49:34 -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:27:14.155 21:49:34 -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1') 00:27:14.155 21:49:34 -- bdev/nbd_common.sh@23 -- # local bdev_list 00:27:14.155 21:49:34 -- bdev/nbd_common.sh@24 -- # local i 00:27:14.155 21:49:34 -- bdev/nbd_common.sh@25 -- # local nbd_device 00:27:14.155 21:49:34 -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:27:14.155 21:49:34 -- bdev/nbd_common.sh@27 -- # (( i < 1 )) 00:27:14.155 21:49:34 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 00:27:14.414 21:49:34 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:27:14.414 21:49:34 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:27:14.414 21:49:34 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:27:14.414 21:49:34 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:27:14.414 21:49:34 -- common/autotest_common.sh@867 -- # local i 00:27:14.414 21:49:34 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:27:14.414 21:49:34 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:27:14.414 21:49:34 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:27:14.414 21:49:34 -- common/autotest_common.sh@871 -- # break 00:27:14.414 21:49:34 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:27:14.414 21:49:34 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:27:14.414 21:49:34 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:27:14.414 1+0 records in 00:27:14.414 1+0 records out 00:27:14.414 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000566281 s, 7.2 MB/s 00:27:14.414 21:49:34 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:27:14.414 21:49:34 -- common/autotest_common.sh@884 -- # size=4096 00:27:14.414 21:49:34 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:27:14.414 21:49:34 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:27:14.414 21:49:34 -- common/autotest_common.sh@887 -- # return 0 00:27:14.414 21:49:34 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:27:14.414 21:49:34 -- bdev/nbd_common.sh@27 -- # (( i < 1 )) 00:27:14.414 21:49:34 -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:27:14.673 21:49:35 -- 
bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:27:14.673 { 00:27:14.673 "nbd_device": "/dev/nbd0", 00:27:14.673 "bdev_name": "Nvme0n1" 00:27:14.673 } 00:27:14.673 ]' 00:27:14.673 21:49:35 -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:27:14.673 21:49:35 -- bdev/nbd_common.sh@119 -- # echo '[ 00:27:14.673 { 00:27:14.673 "nbd_device": "/dev/nbd0", 00:27:14.673 "bdev_name": "Nvme0n1" 00:27:14.673 } 00:27:14.673 ]' 00:27:14.673 21:49:35 -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:27:14.673 21:49:35 -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:27:14.673 21:49:35 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:27:14.673 21:49:35 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:27:14.673 21:49:35 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:27:14.673 21:49:35 -- bdev/nbd_common.sh@51 -- # local i 00:27:14.673 21:49:35 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:27:14.673 21:49:35 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:27:14.932 21:49:35 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:27:14.932 21:49:35 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:27:14.932 21:49:35 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:27:14.932 21:49:35 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:27:14.932 21:49:35 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:27:14.932 21:49:35 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:27:14.932 21:49:35 -- bdev/nbd_common.sh@41 -- # break 00:27:14.932 21:49:35 -- bdev/nbd_common.sh@45 -- # return 0 00:27:14.932 21:49:35 -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:27:14.932 21:49:35 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:27:14.932 21:49:35 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:27:15.190 21:49:35 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:27:15.190 21:49:35 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:27:15.190 21:49:35 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:27:15.190 21:49:35 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:27:15.190 21:49:35 -- bdev/nbd_common.sh@65 -- # echo '' 00:27:15.190 21:49:35 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:27:15.190 21:49:35 -- bdev/nbd_common.sh@65 -- # true 00:27:15.190 21:49:35 -- bdev/nbd_common.sh@65 -- # count=0 00:27:15.190 21:49:35 -- bdev/nbd_common.sh@66 -- # echo 0 00:27:15.190 21:49:35 -- bdev/nbd_common.sh@122 -- # count=0 00:27:15.190 21:49:35 -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:27:15.190 21:49:35 -- bdev/nbd_common.sh@127 -- # return 0 00:27:15.190 21:49:35 -- bdev/blockdev.sh@321 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock Nvme0n1 /dev/nbd0 00:27:15.190 21:49:35 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:27:15.190 21:49:35 -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1') 00:27:15.190 21:49:35 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:27:15.190 21:49:35 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0') 00:27:15.190 21:49:35 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:27:15.190 21:49:35 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock Nvme0n1 /dev/nbd0 00:27:15.190 21:49:35 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:27:15.190 21:49:35 -- bdev/nbd_common.sh@10 
-- # bdev_list=('Nvme0n1') 00:27:15.190 21:49:35 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:27:15.190 21:49:35 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:27:15.190 21:49:35 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:27:15.190 21:49:35 -- bdev/nbd_common.sh@12 -- # local i 00:27:15.190 21:49:35 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:27:15.190 21:49:35 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:27:15.190 21:49:35 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0 00:27:15.448 /dev/nbd0 00:27:15.448 21:49:35 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:27:15.448 21:49:35 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:27:15.448 21:49:35 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:27:15.448 21:49:35 -- common/autotest_common.sh@867 -- # local i 00:27:15.448 21:49:35 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:27:15.448 21:49:35 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:27:15.448 21:49:35 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:27:15.448 21:49:35 -- common/autotest_common.sh@871 -- # break 00:27:15.448 21:49:35 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:27:15.448 21:49:35 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:27:15.448 21:49:35 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:27:15.448 1+0 records in 00:27:15.448 1+0 records out 00:27:15.448 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000644364 s, 6.4 MB/s 00:27:15.448 21:49:35 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:27:15.448 21:49:35 -- common/autotest_common.sh@884 -- # size=4096 00:27:15.448 21:49:35 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:27:15.448 21:49:35 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:27:15.448 21:49:35 -- common/autotest_common.sh@887 -- # return 0 00:27:15.448 21:49:35 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:27:15.448 21:49:35 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:27:15.448 21:49:35 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:27:15.448 21:49:35 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:27:15.448 21:49:35 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:27:15.707 21:49:36 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:27:15.707 { 00:27:15.707 "nbd_device": "/dev/nbd0", 00:27:15.707 "bdev_name": "Nvme0n1" 00:27:15.707 } 00:27:15.707 ]' 00:27:15.707 21:49:36 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:27:15.707 21:49:36 -- bdev/nbd_common.sh@64 -- # echo '[ 00:27:15.707 { 00:27:15.707 "nbd_device": "/dev/nbd0", 00:27:15.707 "bdev_name": "Nvme0n1" 00:27:15.707 } 00:27:15.707 ]' 00:27:15.707 21:49:36 -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:27:15.707 21:49:36 -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:27:15.707 21:49:36 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:27:15.707 21:49:36 -- bdev/nbd_common.sh@65 -- # count=1 00:27:15.707 21:49:36 -- bdev/nbd_common.sh@66 -- # echo 1 00:27:15.707 21:49:36 -- bdev/nbd_common.sh@95 -- # count=1 00:27:15.707 21:49:36 -- bdev/nbd_common.sh@96 -- # '[' 1 -ne 1 ']' 00:27:15.707 21:49:36 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify /dev/nbd0 write 00:27:15.707 21:49:36 -- 
bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0') 00:27:15.707 21:49:36 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:27:15.707 21:49:36 -- bdev/nbd_common.sh@71 -- # local operation=write 00:27:15.707 21:49:36 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:27:15.707 21:49:36 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:27:15.707 21:49:36 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:27:15.707 256+0 records in 00:27:15.707 256+0 records out 00:27:15.707 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00938752 s, 112 MB/s 00:27:15.707 21:49:36 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:27:15.707 21:49:36 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:27:15.965 256+0 records in 00:27:15.965 256+0 records out 00:27:15.965 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0713719 s, 14.7 MB/s 00:27:15.965 21:49:36 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify /dev/nbd0 verify 00:27:15.965 21:49:36 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0') 00:27:15.965 21:49:36 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:27:15.965 21:49:36 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:27:15.965 21:49:36 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:27:15.965 21:49:36 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:27:15.965 21:49:36 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:27:15.965 21:49:36 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:27:15.965 21:49:36 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:27:15.965 21:49:36 -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:27:15.965 21:49:36 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:27:15.965 21:49:36 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:27:15.965 21:49:36 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:27:15.965 21:49:36 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:27:15.965 21:49:36 -- bdev/nbd_common.sh@51 -- # local i 00:27:15.965 21:49:36 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:27:15.965 21:49:36 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:27:15.965 21:49:36 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:27:15.965 21:49:36 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:27:15.965 21:49:36 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:27:15.965 21:49:36 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:27:15.965 21:49:36 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:27:15.965 21:49:36 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:27:15.965 21:49:36 -- bdev/nbd_common.sh@41 -- # break 00:27:15.965 21:49:36 -- bdev/nbd_common.sh@45 -- # return 0 00:27:15.965 21:49:36 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:27:15.965 21:49:36 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:27:15.965 21:49:36 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:27:16.225 21:49:36 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:27:16.225 21:49:36 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:27:16.225 
21:49:36 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:27:16.225 21:49:36 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:27:16.225 21:49:36 -- bdev/nbd_common.sh@65 -- # echo '' 00:27:16.225 21:49:36 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:27:16.225 21:49:36 -- bdev/nbd_common.sh@65 -- # true 00:27:16.483 21:49:36 -- bdev/nbd_common.sh@65 -- # count=0 00:27:16.483 21:49:36 -- bdev/nbd_common.sh@66 -- # echo 0 00:27:16.483 21:49:36 -- bdev/nbd_common.sh@104 -- # count=0 00:27:16.483 21:49:36 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:27:16.483 21:49:36 -- bdev/nbd_common.sh@109 -- # return 0 00:27:16.483 21:49:36 -- bdev/blockdev.sh@322 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:27:16.483 21:49:36 -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:27:16.483 21:49:36 -- bdev/nbd_common.sh@132 -- # nbd_list=('/dev/nbd0') 00:27:16.483 21:49:36 -- bdev/nbd_common.sh@132 -- # local nbd_list 00:27:16.483 21:49:36 -- bdev/nbd_common.sh@133 -- # local mkfs_ret 00:27:16.483 21:49:36 -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:27:16.483 malloc_lvol_verify 00:27:16.484 21:49:36 -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:27:16.742 eed207cd-de90-4e89-8304-dfbfb1965673 00:27:16.742 21:49:37 -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:27:17.001 25bc0108-068d-4975-aff9-66092bf36680 00:27:17.001 21:49:37 -- bdev/nbd_common.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:27:17.260 /dev/nbd0 00:27:17.260 21:49:37 -- bdev/nbd_common.sh@140 -- # mkfs.ext4 /dev/nbd0 00:27:17.260 mke2fs 1.47.0 (5-Feb-2023) 00:27:17.260 00:27:17.260 Filesystem too small for a journal 00:27:17.260 Discarding device blocks: 0/1024 done 00:27:17.260 Creating filesystem with 1024 4k blocks and 1024 inodes 00:27:17.260 00:27:17.260 Allocating group tables: 0/1 done 00:27:17.260 Writing inode tables: 0/1 done 00:27:17.260 Writing superblocks and filesystem accounting information: 0/1 done 00:27:17.260 00:27:17.260 21:49:37 -- bdev/nbd_common.sh@141 -- # mkfs_ret=0 00:27:17.260 21:49:37 -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:27:17.260 21:49:37 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:27:17.260 21:49:37 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:27:17.260 21:49:37 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:27:17.260 21:49:37 -- bdev/nbd_common.sh@51 -- # local i 00:27:17.260 21:49:37 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:27:17.260 21:49:37 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:27:17.520 21:49:37 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:27:17.520 21:49:37 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:27:17.520 21:49:37 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:27:17.520 21:49:37 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:27:17.520 21:49:37 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:27:17.520 21:49:37 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:27:17.520 21:49:37 -- bdev/nbd_common.sh@41 -- # break 00:27:17.520 21:49:37 -- 
bdev/nbd_common.sh@45 -- # return 0 00:27:17.520 21:49:37 -- bdev/nbd_common.sh@143 -- # '[' 0 -ne 0 ']' 00:27:17.520 21:49:37 -- bdev/nbd_common.sh@147 -- # return 0 00:27:17.520 21:49:37 -- bdev/blockdev.sh@324 -- # killprocess 90777 00:27:17.520 21:49:37 -- common/autotest_common.sh@936 -- # '[' -z 90777 ']' 00:27:17.520 21:49:37 -- common/autotest_common.sh@940 -- # kill -0 90777 00:27:17.520 21:49:37 -- common/autotest_common.sh@941 -- # uname 00:27:17.520 21:49:37 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:27:17.520 21:49:37 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 90777 00:27:17.520 21:49:37 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:27:17.520 killing process with pid 90777 00:27:17.520 21:49:37 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:27:17.520 21:49:37 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 90777' 00:27:17.520 21:49:37 -- common/autotest_common.sh@955 -- # kill 90777 00:27:17.520 21:49:37 -- common/autotest_common.sh@960 -- # wait 90777 00:27:18.458 21:49:38 -- bdev/blockdev.sh@325 -- # trap - SIGINT SIGTERM EXIT 00:27:18.458 00:27:18.458 real 0m5.167s 00:27:18.458 user 0m7.460s 00:27:18.458 sys 0m1.103s 00:27:18.458 ************************************ 00:27:18.458 END TEST bdev_nbd 00:27:18.458 ************************************ 00:27:18.458 21:49:38 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:27:18.458 21:49:38 -- common/autotest_common.sh@10 -- # set +x 00:27:18.458 21:49:38 -- bdev/blockdev.sh@761 -- # [[ y == y ]] 00:27:18.458 21:49:38 -- bdev/blockdev.sh@762 -- # '[' nvme = nvme ']' 00:27:18.458 skipping fio tests on NVMe due to multi-ns failures. 00:27:18.458 21:49:38 -- bdev/blockdev.sh@764 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 00:27:18.458 21:49:38 -- bdev/blockdev.sh@773 -- # trap cleanup SIGINT SIGTERM EXIT 00:27:18.458 21:49:38 -- bdev/blockdev.sh@775 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:27:18.458 21:49:38 -- common/autotest_common.sh@1087 -- # '[' 16 -le 1 ']' 00:27:18.458 21:49:38 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:27:18.458 21:49:38 -- common/autotest_common.sh@10 -- # set +x 00:27:18.458 ************************************ 00:27:18.458 START TEST bdev_verify 00:27:18.458 ************************************ 00:27:18.458 21:49:38 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:27:18.458 [2024-12-06 21:49:38.871938] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:27:18.458 [2024-12-06 21:49:38.872105] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90954 ] 00:27:18.717 [2024-12-06 21:49:39.040775] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:27:18.717 [2024-12-06 21:49:39.202889] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:18.717 [2024-12-06 21:49:39.202903] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:27:19.286 Running I/O for 5 seconds... 
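The nbd_dd_data_verify steps traced above reduce to a write-then-compare round trip through the kernel nbd device. A condensed sketch of that pattern, using the same commands, sizes, and paths as the trace:
# write 1 MiB of random data through /dev/nbd0, then read it back and compare
tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest
dd if=/dev/urandom of="$tmp_file" bs=4096 count=256            # reference data
dd if="$tmp_file" of=/dev/nbd0 bs=4096 count=256 oflag=direct  # push to the bdev
cmp -b -n 1M "$tmp_file" /dev/nbd0                             # byte-wise verify
rm "$tmp_file"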
00:27:24.562 00:27:24.562 Latency(us) 00:27:24.562 [2024-12-06T21:49:45.059Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:24.562 [2024-12-06T21:49:45.059Z] Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:27:24.562 Verification LBA range: start 0x0 length 0xa0000 00:27:24.562 Nvme0n1 : 5.01 18121.24 70.79 0.00 0.00 7031.08 487.80 13583.83 00:27:24.562 [2024-12-06T21:49:45.059Z] Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:27:24.562 Verification LBA range: start 0xa0000 length 0xa0000 00:27:24.562 Nvme0n1 : 5.01 18145.91 70.88 0.00 0.00 7021.89 385.40 14477.50 00:27:24.562 [2024-12-06T21:49:45.059Z] =================================================================================================================== 00:27:24.562 [2024-12-06T21:49:45.059Z] Total : 36267.15 141.67 0.00 0.00 7026.48 385.40 14477.50 00:27:31.182 00:27:31.182 real 0m12.301s 00:27:31.182 user 0m23.485s 00:27:31.182 sys 0m0.279s 00:27:31.182 21:49:51 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:27:31.182 21:49:51 -- common/autotest_common.sh@10 -- # set +x 00:27:31.182 ************************************ 00:27:31.182 END TEST bdev_verify 00:27:31.182 ************************************ 00:27:31.182 21:49:51 -- bdev/blockdev.sh@776 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:27:31.182 21:49:51 -- common/autotest_common.sh@1087 -- # '[' 16 -le 1 ']' 00:27:31.182 21:49:51 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:27:31.182 21:49:51 -- common/autotest_common.sh@10 -- # set +x 00:27:31.182 ************************************ 00:27:31.182 START TEST bdev_verify_big_io 00:27:31.182 ************************************ 00:27:31.182 21:49:51 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:27:31.182 [2024-12-06 21:49:51.227672] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:27:31.182 [2024-12-06 21:49:51.227839] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91097 ] 00:27:31.182 [2024-12-06 21:49:51.397110] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:27:31.182 [2024-12-06 21:49:51.552766] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:31.182 [2024-12-06 21:49:51.552778] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:27:31.441 Running I/O for 5 seconds... 
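In the table above, MiB/s is simply IOPS times the 4096-byte I/O size, and the two per-core rows sum to the Total row (18121.24 + 18145.91 = 36267.15). A quick arithmetic check of the core 0 row:
# 18121.24 IOPS at 4 KiB per I/O is 70.79 MiB/s, matching the table
echo '18121.24 * 4096 / 1048576' | bc -l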
00:27:36.710 00:27:36.710 Latency(us) 00:27:36.710 [2024-12-06T21:49:57.207Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:36.710 [2024-12-06T21:49:57.207Z] Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:27:36.710 Verification LBA range: start 0x0 length 0xa000 00:27:36.710 Nvme0n1 : 5.04 1958.42 122.40 0.00 0.00 64471.13 547.37 101044.60 00:27:36.710 [2024-12-06T21:49:57.207Z] Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:27:36.710 Verification LBA range: start 0xa000 length 0xa000 00:27:36.710 Nvme0n1 : 5.05 1900.98 118.81 0.00 0.00 66380.27 703.77 97708.22 00:27:36.710 [2024-12-06T21:49:57.208Z] =================================================================================================================== 00:27:36.711 [2024-12-06T21:49:57.208Z] Total : 3859.40 241.21 0.00 0.00 65411.68 547.37 101044.60 00:27:38.087 00:27:38.087 real 0m7.051s 00:27:38.087 user 0m13.045s 00:27:38.087 sys 0m0.228s 00:27:38.087 21:49:58 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:27:38.087 21:49:58 -- common/autotest_common.sh@10 -- # set +x 00:27:38.087 ************************************ 00:27:38.087 END TEST bdev_verify_big_io 00:27:38.087 ************************************ 00:27:38.087 21:49:58 -- bdev/blockdev.sh@777 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:27:38.087 21:49:58 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:27:38.087 21:49:58 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:27:38.087 21:49:58 -- common/autotest_common.sh@10 -- # set +x 00:27:38.087 ************************************ 00:27:38.087 START TEST bdev_write_zeroes 00:27:38.087 ************************************ 00:27:38.087 21:49:58 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:27:38.087 [2024-12-06 21:49:58.330049] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:27:38.087 [2024-12-06 21:49:58.330222] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91185 ] 00:27:38.087 [2024-12-06 21:49:58.499172] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:38.347 [2024-12-06 21:49:58.654003] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:38.605 Running I/O for 1 seconds... 
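Both verify runs use the same bdevperf command line, differing only in -o. Annotated shape of that invocation, with flag meanings read off the traced runs; the exact effect of -C is an assumption, inferred from the one-job-per-core rows in the results:
# --json  bdev config to load (attaches Nvme0 as in this job's bdev.json)
# -q 128  queue depth; -o I/O size in bytes (4096 plain, 65536 for big_io)
# -w verify  write each block, read it back, compare
# -t 5    run time in seconds; -m 0x3 core mask for two reactors
# -C      assumed here to let each reactor core run its own job
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
    --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
    -q 128 -o 65536 -w verify -t 5 -C -m 0x3 ''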
00:27:39.539 00:27:39.539 Latency(us) 00:27:39.539 [2024-12-06T21:50:00.036Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:39.539 [2024-12-06T21:50:00.036Z] Job: Nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:27:39.539 Nvme0n1 : 1.00 63206.70 246.90 0.00 0.00 2020.17 889.95 6791.91 00:27:39.540 [2024-12-06T21:50:00.037Z] =================================================================================================================== 00:27:39.540 [2024-12-06T21:50:00.037Z] Total : 63206.70 246.90 0.00 0.00 2020.17 889.95 6791.91 00:27:40.477 00:27:40.477 real 0m2.632s 00:27:40.477 user 0m2.325s 00:27:40.477 sys 0m0.206s 00:27:40.477 21:50:00 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:27:40.477 21:50:00 -- common/autotest_common.sh@10 -- # set +x 00:27:40.477 ************************************ 00:27:40.477 END TEST bdev_write_zeroes 00:27:40.477 ************************************ 00:27:40.477 21:50:00 -- bdev/blockdev.sh@780 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:27:40.477 21:50:00 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:27:40.477 21:50:00 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:27:40.477 21:50:00 -- common/autotest_common.sh@10 -- # set +x 00:27:40.477 ************************************ 00:27:40.477 START TEST bdev_json_nonenclosed 00:27:40.477 ************************************ 00:27:40.477 21:50:00 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:27:40.736 [2024-12-06 21:50:01.014614] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:27:40.736 [2024-12-06 21:50:01.014779] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91234 ] 00:27:40.736 [2024-12-06 21:50:01.183292] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:40.995 [2024-12-06 21:50:01.335144] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:40.995 [2024-12-06 21:50:01.335330] json_config.c: 595:spdk_subsystem_init_from_json_config: *ERROR*: Invalid JSON configuration: not enclosed in {}. 
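bdev_json_nonenclosed is a negative test: nonenclosed.json is deliberately malformed, the 595 error above is the expected output, and the test passes only if the app rejects the config and exits non-zero. The pattern, sketched without the harness's xtrace and timing plumbing:
# a malformed --json config must make bdevperf exit non-zero
if /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
       --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json \
       -q 128 -o 4096 -w write_zeroes -t 1 ''; then
    echo 'FAIL: config not enclosed in {} was accepted' >&2
    exit 1
fi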
00:27:40.995 [2024-12-06 21:50:01.335352] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:27:41.254 00:27:41.254 real 0m0.710s 00:27:41.254 user 0m0.495s 00:27:41.254 sys 0m0.114s 00:27:41.254 21:50:01 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:27:41.254 21:50:01 -- common/autotest_common.sh@10 -- # set +x 00:27:41.254 ************************************ 00:27:41.254 END TEST bdev_json_nonenclosed 00:27:41.254 ************************************ 00:27:41.254 21:50:01 -- bdev/blockdev.sh@783 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:27:41.254 21:50:01 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:27:41.254 21:50:01 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:27:41.254 21:50:01 -- common/autotest_common.sh@10 -- # set +x 00:27:41.254 ************************************ 00:27:41.254 START TEST bdev_json_nonarray 00:27:41.254 ************************************ 00:27:41.254 21:50:01 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:27:41.513 [2024-12-06 21:50:01.758043] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:27:41.513 [2024-12-06 21:50:01.758182] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91264 ] 00:27:41.513 [2024-12-06 21:50:01.911124] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:41.773 [2024-12-06 21:50:02.058048] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:41.773 [2024-12-06 21:50:02.058238] json_config.c: 601:spdk_subsystem_init_from_json_config: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
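Taken together, errors 595 (not enclosed in {}) and 601 ('subsystems' should be an array) describe the shape spdk_subsystem_init_from_json_config accepts: a top-level object whose subsystems member is an array of subsystem entries. A reconstruction of that shape, filled in with the bdev entry this job loads later (the actual bdev.json file is not printed in this log):
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        { "method": "bdev_nvme_attach_controller",
          "params": { "trtype": "PCIe", "name": "Nvme0", "traddr": "0000:00:06.0" } }
      ]
    }
  ]
}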
00:27:41.773 [2024-12-06 21:50:02.058262] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:27:42.032 00:27:42.032 real 0m0.682s 00:27:42.032 user 0m0.484s 00:27:42.032 sys 0m0.097s 00:27:42.032 21:50:02 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:27:42.032 21:50:02 -- common/autotest_common.sh@10 -- # set +x 00:27:42.032 ************************************ 00:27:42.032 END TEST bdev_json_nonarray 00:27:42.032 ************************************ 00:27:42.032 21:50:02 -- bdev/blockdev.sh@785 -- # [[ nvme == bdev ]] 00:27:42.032 21:50:02 -- bdev/blockdev.sh@792 -- # [[ nvme == gpt ]] 00:27:42.032 21:50:02 -- bdev/blockdev.sh@796 -- # [[ nvme == crypto_sw ]] 00:27:42.032 21:50:02 -- bdev/blockdev.sh@808 -- # trap - SIGINT SIGTERM EXIT 00:27:42.032 21:50:02 -- bdev/blockdev.sh@809 -- # cleanup 00:27:42.032 21:50:02 -- bdev/blockdev.sh@21 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:27:42.032 21:50:02 -- bdev/blockdev.sh@22 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:27:42.032 21:50:02 -- bdev/blockdev.sh@24 -- # [[ nvme == rbd ]] 00:27:42.032 21:50:02 -- bdev/blockdev.sh@28 -- # [[ nvme == daos ]] 00:27:42.032 21:50:02 -- bdev/blockdev.sh@32 -- # [[ nvme = \g\p\t ]] 00:27:42.032 21:50:02 -- bdev/blockdev.sh@38 -- # [[ nvme == xnvme ]] 00:27:42.032 00:27:42.032 real 0m35.691s 00:27:42.032 user 0m56.658s 00:27:42.032 sys 0m3.327s 00:27:42.032 21:50:02 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:27:42.032 21:50:02 -- common/autotest_common.sh@10 -- # set +x 00:27:42.032 ************************************ 00:27:42.032 END TEST blockdev_nvme 00:27:42.032 ************************************ 00:27:42.032 21:50:02 -- spdk/autotest.sh@206 -- # uname -s 00:27:42.032 21:50:02 -- spdk/autotest.sh@206 -- # [[ Linux == Linux ]] 00:27:42.032 21:50:02 -- spdk/autotest.sh@207 -- # run_test blockdev_nvme_gpt /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt 00:27:42.032 21:50:02 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:27:42.032 21:50:02 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:27:42.032 21:50:02 -- common/autotest_common.sh@10 -- # set +x 00:27:42.032 ************************************ 00:27:42.032 START TEST blockdev_nvme_gpt 00:27:42.032 ************************************ 00:27:42.032 21:50:02 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt 00:27:42.293 * Looking for test storage... 
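Every START TEST / END TEST banner pair above, along with its real/user/sys triplet, is produced by the run_test wrapper from autotest_common.sh. Roughly, with the traced xtrace_disable/enable plumbing omitted:
run_test() {   # condensed sketch, not the verbatim helper
    local test_name=$1; shift
    echo '************************************'
    echo "START TEST $test_name"
    echo '************************************'
    time "$@"        # emits the real/user/sys lines seen after each test
    echo '************************************'
    echo "END TEST $test_name"
    echo '************************************'
}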
00:27:42.293 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:27:42.293 21:50:02 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:27:42.293 21:50:02 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:27:42.293 21:50:02 -- common/autotest_common.sh@1690 -- # lcov --version 00:27:42.293 21:50:02 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:27:42.293 21:50:02 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:27:42.293 21:50:02 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:27:42.293 21:50:02 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:27:42.293 21:50:02 -- scripts/common.sh@335 -- # IFS=.-: 00:27:42.293 21:50:02 -- scripts/common.sh@335 -- # read -ra ver1 00:27:42.293 21:50:02 -- scripts/common.sh@336 -- # IFS=.-: 00:27:42.293 21:50:02 -- scripts/common.sh@336 -- # read -ra ver2 00:27:42.293 21:50:02 -- scripts/common.sh@337 -- # local 'op=<' 00:27:42.293 21:50:02 -- scripts/common.sh@339 -- # ver1_l=2 00:27:42.293 21:50:02 -- scripts/common.sh@340 -- # ver2_l=1 00:27:42.293 21:50:02 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:27:42.293 21:50:02 -- scripts/common.sh@343 -- # case "$op" in 00:27:42.293 21:50:02 -- scripts/common.sh@344 -- # : 1 00:27:42.293 21:50:02 -- scripts/common.sh@363 -- # (( v = 0 )) 00:27:42.293 21:50:02 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:27:42.293 21:50:02 -- scripts/common.sh@364 -- # decimal 1 00:27:42.293 21:50:02 -- scripts/common.sh@352 -- # local d=1 00:27:42.293 21:50:02 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:42.293 21:50:02 -- scripts/common.sh@354 -- # echo 1 00:27:42.293 21:50:02 -- scripts/common.sh@364 -- # ver1[v]=1 00:27:42.293 21:50:02 -- scripts/common.sh@365 -- # decimal 2 00:27:42.293 21:50:02 -- scripts/common.sh@352 -- # local d=2 00:27:42.293 21:50:02 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:42.293 21:50:02 -- scripts/common.sh@354 -- # echo 2 00:27:42.293 21:50:02 -- scripts/common.sh@365 -- # ver2[v]=2 00:27:42.293 21:50:02 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:27:42.293 21:50:02 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:27:42.293 21:50:02 -- scripts/common.sh@367 -- # return 0 00:27:42.293 21:50:02 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:42.293 21:50:02 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:27:42.293 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:42.293 --rc genhtml_branch_coverage=1 00:27:42.293 --rc genhtml_function_coverage=1 00:27:42.293 --rc genhtml_legend=1 00:27:42.293 --rc geninfo_all_blocks=1 00:27:42.293 --rc geninfo_unexecuted_blocks=1 00:27:42.293 00:27:42.293 ' 00:27:42.293 21:50:02 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:27:42.293 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:42.293 --rc genhtml_branch_coverage=1 00:27:42.293 --rc genhtml_function_coverage=1 00:27:42.293 --rc genhtml_legend=1 00:27:42.293 --rc geninfo_all_blocks=1 00:27:42.293 --rc geninfo_unexecuted_blocks=1 00:27:42.293 00:27:42.293 ' 00:27:42.293 21:50:02 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:27:42.293 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:42.293 --rc genhtml_branch_coverage=1 00:27:42.293 --rc genhtml_function_coverage=1 00:27:42.293 --rc genhtml_legend=1 00:27:42.293 --rc geninfo_all_blocks=1 00:27:42.293 --rc geninfo_unexecuted_blocks=1 00:27:42.293 00:27:42.293 ' 00:27:42.293 21:50:02 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:27:42.293 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:42.293 --rc genhtml_branch_coverage=1 00:27:42.293 --rc genhtml_function_coverage=1 00:27:42.293 --rc genhtml_legend=1 00:27:42.293 --rc geninfo_all_blocks=1 00:27:42.293 --rc geninfo_unexecuted_blocks=1 00:27:42.293 00:27:42.293 ' 00:27:42.293 21:50:02 -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:27:42.293 21:50:02 -- bdev/nbd_common.sh@6 -- # set -e 00:27:42.293 21:50:02 -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:27:42.293 21:50:02 -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:27:42.293 21:50:02 -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:27:42.293 21:50:02 -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:27:42.293 21:50:02 -- bdev/blockdev.sh@18 -- # : 00:27:42.293 21:50:02 -- bdev/blockdev.sh@668 -- # QOS_DEV_1=Malloc_0 00:27:42.293 21:50:02 -- bdev/blockdev.sh@669 -- # QOS_DEV_2=Null_1 00:27:42.293 21:50:02 -- bdev/blockdev.sh@670 -- # QOS_RUN_TIME=5 00:27:42.293 21:50:02 -- bdev/blockdev.sh@672 -- # uname -s 00:27:42.293 21:50:02 -- bdev/blockdev.sh@672 -- # '[' Linux = Linux ']' 00:27:42.293 21:50:02 -- bdev/blockdev.sh@674 -- # PRE_RESERVED_MEM=0 00:27:42.293 21:50:02 -- bdev/blockdev.sh@680 -- # test_type=gpt 00:27:42.293 21:50:02 -- bdev/blockdev.sh@681 -- # crypto_device= 00:27:42.293 21:50:02 -- bdev/blockdev.sh@682 -- # dek= 00:27:42.293 21:50:02 -- bdev/blockdev.sh@683 -- # env_ctx= 00:27:42.294 21:50:02 -- bdev/blockdev.sh@684 -- # wait_for_rpc= 00:27:42.294 21:50:02 -- bdev/blockdev.sh@685 -- # '[' -n '' ']' 00:27:42.294 21:50:02 -- bdev/blockdev.sh@688 -- # [[ gpt == bdev ]] 00:27:42.294 21:50:02 -- bdev/blockdev.sh@688 -- # [[ gpt == crypto_* ]] 00:27:42.294 21:50:02 -- bdev/blockdev.sh@691 -- # start_spdk_tgt 00:27:42.294 21:50:02 -- bdev/blockdev.sh@45 -- # spdk_tgt_pid=91350 00:27:42.294 21:50:02 -- bdev/blockdev.sh@46 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:27:42.294 21:50:02 -- bdev/blockdev.sh@47 -- # waitforlisten 91350 00:27:42.294 21:50:02 -- common/autotest_common.sh@829 -- # '[' -z 91350 ']' 00:27:42.294 21:50:02 -- bdev/blockdev.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:27:42.294 21:50:02 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:42.294 21:50:02 -- common/autotest_common.sh@834 -- # local max_retries=100 00:27:42.294 21:50:02 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:42.294 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:42.294 21:50:02 -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:42.294 21:50:02 -- common/autotest_common.sh@10 -- # set +x 00:27:42.294 [2024-12-06 21:50:02.725973] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
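The lt 1.15 2 / cmp_versions sequence traced above is a field-wise version compare in pure bash: both version strings are split on ., - and :, then compared numerically field by field. Condensed from the traced logic, with the original's gt/eq bookkeeping and non-numeric guard simplified away:
lt() {   # usage: lt 1.15 2  ->  success when $1 is an older version than $2
    local -a ver1 ver2
    IFS='.-:' read -ra ver1 <<< "$1"
    IFS='.-:' read -ra ver2 <<< "$2"
    local v n=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    for (( v = 0; v < n; v++ )); do
        (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
        (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
    done
    return 1   # equal versions are not 'less than'
}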
00:27:42.294 [2024-12-06 21:50:02.726142] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91350 ] 00:27:42.553 [2024-12-06 21:50:02.894694] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:42.553 [2024-12-06 21:50:03.040495] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:27:42.553 [2024-12-06 21:50:03.040727] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:43.934 21:50:04 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:43.934 21:50:04 -- common/autotest_common.sh@862 -- # return 0 00:27:43.934 21:50:04 -- bdev/blockdev.sh@692 -- # case "$test_type" in 00:27:43.934 21:50:04 -- bdev/blockdev.sh@700 -- # setup_gpt_conf 00:27:43.934 21:50:04 -- bdev/blockdev.sh@102 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:27:44.193 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15,mount@vda:vda16, so not binding PCI dev 00:27:44.193 Waiting for block devices as requested 00:27:44.193 0000:00:06.0 (1b36 0010): uio_pci_generic -> nvme 00:27:44.452 21:50:04 -- bdev/blockdev.sh@103 -- # get_zoned_devs 00:27:44.452 21:50:04 -- common/autotest_common.sh@1664 -- # zoned_devs=() 00:27:44.452 21:50:04 -- common/autotest_common.sh@1664 -- # local -gA zoned_devs 00:27:44.452 21:50:04 -- common/autotest_common.sh@1665 -- # local nvme bdf 00:27:44.452 21:50:04 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:27:44.452 21:50:04 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme0n1 00:27:44.452 21:50:04 -- common/autotest_common.sh@1657 -- # local device=nvme0n1 00:27:44.452 21:50:04 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:27:44.452 21:50:04 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:27:44.452 21:50:04 -- bdev/blockdev.sh@105 -- # nvme_devs=('/sys/bus/pci/drivers/nvme/0000:00:06.0/nvme/nvme0/nvme0n1') 00:27:44.452 21:50:04 -- bdev/blockdev.sh@105 -- # local nvme_devs nvme_dev 00:27:44.452 21:50:04 -- bdev/blockdev.sh@106 -- # gpt_nvme= 00:27:44.452 21:50:04 -- bdev/blockdev.sh@108 -- # for nvme_dev in "${nvme_devs[@]}" 00:27:44.452 21:50:04 -- bdev/blockdev.sh@109 -- # [[ -z '' ]] 00:27:44.452 21:50:04 -- bdev/blockdev.sh@110 -- # dev=/dev/nvme0n1 00:27:44.452 21:50:04 -- bdev/blockdev.sh@111 -- # parted /dev/nvme0n1 -ms print 00:27:44.452 21:50:04 -- bdev/blockdev.sh@111 -- # pt='Error: /dev/nvme0n1: unrecognised disk label 00:27:44.452 BYT; 00:27:44.452 /dev/nvme0n1:5369MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:;' 00:27:44.452 21:50:04 -- bdev/blockdev.sh@112 -- # [[ Error: /dev/nvme0n1: unrecognised disk label 00:27:44.452 BYT; 00:27:44.452 /dev/nvme0n1:5369MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:; == *\/\d\e\v\/\n\v\m\e\0\n\1\:\ \u\n\r\e\c\o\g\n\i\s\e\d\ \d\i\s\k\ \l\a\b\e\l* ]] 00:27:44.452 21:50:04 -- bdev/blockdev.sh@113 -- # gpt_nvme=/dev/nvme0n1 00:27:44.452 21:50:04 -- bdev/blockdev.sh@114 -- # break 00:27:44.452 21:50:04 -- bdev/blockdev.sh@117 -- # [[ -n /dev/nvme0n1 ]] 00:27:44.452 21:50:04 -- bdev/blockdev.sh@122 -- # typeset -g g_unique_partguid=6f89f330-603b-4116-ac73-2ca8eae53030 00:27:44.452 21:50:04 -- bdev/blockdev.sh@123 -- # typeset -g g_unique_partguid_old=abf1734f-66e5-4c0f-aa29-4021d4d307df 00:27:44.452 21:50:04 -- bdev/blockdev.sh@126 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart 
SPDK_TEST_first 0% 50% mkpart SPDK_TEST_second 50% 100% 00:27:44.452 21:50:04 -- bdev/blockdev.sh@128 -- # get_spdk_gpt_old 00:27:44.452 21:50:04 -- scripts/common.sh@410 -- # local spdk_guid 00:27:44.452 21:50:04 -- scripts/common.sh@412 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:27:44.452 21:50:04 -- scripts/common.sh@414 -- # GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:27:44.453 21:50:04 -- scripts/common.sh@415 -- # IFS='()' 00:27:44.453 21:50:04 -- scripts/common.sh@415 -- # read -r _ spdk_guid _ 00:27:44.453 21:50:04 -- scripts/common.sh@415 -- # grep -w SPDK_GPT_PART_TYPE_GUID_OLD /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:27:44.453 21:50:04 -- scripts/common.sh@416 -- # spdk_guid=0x7c5222bd-0x8f5d-0x4087-0x9c00-0xbf9843c7b58c 00:27:44.453 21:50:04 -- scripts/common.sh@416 -- # spdk_guid=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:27:44.453 21:50:04 -- scripts/common.sh@418 -- # echo 7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:27:44.453 21:50:04 -- bdev/blockdev.sh@128 -- # SPDK_GPT_OLD_GUID=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:27:44.453 21:50:04 -- bdev/blockdev.sh@129 -- # get_spdk_gpt 00:27:44.453 21:50:04 -- scripts/common.sh@422 -- # local spdk_guid 00:27:44.453 21:50:04 -- scripts/common.sh@424 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:27:44.453 21:50:04 -- scripts/common.sh@426 -- # GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:27:44.453 21:50:04 -- scripts/common.sh@427 -- # IFS='()' 00:27:44.453 21:50:04 -- scripts/common.sh@427 -- # read -r _ spdk_guid _ 00:27:44.453 21:50:04 -- scripts/common.sh@427 -- # grep -w SPDK_GPT_PART_TYPE_GUID /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:27:44.453 21:50:04 -- scripts/common.sh@428 -- # spdk_guid=0x6527994e-0x2c5a-0x4eec-0x9613-0x8f5944074e8b 00:27:44.453 21:50:04 -- scripts/common.sh@428 -- # spdk_guid=6527994e-2c5a-4eec-9613-8f5944074e8b 00:27:44.453 21:50:04 -- scripts/common.sh@430 -- # echo 6527994e-2c5a-4eec-9613-8f5944074e8b 00:27:44.453 21:50:04 -- bdev/blockdev.sh@129 -- # SPDK_GPT_GUID=6527994e-2c5a-4eec-9613-8f5944074e8b 00:27:44.453 21:50:04 -- bdev/blockdev.sh@130 -- # sgdisk -t 1:6527994e-2c5a-4eec-9613-8f5944074e8b -u 1:6f89f330-603b-4116-ac73-2ca8eae53030 /dev/nvme0n1 00:27:45.463 The operation has completed successfully. 00:27:45.463 21:50:05 -- bdev/blockdev.sh@131 -- # sgdisk -t 2:7c5222bd-8f5d-4087-9c00-bf9843c7b58c -u 2:abf1734f-66e5-4c0f-aa29-4021d4d307df /dev/nvme0n1 00:27:46.838 The operation has completed successfully. 
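That sequence is how the suite hands half of the NVMe disk to each test partition: parted writes a fresh GPT with two 50% partitions, and sgdisk then retags them with the SPDK partition GUIDs parsed out of module/bdev/gpt/gpt.h above, so the gpt vbdev module will claim them as Nvme0n1p1/p2. The same three commands, unwrapped from the trace:
parted -s /dev/nvme0n1 mklabel gpt \
    mkpart SPDK_TEST_first 0% 50% \
    mkpart SPDK_TEST_second 50% 100%
# -t sets the partition type GUID, -u the unique partition GUID
sgdisk -t 1:6527994e-2c5a-4eec-9613-8f5944074e8b \
       -u 1:6f89f330-603b-4116-ac73-2ca8eae53030 /dev/nvme0n1   # SPDK_GPT_GUID
sgdisk -t 2:7c5222bd-8f5d-4087-9c00-bf9843c7b58c \
       -u 2:abf1734f-66e5-4c0f-aa29-4021d4d307df /dev/nvme0n1   # SPDK_GPT_OLD_GUID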
00:27:46.838 21:50:06 -- bdev/blockdev.sh@132 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:27:46.838 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15,mount@vda:vda16, so not binding PCI dev 00:27:47.095 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:27:47.660 21:50:07 -- bdev/blockdev.sh@133 -- # rpc_cmd bdev_get_bdevs 00:27:47.660 21:50:07 -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:47.660 21:50:07 -- common/autotest_common.sh@10 -- # set +x 00:27:47.660 [] 00:27:47.660 21:50:07 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:47.660 21:50:07 -- bdev/blockdev.sh@134 -- # setup_nvme_conf 00:27:47.660 21:50:07 -- bdev/blockdev.sh@79 -- # local json 00:27:47.660 21:50:07 -- bdev/blockdev.sh@80 -- # mapfile -t json 00:27:47.660 21:50:07 -- bdev/blockdev.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:27:47.660 21:50:07 -- bdev/blockdev.sh@81 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:06.0" } } ] }'\''' 00:27:47.660 21:50:07 -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:47.660 21:50:07 -- common/autotest_common.sh@10 -- # set +x 00:27:47.660 21:50:08 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:47.660 21:50:08 -- bdev/blockdev.sh@735 -- # rpc_cmd bdev_wait_for_examine 00:27:47.660 21:50:08 -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:47.660 21:50:08 -- common/autotest_common.sh@10 -- # set +x 00:27:47.660 21:50:08 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:47.660 21:50:08 -- bdev/blockdev.sh@738 -- # cat 00:27:47.660 21:50:08 -- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n accel 00:27:47.660 21:50:08 -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:47.660 21:50:08 -- common/autotest_common.sh@10 -- # set +x 00:27:47.660 21:50:08 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:47.660 21:50:08 -- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n bdev 00:27:47.660 21:50:08 -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:47.660 21:50:08 -- common/autotest_common.sh@10 -- # set +x 00:27:47.660 21:50:08 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:47.660 21:50:08 -- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n iobuf 00:27:47.660 21:50:08 -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:47.660 21:50:08 -- common/autotest_common.sh@10 -- # set +x 00:27:47.660 21:50:08 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:47.660 21:50:08 -- bdev/blockdev.sh@746 -- # mapfile -t bdevs 00:27:47.660 21:50:08 -- bdev/blockdev.sh@746 -- # rpc_cmd bdev_get_bdevs 00:27:47.660 21:50:08 -- bdev/blockdev.sh@746 -- # jq -r '.[] | select(.claimed == false)' 00:27:47.660 21:50:08 -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:47.660 21:50:08 -- common/autotest_common.sh@10 -- # set +x 00:27:47.660 21:50:08 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:47.660 21:50:08 -- bdev/blockdev.sh@747 -- # mapfile -t bdevs_name 00:27:47.660 21:50:08 -- bdev/blockdev.sh@747 -- # printf '%s\n' '{' ' "name": "Nvme0n1p1",' ' "aliases": [' ' "6f89f330-603b-4116-ac73-2ca8eae53030"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' "num_blocks": 655104,' ' "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' 
"claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme0n1",' ' "offset_blocks": 256,' ' "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b",' ' "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "partition_name": "SPDK_TEST_first"' ' }' ' }' '}' '{' ' "name": "Nvme0n1p2",' ' "aliases": [' ' "abf1734f-66e5-4c0f-aa29-4021d4d307df"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' "num_blocks": 655103,' ' "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme0n1",' ' "offset_blocks": 655360,' ' "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c",' ' "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "partition_name": "SPDK_TEST_second"' ' }' ' }' '}' 00:27:47.660 21:50:08 -- bdev/blockdev.sh@747 -- # jq -r .name 00:27:47.919 21:50:08 -- bdev/blockdev.sh@748 -- # bdev_list=("${bdevs_name[@]}") 00:27:47.919 21:50:08 -- bdev/blockdev.sh@750 -- # hello_world_bdev=Nvme0n1p1 00:27:47.919 21:50:08 -- bdev/blockdev.sh@751 -- # trap - SIGINT SIGTERM EXIT 00:27:47.919 21:50:08 -- bdev/blockdev.sh@752 -- # killprocess 91350 00:27:47.919 21:50:08 -- common/autotest_common.sh@936 -- # '[' -z 91350 ']' 00:27:47.919 21:50:08 -- common/autotest_common.sh@940 -- # kill -0 91350 00:27:47.919 21:50:08 -- common/autotest_common.sh@941 -- # uname 00:27:47.919 21:50:08 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:27:47.919 21:50:08 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 91350 00:27:47.919 21:50:08 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:27:47.919 killing process with pid 91350 00:27:47.919 21:50:08 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:27:47.919 21:50:08 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 91350' 00:27:47.919 21:50:08 -- common/autotest_common.sh@955 -- # kill 91350 00:27:47.919 21:50:08 -- common/autotest_common.sh@960 -- # wait 91350 00:27:49.821 21:50:09 -- bdev/blockdev.sh@756 -- # trap cleanup SIGINT SIGTERM EXIT 00:27:49.821 21:50:09 -- bdev/blockdev.sh@758 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1p1 '' 00:27:49.821 21:50:09 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:27:49.821 21:50:09 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:27:49.821 21:50:09 -- common/autotest_common.sh@10 -- # set +x 00:27:49.821 ************************************ 00:27:49.821 START TEST bdev_hello_world 00:27:49.821 ************************************ 00:27:49.821 21:50:09 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b 
Nvme0n1p1 '' 00:27:49.821 [2024-12-06 21:50:09.952297] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:27:49.821 [2024-12-06 21:50:09.952457] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91747 ] 00:27:49.821 [2024-12-06 21:50:10.119963] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:49.821 [2024-12-06 21:50:10.271391] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:50.388 [2024-12-06 21:50:10.610293] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:27:50.388 [2024-12-06 21:50:10.610358] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1p1 00:27:50.388 [2024-12-06 21:50:10.610396] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:27:50.388 [2024-12-06 21:50:10.612979] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:27:50.388 [2024-12-06 21:50:10.613571] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:27:50.388 [2024-12-06 21:50:10.613612] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:27:50.388 [2024-12-06 21:50:10.613865] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 00:27:50.388 00:27:50.388 [2024-12-06 21:50:10.613906] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:27:51.324 00:27:51.324 real 0m1.628s 00:27:51.324 user 0m1.319s 00:27:51.324 sys 0m0.208s 00:27:51.324 21:50:11 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:27:51.324 21:50:11 -- common/autotest_common.sh@10 -- # set +x 00:27:51.324 ************************************ 00:27:51.324 END TEST bdev_hello_world 00:27:51.324 ************************************ 00:27:51.324 21:50:11 -- bdev/blockdev.sh@759 -- # run_test bdev_bounds bdev_bounds '' 00:27:51.324 21:50:11 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:27:51.324 21:50:11 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:27:51.324 21:50:11 -- common/autotest_common.sh@10 -- # set +x 00:27:51.324 ************************************ 00:27:51.324 START TEST bdev_bounds 00:27:51.324 ************************************ 00:27:51.324 21:50:11 -- common/autotest_common.sh@1114 -- # bdev_bounds '' 00:27:51.324 21:50:11 -- bdev/blockdev.sh@288 -- # bdevio_pid=91783 00:27:51.324 21:50:11 -- bdev/blockdev.sh@289 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:27:51.324 21:50:11 -- bdev/blockdev.sh@290 -- # echo 'Process bdevio pid: 91783' 00:27:51.324 Process bdevio pid: 91783 00:27:51.324 21:50:11 -- bdev/blockdev.sh@287 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:27:51.324 21:50:11 -- bdev/blockdev.sh@291 -- # waitforlisten 91783 00:27:51.324 21:50:11 -- common/autotest_common.sh@829 -- # '[' -z 91783 ']' 00:27:51.324 21:50:11 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:51.324 21:50:11 -- common/autotest_common.sh@834 -- # local max_retries=100 00:27:51.324 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:51.324 21:50:11 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:27:51.324 21:50:11 -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:51.324 21:50:11 -- common/autotest_common.sh@10 -- # set +x 00:27:51.324 [2024-12-06 21:50:11.636508] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:27:51.324 [2024-12-06 21:50:11.636729] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91783 ] 00:27:51.324 [2024-12-06 21:50:11.804371] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:27:51.583 [2024-12-06 21:50:11.959977] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:27:51.583 [2024-12-06 21:50:11.960044] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:51.583 [2024-12-06 21:50:11.960053] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:27:52.152 21:50:12 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:52.152 21:50:12 -- common/autotest_common.sh@862 -- # return 0 00:27:52.152 21:50:12 -- bdev/blockdev.sh@292 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:27:52.152 I/O targets: 00:27:52.152 Nvme0n1p1: 655104 blocks of 4096 bytes (2559 MiB) 00:27:52.152 Nvme0n1p2: 655103 blocks of 4096 bytes (2559 MiB) 00:27:52.152 00:27:52.152 00:27:52.152 CUnit - A unit testing framework for C - Version 2.1-3 00:27:52.152 http://cunit.sourceforge.net/ 00:27:52.152 00:27:52.152 00:27:52.152 Suite: bdevio tests on: Nvme0n1p2 00:27:52.152 Test: blockdev write read block ...passed 00:27:52.152 Test: blockdev write zeroes read block ...passed 00:27:52.152 Test: blockdev write zeroes read no split ...passed 00:27:52.152 Test: blockdev write zeroes read split ...passed 00:27:52.152 Test: blockdev write zeroes read split partial ...passed 00:27:52.152 Test: blockdev reset ...[2024-12-06 21:50:12.614654] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:06.0] resetting controller 00:27:52.152 [2024-12-06 21:50:12.618191] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
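bdevio is launched with -w so it starts its CUnit suites only when told to over RPC; tests.py perform_tests is what kicks them off, one suite per bdev, which is why the Nvme0n1p2 and Nvme0n1p1 suites run back to back here. The COMPARE FAILURE (02/85) completions in the comparev-and-writev cases below are the expected miscompares, not errors. Skeleton of the flow as driven by blockdev.sh, with the waitforlisten on the bdevio pid omitted:
/home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 \
    --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' &
bdevio_pid=$!
# ...wait for the RPC socket, then run every registered suite:
/home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests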
00:27:52.152 passed 00:27:52.152 Test: blockdev write read 8 blocks ...passed 00:27:52.152 Test: blockdev write read size > 128k ...passed 00:27:52.152 Test: blockdev write read invalid size ...passed 00:27:52.152 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:27:52.152 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:27:52.152 Test: blockdev write read max offset ...passed 00:27:52.152 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:27:52.152 Test: blockdev writev readv 8 blocks ...passed 00:27:52.152 Test: blockdev writev readv 30 x 1block ...passed 00:27:52.152 Test: blockdev writev readv block ...passed 00:27:52.152 Test: blockdev writev readv size > 128k ...passed 00:27:52.152 Test: blockdev writev readv size > 128k in two iovs ...passed 00:27:52.152 Test: blockdev comparev and writev ...[2024-12-06 21:50:12.628885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:655360 len:1 SGL DATA BLOCK ADDRESS 0x29460b000 len:0x1000 00:27:52.152 [2024-12-06 21:50:12.629019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:27:52.152 passed 00:27:52.152 Test: blockdev nvme passthru rw ...passed 00:27:52.152 Test: blockdev nvme passthru vendor specific ...passed 00:27:52.152 Test: blockdev nvme admin passthru ...passed 00:27:52.152 Test: blockdev copy ...passed 00:27:52.152 Suite: bdevio tests on: Nvme0n1p1 00:27:52.152 Test: blockdev write read block ...passed 00:27:52.153 Test: blockdev write zeroes read block ...passed 00:27:52.153 Test: blockdev write zeroes read no split ...passed 00:27:52.412 Test: blockdev write zeroes read split ...passed 00:27:52.412 Test: blockdev write zeroes read split partial ...passed 00:27:52.412 Test: blockdev reset ...[2024-12-06 21:50:12.684161] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:06.0] resetting controller 00:27:52.412 [2024-12-06 21:50:12.688010] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:27:52.412 passed 00:27:52.412 Test: blockdev write read 8 blocks ...passed 00:27:52.412 Test: blockdev write read size > 128k ...passed 00:27:52.412 Test: blockdev write read invalid size ...passed 00:27:52.412 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:27:52.412 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:27:52.412 Test: blockdev write read max offset ...passed 00:27:52.412 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:27:52.412 Test: blockdev writev readv 8 blocks ...passed 00:27:52.412 Test: blockdev writev readv 30 x 1block ...passed 00:27:52.412 Test: blockdev writev readv block ...passed 00:27:52.412 Test: blockdev writev readv size > 128k ...passed 00:27:52.412 Test: blockdev writev readv size > 128k in two iovs ...passed 00:27:52.412 Test: blockdev comparev and writev ...[2024-12-06 21:50:12.698136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:256 len:1 SGL DATA BLOCK ADDRESS 0x29460d000 len:0x1000 00:27:52.412 [2024-12-06 21:50:12.698220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:27:52.412 passed 00:27:52.412 Test: blockdev nvme passthru rw ...passed 00:27:52.412 Test: blockdev nvme passthru vendor specific ...passed 00:27:52.412 Test: blockdev nvme admin passthru ...passed 00:27:52.412 Test: blockdev copy ...passed 00:27:52.412 00:27:52.412 Run Summary: Type Total Ran Passed Failed Inactive 00:27:52.412 suites 2 2 n/a 0 0 00:27:52.412 tests 46 46 46 0 0 00:27:52.412 asserts 284 284 284 0 n/a 00:27:52.412 00:27:52.412 Elapsed time = 0.377 seconds 00:27:52.412 0 00:27:52.412 21:50:12 -- bdev/blockdev.sh@293 -- # killprocess 91783 00:27:52.412 21:50:12 -- common/autotest_common.sh@936 -- # '[' -z 91783 ']' 00:27:52.412 21:50:12 -- common/autotest_common.sh@940 -- # kill -0 91783 00:27:52.412 21:50:12 -- common/autotest_common.sh@941 -- # uname 00:27:52.412 21:50:12 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:27:52.412 21:50:12 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 91783 00:27:52.412 21:50:12 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:27:52.412 killing process with pid 91783 00:27:52.412 21:50:12 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:27:52.412 21:50:12 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 91783' 00:27:52.412 21:50:12 -- common/autotest_common.sh@955 -- # kill 91783 00:27:52.412 21:50:12 -- common/autotest_common.sh@960 -- # wait 91783 00:27:53.350 21:50:13 -- bdev/blockdev.sh@294 -- # trap - SIGINT SIGTERM EXIT 00:27:53.350 00:27:53.350 real 0m2.101s 00:27:53.350 user 0m4.894s 00:27:53.350 sys 0m0.333s 00:27:53.350 21:50:13 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:27:53.350 ************************************ 00:27:53.350 21:50:13 -- common/autotest_common.sh@10 -- # set +x 00:27:53.350 END TEST bdev_bounds 00:27:53.350 ************************************ 00:27:53.350 21:50:13 -- bdev/blockdev.sh@760 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1p1 Nvme0n1p2' '' 00:27:53.350 21:50:13 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:27:53.350 21:50:13 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:27:53.350 21:50:13 -- common/autotest_common.sh@10 -- # set +x 00:27:53.350 ************************************ 00:27:53.350 START TEST bdev_nbd 
00:27:53.350 ************************************ 00:27:53.350 21:50:13 -- common/autotest_common.sh@1114 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1p1 Nvme0n1p2' '' 00:27:53.350 21:50:13 -- bdev/blockdev.sh@298 -- # uname -s 00:27:53.350 21:50:13 -- bdev/blockdev.sh@298 -- # [[ Linux == Linux ]] 00:27:53.350 21:50:13 -- bdev/blockdev.sh@300 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:27:53.350 21:50:13 -- bdev/blockdev.sh@301 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:27:53.350 21:50:13 -- bdev/blockdev.sh@302 -- # bdev_all=('Nvme0n1p1' 'Nvme0n1p2') 00:27:53.350 21:50:13 -- bdev/blockdev.sh@302 -- # local bdev_all 00:27:53.350 21:50:13 -- bdev/blockdev.sh@303 -- # local bdev_num=2 00:27:53.350 21:50:13 -- bdev/blockdev.sh@307 -- # [[ -e /sys/module/nbd ]] 00:27:53.350 21:50:13 -- bdev/blockdev.sh@309 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:27:53.350 21:50:13 -- bdev/blockdev.sh@309 -- # local nbd_all 00:27:53.350 21:50:13 -- bdev/blockdev.sh@310 -- # bdev_num=2 00:27:53.350 21:50:13 -- bdev/blockdev.sh@312 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:27:53.350 21:50:13 -- bdev/blockdev.sh@312 -- # local nbd_list 00:27:53.350 21:50:13 -- bdev/blockdev.sh@313 -- # bdev_list=('Nvme0n1p1' 'Nvme0n1p2') 00:27:53.350 21:50:13 -- bdev/blockdev.sh@313 -- # local bdev_list 00:27:53.350 21:50:13 -- bdev/blockdev.sh@316 -- # nbd_pid=91832 00:27:53.350 21:50:13 -- bdev/blockdev.sh@317 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:27:53.350 21:50:13 -- bdev/blockdev.sh@315 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:27:53.350 21:50:13 -- bdev/blockdev.sh@318 -- # waitforlisten 91832 /var/tmp/spdk-nbd.sock 00:27:53.350 21:50:13 -- common/autotest_common.sh@829 -- # '[' -z 91832 ']' 00:27:53.350 21:50:13 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:27:53.350 21:50:13 -- common/autotest_common.sh@834 -- # local max_retries=100 00:27:53.350 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:27:53.350 21:50:13 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:27:53.350 21:50:13 -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:53.350 21:50:13 -- common/autotest_common.sh@10 -- # set +x 00:27:53.350 [2024-12-06 21:50:13.798727] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
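The waitfornbd helper (and its inverse, waitfornbd_exit), traced all through these nbd tests, is a bounded poll on /proc/partitions for the nbd device to appear or disappear. A minimal sketch of the traced loop; the retry delay is an assumption (only the counter and the grep show in xtrace), and the follow-up direct-read dd check is elided:
waitfornbd() {
    local nbd_name=$1 i
    for (( i = 1; i <= 20; i++ )); do
        # ready once the kernel lists the device in /proc/partitions
        grep -q -w "$nbd_name" /proc/partitions && break
        sleep 0.1   # assumption: retry delay not visible in the trace
    done
    return 0
}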
00:27:53.350 [2024-12-06 21:50:13.798884] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:53.609 [2024-12-06 21:50:13.970142] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:53.868 [2024-12-06 21:50:14.122248] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:54.437 21:50:14 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:54.437 21:50:14 -- common/autotest_common.sh@862 -- # return 0 00:27:54.437 21:50:14 -- bdev/blockdev.sh@320 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Nvme0n1p1 Nvme0n1p2' 00:27:54.437 21:50:14 -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:27:54.437 21:50:14 -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1p1' 'Nvme0n1p2') 00:27:54.437 21:50:14 -- bdev/nbd_common.sh@114 -- # local bdev_list 00:27:54.437 21:50:14 -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Nvme0n1p1 Nvme0n1p2' 00:27:54.437 21:50:14 -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:27:54.437 21:50:14 -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1p1' 'Nvme0n1p2') 00:27:54.437 21:50:14 -- bdev/nbd_common.sh@23 -- # local bdev_list 00:27:54.437 21:50:14 -- bdev/nbd_common.sh@24 -- # local i 00:27:54.437 21:50:14 -- bdev/nbd_common.sh@25 -- # local nbd_device 00:27:54.437 21:50:14 -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:27:54.437 21:50:14 -- bdev/nbd_common.sh@27 -- # (( i < 2 )) 00:27:54.437 21:50:14 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1p1 00:27:54.437 21:50:14 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:27:54.437 21:50:14 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:27:54.437 21:50:14 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:27:54.437 21:50:14 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:27:54.437 21:50:14 -- common/autotest_common.sh@867 -- # local i 00:27:54.437 21:50:14 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:27:54.437 21:50:14 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:27:54.437 21:50:14 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:27:54.437 21:50:14 -- common/autotest_common.sh@871 -- # break 00:27:54.437 21:50:14 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:27:54.437 21:50:14 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:27:54.437 21:50:14 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:27:54.437 1+0 records in 00:27:54.437 1+0 records out 00:27:54.437 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000677257 s, 6.0 MB/s 00:27:54.437 21:50:14 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:27:54.437 21:50:14 -- common/autotest_common.sh@884 -- # size=4096 00:27:54.437 21:50:14 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:27:54.437 21:50:14 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:27:54.437 21:50:14 -- common/autotest_common.sh@887 -- # return 0 00:27:54.437 21:50:14 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:27:54.437 21:50:14 -- bdev/nbd_common.sh@27 -- # (( i < 2 )) 00:27:54.437 21:50:14 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1p2 00:27:54.697 21:50:15 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:27:54.697 21:50:15 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:27:54.697 21:50:15 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:27:54.697 21:50:15 -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:27:54.697 21:50:15 -- common/autotest_common.sh@867 -- # local i 00:27:54.697 21:50:15 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:27:54.697 21:50:15 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:27:54.697 21:50:15 -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:27:54.697 21:50:15 -- common/autotest_common.sh@871 -- # break 00:27:54.697 21:50:15 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:27:54.697 21:50:15 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:27:54.697 21:50:15 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:27:54.697 1+0 records in 00:27:54.697 1+0 records out 00:27:54.697 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000486254 s, 8.4 MB/s 00:27:54.698 21:50:15 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:27:54.698 21:50:15 -- common/autotest_common.sh@884 -- # size=4096 00:27:54.698 21:50:15 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:27:54.698 21:50:15 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:27:54.698 21:50:15 -- common/autotest_common.sh@887 -- # return 0 00:27:54.698 21:50:15 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:27:54.698 21:50:15 -- bdev/nbd_common.sh@27 -- # (( i < 2 )) 00:27:54.698 21:50:15 -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:27:54.956 21:50:15 -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:27:54.956 { 00:27:54.956 "nbd_device": "/dev/nbd0", 00:27:54.956 "bdev_name": "Nvme0n1p1" 00:27:54.956 }, 00:27:54.956 { 00:27:54.956 "nbd_device": "/dev/nbd1", 00:27:54.956 "bdev_name": "Nvme0n1p2" 00:27:54.956 } 00:27:54.956 ]' 00:27:54.956 21:50:15 -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:27:54.956 21:50:15 -- bdev/nbd_common.sh@119 -- # echo '[ 00:27:54.957 { 00:27:54.957 "nbd_device": "/dev/nbd0", 00:27:54.957 "bdev_name": "Nvme0n1p1" 00:27:54.957 }, 00:27:54.957 { 00:27:54.957 "nbd_device": "/dev/nbd1", 00:27:54.957 "bdev_name": "Nvme0n1p2" 00:27:54.957 } 00:27:54.957 ]' 00:27:54.957 21:50:15 -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:27:54.957 21:50:15 -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:27:54.957 21:50:15 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:27:54.957 21:50:15 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:27:54.957 21:50:15 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:27:54.957 21:50:15 -- bdev/nbd_common.sh@51 -- # local i 00:27:54.957 21:50:15 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:27:54.957 21:50:15 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:27:54.957 21:50:15 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:27:55.215 21:50:15 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:27:55.215 21:50:15 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:27:55.215 21:50:15 -- 
bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:27:55.215 21:50:15 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:27:55.215 21:50:15 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:27:55.215 21:50:15 -- bdev/nbd_common.sh@41 -- # break 00:27:55.215 21:50:15 -- bdev/nbd_common.sh@45 -- # return 0 00:27:55.215 21:50:15 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:27:55.215 21:50:15 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:27:55.215 21:50:15 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:27:55.215 21:50:15 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:27:55.215 21:50:15 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:27:55.215 21:50:15 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:27:55.215 21:50:15 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:27:55.215 21:50:15 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:27:55.215 21:50:15 -- bdev/nbd_common.sh@41 -- # break 00:27:55.215 21:50:15 -- bdev/nbd_common.sh@45 -- # return 0 00:27:55.215 21:50:15 -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:27:55.215 21:50:15 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:27:55.215 21:50:15 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:27:55.474 21:50:15 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:27:55.474 21:50:15 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:27:55.474 21:50:15 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:27:55.474 21:50:15 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:27:55.474 21:50:15 -- bdev/nbd_common.sh@65 -- # echo '' 00:27:55.474 21:50:15 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:27:55.474 21:50:15 -- bdev/nbd_common.sh@65 -- # true 00:27:55.474 21:50:15 -- bdev/nbd_common.sh@65 -- # count=0 00:27:55.474 21:50:15 -- bdev/nbd_common.sh@66 -- # echo 0 00:27:55.474 21:50:15 -- bdev/nbd_common.sh@122 -- # count=0 00:27:55.474 21:50:15 -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:27:55.474 21:50:15 -- bdev/nbd_common.sh@127 -- # return 0 00:27:55.474 21:50:15 -- bdev/blockdev.sh@321 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Nvme0n1p1 Nvme0n1p2' '/dev/nbd0 /dev/nbd1' 00:27:55.474 21:50:15 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:27:55.474 21:50:15 -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1p1' 'Nvme0n1p2') 00:27:55.474 21:50:15 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:27:55.474 21:50:15 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:27:55.474 21:50:15 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:27:55.474 21:50:15 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Nvme0n1p1 Nvme0n1p2' '/dev/nbd0 /dev/nbd1' 00:27:55.474 21:50:15 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:27:55.474 21:50:15 -- bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1p1' 'Nvme0n1p2') 00:27:55.474 21:50:15 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:27:55.474 21:50:15 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:27:55.474 21:50:15 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:27:55.474 21:50:15 -- bdev/nbd_common.sh@12 -- # local i 00:27:55.474 21:50:15 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:27:55.474 21:50:15 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:27:55.474 21:50:15 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1p1 /dev/nbd0 00:27:55.732 /dev/nbd0 00:27:55.732 21:50:16 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:27:55.732 21:50:16 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:27:55.732 21:50:16 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:27:55.732 21:50:16 -- common/autotest_common.sh@867 -- # local i 00:27:55.732 21:50:16 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:27:55.732 21:50:16 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:27:55.732 21:50:16 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:27:55.732 21:50:16 -- common/autotest_common.sh@871 -- # break 00:27:55.732 21:50:16 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:27:55.732 21:50:16 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:27:55.732 21:50:16 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:27:55.732 1+0 records in 00:27:55.732 1+0 records out 00:27:55.732 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000464654 s, 8.8 MB/s 00:27:55.732 21:50:16 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:27:55.732 21:50:16 -- common/autotest_common.sh@884 -- # size=4096 00:27:55.732 21:50:16 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:27:55.732 21:50:16 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:27:55.732 21:50:16 -- common/autotest_common.sh@887 -- # return 0 00:27:55.732 21:50:16 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:27:55.732 21:50:16 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:27:55.732 21:50:16 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1p2 /dev/nbd1 00:27:55.990 /dev/nbd1 00:27:55.990 21:50:16 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:27:55.990 21:50:16 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:27:55.990 21:50:16 -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:27:55.990 21:50:16 -- common/autotest_common.sh@867 -- # local i 00:27:55.990 21:50:16 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:27:55.990 21:50:16 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:27:55.990 21:50:16 -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:27:55.990 21:50:16 -- common/autotest_common.sh@871 -- # break 00:27:55.990 21:50:16 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:27:55.990 21:50:16 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:27:55.990 21:50:16 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:27:55.990 1+0 records in 00:27:55.990 1+0 records out 00:27:55.990 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000933254 s, 4.4 MB/s 00:27:55.990 21:50:16 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:27:55.990 21:50:16 -- common/autotest_common.sh@884 -- # size=4096 00:27:55.990 21:50:16 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:27:55.990 21:50:16 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:27:55.990 21:50:16 -- common/autotest_common.sh@887 -- # return 0 00:27:55.990 21:50:16 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:27:55.990 21:50:16 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:27:55.990 21:50:16 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 
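The start/verify/stop cycle traced above, reduced to a standalone sketch; it assumes bdev_svc is already listening on the socket and that the Nvme0n1p1 bdev exists (both are set up earlier in this test).

sock=/var/tmp/spdk-nbd.sock
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
"$rpc" -s "$sock" nbd_start_disk Nvme0n1p1 /dev/nbd0
# waitfornbd: poll /proc/partitions until the kernel registers the device
# (the real helper gives up after 20 tries).
for i in $(seq 1 20); do
    grep -q -w nbd0 /proc/partitions && break
    sleep 0.1
done
# Prove the device is readable with a single direct 4 KiB read.
dd if=/dev/nbd0 of=/tmp/nbdtest bs=4096 count=1 iflag=direct
"$rpc" -s "$sock" nbd_stop_disk /dev/nbd0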
00:27:55.990 21:50:16 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:27:55.990 21:50:16 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:27:56.249 21:50:16 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:27:56.249 { 00:27:56.249 "nbd_device": "/dev/nbd0", 00:27:56.249 "bdev_name": "Nvme0n1p1" 00:27:56.249 }, 00:27:56.249 { 00:27:56.249 "nbd_device": "/dev/nbd1", 00:27:56.249 "bdev_name": "Nvme0n1p2" 00:27:56.249 } 00:27:56.249 ]' 00:27:56.249 21:50:16 -- bdev/nbd_common.sh@64 -- # echo '[ 00:27:56.249 { 00:27:56.249 "nbd_device": "/dev/nbd0", 00:27:56.249 "bdev_name": "Nvme0n1p1" 00:27:56.249 }, 00:27:56.249 { 00:27:56.249 "nbd_device": "/dev/nbd1", 00:27:56.249 "bdev_name": "Nvme0n1p2" 00:27:56.249 } 00:27:56.249 ]' 00:27:56.249 21:50:16 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:27:56.249 21:50:16 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:27:56.249 /dev/nbd1' 00:27:56.249 21:50:16 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:27:56.249 /dev/nbd1' 00:27:56.249 21:50:16 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:27:56.249 21:50:16 -- bdev/nbd_common.sh@65 -- # count=2 00:27:56.249 21:50:16 -- bdev/nbd_common.sh@66 -- # echo 2 00:27:56.249 21:50:16 -- bdev/nbd_common.sh@95 -- # count=2 00:27:56.249 21:50:16 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:27:56.249 21:50:16 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:27:56.249 21:50:16 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:27:56.249 21:50:16 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:27:56.249 21:50:16 -- bdev/nbd_common.sh@71 -- # local operation=write 00:27:56.249 21:50:16 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:27:56.249 21:50:16 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:27:56.249 21:50:16 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:27:56.249 256+0 records in 00:27:56.249 256+0 records out 00:27:56.249 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00390413 s, 269 MB/s 00:27:56.249 21:50:16 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:27:56.249 21:50:16 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:27:56.508 256+0 records in 00:27:56.508 256+0 records out 00:27:56.508 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.085437 s, 12.3 MB/s 00:27:56.508 21:50:16 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:27:56.508 21:50:16 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:27:56.508 256+0 records in 00:27:56.508 256+0 records out 00:27:56.508 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.113016 s, 9.3 MB/s 00:27:56.508 21:50:16 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:27:56.508 21:50:16 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:27:56.508 21:50:16 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:27:56.508 21:50:16 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:27:56.508 21:50:16 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:27:56.508 21:50:16 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:27:56.508 21:50:16 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 
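The data-verify pattern running above, in essence: one 1 MiB random file is pushed through every NBD device with O_DIRECT, then compared back byte-for-byte. A sketch under the same device list and block sizes seen in the trace:

tmp=/tmp/nbdrandtest
nbd_list='/dev/nbd0 /dev/nbd1'
dd if=/dev/urandom of="$tmp" bs=4096 count=256
for dev in $nbd_list; do
    dd if="$tmp" of="$dev" bs=4096 count=256 oflag=direct
done
for dev in $nbd_list; do
    cmp -b -n 1M "$tmp" "$dev"    # any byte mismatch fails the test
done
rm "$tmp"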
00:27:56.508 21:50:16 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:27:56.508 21:50:16 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:27:56.508 21:50:16 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:27:56.508 21:50:16 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:27:56.508 21:50:16 -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:27:56.508 21:50:16 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:27:56.508 21:50:16 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:27:56.508 21:50:16 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:27:56.508 21:50:16 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:27:56.508 21:50:16 -- bdev/nbd_common.sh@51 -- # local i 00:27:56.508 21:50:16 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:27:56.508 21:50:16 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:27:56.767 21:50:17 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:27:56.767 21:50:17 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:27:56.767 21:50:17 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:27:56.767 21:50:17 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:27:56.767 21:50:17 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:27:56.767 21:50:17 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:27:56.767 21:50:17 -- bdev/nbd_common.sh@41 -- # break 00:27:56.767 21:50:17 -- bdev/nbd_common.sh@45 -- # return 0 00:27:56.767 21:50:17 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:27:56.767 21:50:17 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:27:57.027 21:50:17 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:27:57.027 21:50:17 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:27:57.027 21:50:17 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:27:57.027 21:50:17 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:27:57.027 21:50:17 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:27:57.027 21:50:17 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:27:57.027 21:50:17 -- bdev/nbd_common.sh@41 -- # break 00:27:57.027 21:50:17 -- bdev/nbd_common.sh@45 -- # return 0 00:27:57.027 21:50:17 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:27:57.027 21:50:17 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:27:57.027 21:50:17 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:27:57.286 21:50:17 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:27:57.286 21:50:17 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:27:57.286 21:50:17 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:27:57.286 21:50:17 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:27:57.286 21:50:17 -- bdev/nbd_common.sh@65 -- # echo '' 00:27:57.286 21:50:17 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:27:57.286 21:50:17 -- bdev/nbd_common.sh@65 -- # true 00:27:57.286 21:50:17 -- bdev/nbd_common.sh@65 -- # count=0 00:27:57.286 21:50:17 -- bdev/nbd_common.sh@66 -- # echo 0 00:27:57.286 21:50:17 -- bdev/nbd_common.sh@104 -- # count=0 00:27:57.286 21:50:17 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:27:57.286 21:50:17 -- 
bdev/nbd_common.sh@109 -- # return 0 00:27:57.286 21:50:17 -- bdev/blockdev.sh@322 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:27:57.286 21:50:17 -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:27:57.286 21:50:17 -- bdev/nbd_common.sh@132 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:27:57.286 21:50:17 -- bdev/nbd_common.sh@132 -- # local nbd_list 00:27:57.286 21:50:17 -- bdev/nbd_common.sh@133 -- # local mkfs_ret 00:27:57.286 21:50:17 -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:27:57.545 malloc_lvol_verify 00:27:57.545 21:50:17 -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:27:57.804 21dcc35d-5705-4a22-ad5a-8eef272a2e9c 00:27:57.804 21:50:18 -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:27:57.804 de7d9661-d351-4c3f-8b5a-9639055ddbb1 00:27:57.804 21:50:18 -- bdev/nbd_common.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:27:58.064 /dev/nbd0 00:27:58.064 21:50:18 -- bdev/nbd_common.sh@140 -- # mkfs.ext4 /dev/nbd0 00:27:58.064 mke2fs 1.47.0 (5-Feb-2023) 00:27:58.064 00:27:58.064 Filesystem too small for a journal 00:27:58.064 Discarding device blocks: 0/1024 done 00:27:58.064 Creating filesystem with 1024 4k blocks and 1024 inodes 00:27:58.064 00:27:58.064 Allocating group tables: 0/1 done 00:27:58.064 Writing inode tables: 0/1 done 00:27:58.064 Writing superblocks and filesystem accounting information: 0/1 done 00:27:58.064 00:27:58.064 21:50:18 -- bdev/nbd_common.sh@141 -- # mkfs_ret=0 00:27:58.064 21:50:18 -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:27:58.064 21:50:18 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:27:58.064 21:50:18 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:27:58.064 21:50:18 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:27:58.064 21:50:18 -- bdev/nbd_common.sh@51 -- # local i 00:27:58.064 21:50:18 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:27:58.064 21:50:18 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:27:58.324 21:50:18 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:27:58.324 21:50:18 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:27:58.324 21:50:18 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:27:58.324 21:50:18 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:27:58.324 21:50:18 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:27:58.324 21:50:18 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:27:58.324 21:50:18 -- bdev/nbd_common.sh@41 -- # break 00:27:58.324 21:50:18 -- bdev/nbd_common.sh@45 -- # return 0 00:27:58.324 21:50:18 -- bdev/nbd_common.sh@143 -- # '[' 0 -ne 0 ']' 00:27:58.324 21:50:18 -- bdev/nbd_common.sh@147 -- # return 0 00:27:58.324 21:50:18 -- bdev/blockdev.sh@324 -- # killprocess 91832 00:27:58.324 21:50:18 -- common/autotest_common.sh@936 -- # '[' -z 91832 ']' 00:27:58.324 21:50:18 -- common/autotest_common.sh@940 -- # kill -0 91832 00:27:58.324 21:50:18 -- common/autotest_common.sh@941 -- # uname 00:27:58.324 21:50:18 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:27:58.324 21:50:18 -- 
common/autotest_common.sh@942 -- # ps --no-headers -o comm= 91832 00:27:58.324 21:50:18 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:27:58.324 21:50:18 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:27:58.324 killing process with pid 91832 00:27:58.324 21:50:18 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 91832' 00:27:58.324 21:50:18 -- common/autotest_common.sh@955 -- # kill 91832 00:27:58.324 21:50:18 -- common/autotest_common.sh@960 -- # wait 91832 00:27:59.262 21:50:19 -- bdev/blockdev.sh@325 -- # trap - SIGINT SIGTERM EXIT 00:27:59.262 00:27:59.262 real 0m5.881s 00:27:59.262 user 0m8.454s 00:27:59.262 sys 0m1.373s 00:27:59.262 21:50:19 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:27:59.262 21:50:19 -- common/autotest_common.sh@10 -- # set +x 00:27:59.262 ************************************ 00:27:59.262 END TEST bdev_nbd 00:27:59.262 ************************************ 00:27:59.262 21:50:19 -- bdev/blockdev.sh@761 -- # [[ y == y ]] 00:27:59.262 21:50:19 -- bdev/blockdev.sh@762 -- # '[' gpt = nvme ']' 00:27:59.262 21:50:19 -- bdev/blockdev.sh@762 -- # '[' gpt = gpt ']' 00:27:59.262 skipping fio tests on NVMe due to multi-ns failures. 00:27:59.262 21:50:19 -- bdev/blockdev.sh@764 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 00:27:59.262 21:50:19 -- bdev/blockdev.sh@773 -- # trap cleanup SIGINT SIGTERM EXIT 00:27:59.262 21:50:19 -- bdev/blockdev.sh@775 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:27:59.262 21:50:19 -- common/autotest_common.sh@1087 -- # '[' 16 -le 1 ']' 00:27:59.262 21:50:19 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:27:59.262 21:50:19 -- common/autotest_common.sh@10 -- # set +x 00:27:59.262 ************************************ 00:27:59.262 START TEST bdev_verify 00:27:59.262 ************************************ 00:27:59.262 21:50:19 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:27:59.262 [2024-12-06 21:50:19.728482] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:27:59.262 [2024-12-06 21:50:19.728664] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92064 ] 00:27:59.522 [2024-12-06 21:50:19.899266] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:27:59.781 [2024-12-06 21:50:20.063575] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:59.781 [2024-12-06 21:50:20.063596] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:28:00.040 Running I/O for 5 seconds... 
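The bdev_verify job now running reduces to one bdevperf invocation, shown below with the flags visible in the trace: -q queue depth per job, -o I/O size in bytes, -w workload type, -t duration in seconds, -m reactor core mask (0x3 = cores 0 and 1); -C is passed through by the harness unchanged.

/home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
    --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
    -q 128 -o 4096 -w verify -t 5 -C -m 0x3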
00:28:05.314 00:28:05.314 Latency(us) 00:28:05.314 [2024-12-06T21:50:25.811Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:05.314 [2024-12-06T21:50:25.811Z] Job: Nvme0n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:28:05.314 Verification LBA range: start 0x0 length 0x4ff80 00:28:05.314 Nvme0n1p1 : 5.01 7537.58 29.44 0.00 0.00 16936.85 1772.45 21805.61 00:28:05.314 [2024-12-06T21:50:25.811Z] Job: Nvme0n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:28:05.314 Verification LBA range: start 0x4ff80 length 0x4ff80 00:28:05.314 Nvme0n1p1 : 5.01 7551.53 29.50 0.00 0.00 16907.54 1228.80 22163.08 00:28:05.314 [2024-12-06T21:50:25.811Z] Job: Nvme0n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:28:05.314 Verification LBA range: start 0x0 length 0x4ff7f 00:28:05.314 Nvme0n1p2 : 5.02 7541.36 29.46 0.00 0.00 16915.01 390.98 22758.87 00:28:05.314 [2024-12-06T21:50:25.811Z] Job: Nvme0n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:28:05.314 Verification LBA range: start 0x4ff7f length 0x4ff7f 00:28:05.314 Nvme0n1p2 : 5.02 7549.56 29.49 0.00 0.00 16893.95 1750.11 21567.30 00:28:05.314 [2024-12-06T21:50:25.811Z] =================================================================================================================== 00:28:05.314 [2024-12-06T21:50:25.811Z] Total : 30180.03 117.89 0.00 0.00 16913.32 390.98 22758.87 00:28:09.499 00:28:09.499 real 0m9.982s 00:28:09.499 user 0m18.846s 00:28:09.499 sys 0m0.265s 00:28:09.499 21:50:29 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:28:09.499 21:50:29 -- common/autotest_common.sh@10 -- # set +x 00:28:09.499 ************************************ 00:28:09.499 END TEST bdev_verify 00:28:09.499 ************************************ 00:28:09.499 21:50:29 -- bdev/blockdev.sh@776 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:28:09.499 21:50:29 -- common/autotest_common.sh@1087 -- # '[' 16 -le 1 ']' 00:28:09.499 21:50:29 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:28:09.499 21:50:29 -- common/autotest_common.sh@10 -- # set +x 00:28:09.499 ************************************ 00:28:09.499 START TEST bdev_verify_big_io 00:28:09.499 ************************************ 00:28:09.499 21:50:29 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:28:09.499 [2024-12-06 21:50:29.754016] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:28:09.499 [2024-12-06 21:50:29.754144] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92167 ] 00:28:09.499 [2024-12-06 21:50:29.903897] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:28:09.758 [2024-12-06 21:50:30.057992] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:09.759 [2024-12-06 21:50:30.058011] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:28:10.017 Running I/O for 5 seconds... 
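The big-I/O variant now running differs only in block size (-o 65536). As a sanity check on the totals reported below, MiB/s is just IOPS times I/O size: 3718.83 x 65536 B is roughly 232.4 MiB/s, matching the summary line.

/home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
    --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
    -q 128 -o 65536 -w verify -t 5 -C -m 0x3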
00:28:15.300 00:28:15.300 Latency(us) 00:28:15.300 [2024-12-06T21:50:35.797Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:15.300 [2024-12-06T21:50:35.797Z] Job: Nvme0n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:28:15.300 Verification LBA range: start 0x0 length 0x4ff8 00:28:15.300 Nvme0n1p1 : 5.10 901.47 56.34 0.00 0.00 139993.29 18826.71 203042.44 00:28:15.300 [2024-12-06T21:50:35.797Z] Job: Nvme0n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:28:15.300 Verification LBA range: start 0x4ff8 length 0x4ff8 00:28:15.300 Nvme0n1p1 : 5.10 945.97 59.12 0.00 0.00 133808.91 2174.60 195416.44 00:28:15.300 [2024-12-06T21:50:35.797Z] Job: Nvme0n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:28:15.300 Verification LBA range: start 0x0 length 0x4ff7 00:28:15.300 Nvme0n1p2 : 5.11 917.83 57.36 0.00 0.00 136278.75 651.64 184930.68 00:28:15.300 [2024-12-06T21:50:35.797Z] Job: Nvme0n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:28:15.300 Verification LBA range: start 0x4ff7 length 0x4ff7 00:28:15.300 Nvme0n1p2 : 5.10 953.56 59.60 0.00 0.00 131298.20 759.62 149660.39 00:28:15.300 [2024-12-06T21:50:35.797Z] =================================================================================================================== 00:28:15.300 [2024-12-06T21:50:35.797Z] Total : 3718.83 232.43 0.00 0.00 135274.60 651.64 203042.44 00:28:16.673 00:28:16.673 real 0m7.243s 00:28:16.673 user 0m13.459s 00:28:16.673 sys 0m0.221s 00:28:16.673 21:50:36 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:28:16.673 ************************************ 00:28:16.673 END TEST bdev_verify_big_io 00:28:16.673 ************************************ 00:28:16.673 21:50:36 -- common/autotest_common.sh@10 -- # set +x 00:28:16.673 21:50:36 -- bdev/blockdev.sh@777 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:28:16.673 21:50:36 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:28:16.673 21:50:36 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:28:16.673 21:50:36 -- common/autotest_common.sh@10 -- # set +x 00:28:16.673 ************************************ 00:28:16.673 START TEST bdev_write_zeroes 00:28:16.673 ************************************ 00:28:16.673 21:50:36 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:28:16.673 [2024-12-06 21:50:37.038486] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:28:16.673 [2024-12-06 21:50:37.038641] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92265 ] 00:28:16.932 [2024-12-06 21:50:37.192688] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:16.932 [2024-12-06 21:50:37.340335] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:17.191 Running I/O for 1 seconds... 
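The write_zeroes pass runs the same binary single-core for one second (the EAL line above shows -c 0x1); per the totals that follow, the two jobs together sustain roughly 47.3k zero-writes per second at 4 KiB.

/home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
    --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
    -q 128 -o 4096 -w write_zeroes -t 1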
00:28:18.563 00:28:18.563 Latency(us) 00:28:18.563 [2024-12-06T21:50:39.060Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:18.563 [2024-12-06T21:50:39.060Z] Job: Nvme0n1p1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:28:18.563 Nvme0n1p1 : 1.01 23660.41 92.42 0.00 0.00 5397.84 2800.17 14894.55 00:28:18.563 [2024-12-06T21:50:39.060Z] Job: Nvme0n1p2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:28:18.563 Nvme0n1p2 : 1.01 23658.38 92.42 0.00 0.00 5390.48 2681.02 14775.39 00:28:18.563 [2024-12-06T21:50:39.060Z] =================================================================================================================== 00:28:18.563 [2024-12-06T21:50:39.060Z] Total : 47318.79 184.84 0.00 0.00 5394.16 2681.02 14894.55 00:28:19.129 00:28:19.129 real 0m2.595s 00:28:19.129 user 0m2.297s 00:28:19.129 sys 0m0.198s 00:28:19.129 21:50:39 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:28:19.129 21:50:39 -- common/autotest_common.sh@10 -- # set +x 00:28:19.129 ************************************ 00:28:19.129 END TEST bdev_write_zeroes 00:28:19.129 ************************************ 00:28:19.387 21:50:39 -- bdev/blockdev.sh@780 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:28:19.387 21:50:39 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:28:19.387 21:50:39 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:28:19.387 21:50:39 -- common/autotest_common.sh@10 -- # set +x 00:28:19.387 ************************************ 00:28:19.387 START TEST bdev_json_nonenclosed 00:28:19.387 ************************************ 00:28:19.387 21:50:39 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:28:19.387 [2024-12-06 21:50:39.701765] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:28:19.387 [2024-12-06 21:50:39.701931] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92307 ] 00:28:19.387 [2024-12-06 21:50:39.868184] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:19.646 [2024-12-06 21:50:40.024382] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:19.646 [2024-12-06 21:50:40.024661] json_config.c: 595:spdk_subsystem_init_from_json_config: *ERROR*: Invalid JSON configuration: not enclosed in {}. 
00:28:19.646 [2024-12-06 21:50:40.024691] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:28:19.903 00:28:19.904 real 0m0.719s 00:28:19.904 user 0m0.507s 00:28:19.904 sys 0m0.111s 00:28:19.904 21:50:40 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:28:19.904 21:50:40 -- common/autotest_common.sh@10 -- # set +x 00:28:19.904 ************************************ 00:28:19.904 END TEST bdev_json_nonenclosed 00:28:19.904 ************************************ 00:28:20.162 21:50:40 -- bdev/blockdev.sh@783 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:28:20.162 21:50:40 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:28:20.162 21:50:40 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:28:20.162 21:50:40 -- common/autotest_common.sh@10 -- # set +x 00:28:20.162 ************************************ 00:28:20.162 START TEST bdev_json_nonarray 00:28:20.162 ************************************ 00:28:20.162 21:50:40 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:28:20.162 [2024-12-06 21:50:40.470971] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:28:20.162 [2024-12-06 21:50:40.471149] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92334 ] 00:28:20.162 [2024-12-06 21:50:40.641460] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:20.420 [2024-12-06 21:50:40.790805] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:20.420 [2024-12-06 21:50:40.790985] json_config.c: 601:spdk_subsystem_init_from_json_config: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
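Both failures above are deliberate: each test feeds the app a config crafted to trip one specific check in json_config.c (the fixture files are nonenclosed.json and nonarray.json under test/bdev). Plausible minimal reproductions, inferred from the two error messages rather than from the fixture contents:

# Not enclosed in {}: a bare fragment instead of a JSON object.
printf '"subsystems": []\n' > nonenclosed.json
# Enclosed, but 'subsystems' is not an array.
printf '{ "subsystems": {} }\n' > nonarray.json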
00:28:20.420 [2024-12-06 21:50:40.791008] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:28:20.681 00:28:20.681 real 0m0.729s 00:28:20.681 user 0m0.512s 00:28:20.681 sys 0m0.115s 00:28:20.681 21:50:41 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:28:20.681 21:50:41 -- common/autotest_common.sh@10 -- # set +x 00:28:20.681 ************************************ 00:28:20.681 END TEST bdev_json_nonarray 00:28:20.681 ************************************ 00:28:20.977 21:50:41 -- bdev/blockdev.sh@785 -- # [[ gpt == bdev ]] 00:28:20.977 21:50:41 -- bdev/blockdev.sh@792 -- # [[ gpt == gpt ]] 00:28:20.977 21:50:41 -- bdev/blockdev.sh@793 -- # run_test bdev_gpt_uuid bdev_gpt_uuid 00:28:20.977 21:50:41 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:28:20.977 21:50:41 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:28:20.977 21:50:41 -- common/autotest_common.sh@10 -- # set +x 00:28:20.977 ************************************ 00:28:20.977 START TEST bdev_gpt_uuid 00:28:20.977 ************************************ 00:28:20.977 21:50:41 -- common/autotest_common.sh@1114 -- # bdev_gpt_uuid 00:28:20.977 21:50:41 -- bdev/blockdev.sh@612 -- # local bdev 00:28:20.978 21:50:41 -- bdev/blockdev.sh@614 -- # start_spdk_tgt 00:28:20.978 21:50:41 -- bdev/blockdev.sh@45 -- # spdk_tgt_pid=92365 00:28:20.978 21:50:41 -- bdev/blockdev.sh@46 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:28:20.978 21:50:41 -- bdev/blockdev.sh@47 -- # waitforlisten 92365 00:28:20.978 21:50:41 -- common/autotest_common.sh@829 -- # '[' -z 92365 ']' 00:28:20.978 21:50:41 -- bdev/blockdev.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:28:20.978 21:50:41 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:20.978 21:50:41 -- common/autotest_common.sh@834 -- # local max_retries=100 00:28:20.978 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:20.978 21:50:41 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:20.978 21:50:41 -- common/autotest_common.sh@838 -- # xtrace_disable 00:28:20.978 21:50:41 -- common/autotest_common.sh@10 -- # set +x 00:28:20.978 [2024-12-06 21:50:41.263321] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:28:20.978 [2024-12-06 21:50:41.263523] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92365 ] 00:28:20.978 [2024-12-06 21:50:41.438004] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:21.265 [2024-12-06 21:50:41.636378] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:28:21.265 [2024-12-06 21:50:41.636594] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:22.635 21:50:42 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:28:22.635 21:50:42 -- common/autotest_common.sh@862 -- # return 0 00:28:22.635 21:50:42 -- bdev/blockdev.sh@616 -- # rpc_cmd load_config -j /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:28:22.635 21:50:42 -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:22.635 21:50:42 -- common/autotest_common.sh@10 -- # set +x 00:28:22.635 Some configs were skipped because the RPC state that can call them passed over. 
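With the config loaded, the test pulls each GPT partition bdev by its unique partition GUID and asserts the GUID round-trips through both the alias list and driver_specific.gpt. The pattern, reduced to one partition, with socket and GUID as in this run:

sock=/var/tmp/spdk.sock
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
uuid=6f89f330-603b-4116-ac73-2ca8eae53030    # SPDK_TEST_first, see the dump below
bdev=$("$rpc" -s "$sock" bdev_get_bdevs -b "$uuid")
[[ $(jq -r 'length' <<<"$bdev") == 1 ]]
[[ $(jq -r '.[0].aliases[0]' <<<"$bdev") == "$uuid" ]]
[[ $(jq -r '.[0].driver_specific.gpt.unique_partition_guid' <<<"$bdev") == "$uuid" ]]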
00:28:22.635 21:50:43 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:22.635 21:50:43 -- bdev/blockdev.sh@617 -- # rpc_cmd bdev_wait_for_examine 00:28:22.635 21:50:43 -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:22.635 21:50:43 -- common/autotest_common.sh@10 -- # set +x 00:28:22.635 21:50:43 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:22.635 21:50:43 -- bdev/blockdev.sh@619 -- # rpc_cmd bdev_get_bdevs -b 6f89f330-603b-4116-ac73-2ca8eae53030 00:28:22.635 21:50:43 -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:22.635 21:50:43 -- common/autotest_common.sh@10 -- # set +x 00:28:22.635 21:50:43 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:22.635 21:50:43 -- bdev/blockdev.sh@619 -- # bdev='[ 00:28:22.635 { 00:28:22.635 "name": "Nvme0n1p1", 00:28:22.635 "aliases": [ 00:28:22.635 "6f89f330-603b-4116-ac73-2ca8eae53030" 00:28:22.635 ], 00:28:22.635 "product_name": "GPT Disk", 00:28:22.635 "block_size": 4096, 00:28:22.635 "num_blocks": 655104, 00:28:22.635 "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030", 00:28:22.635 "assigned_rate_limits": { 00:28:22.635 "rw_ios_per_sec": 0, 00:28:22.635 "rw_mbytes_per_sec": 0, 00:28:22.635 "r_mbytes_per_sec": 0, 00:28:22.635 "w_mbytes_per_sec": 0 00:28:22.635 }, 00:28:22.635 "claimed": false, 00:28:22.635 "zoned": false, 00:28:22.635 "supported_io_types": { 00:28:22.635 "read": true, 00:28:22.635 "write": true, 00:28:22.635 "unmap": true, 00:28:22.635 "write_zeroes": true, 00:28:22.635 "flush": true, 00:28:22.635 "reset": true, 00:28:22.635 "compare": true, 00:28:22.635 "compare_and_write": false, 00:28:22.635 "abort": true, 00:28:22.635 "nvme_admin": false, 00:28:22.635 "nvme_io": false 00:28:22.635 }, 00:28:22.635 "driver_specific": { 00:28:22.635 "gpt": { 00:28:22.635 "base_bdev": "Nvme0n1", 00:28:22.635 "offset_blocks": 256, 00:28:22.635 "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b", 00:28:22.635 "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030", 00:28:22.635 "partition_name": "SPDK_TEST_first" 00:28:22.635 } 00:28:22.635 } 00:28:22.635 } 00:28:22.635 ]' 00:28:22.635 21:50:43 -- bdev/blockdev.sh@620 -- # jq -r length 00:28:22.635 21:50:43 -- bdev/blockdev.sh@620 -- # [[ 1 == \1 ]] 00:28:22.635 21:50:43 -- bdev/blockdev.sh@621 -- # jq -r '.[0].aliases[0]' 00:28:22.635 21:50:43 -- bdev/blockdev.sh@621 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:28:22.635 21:50:43 -- bdev/blockdev.sh@622 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:28:22.635 21:50:43 -- bdev/blockdev.sh@622 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:28:22.635 21:50:43 -- bdev/blockdev.sh@624 -- # rpc_cmd bdev_get_bdevs -b abf1734f-66e5-4c0f-aa29-4021d4d307df 00:28:22.635 21:50:43 -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:22.635 21:50:43 -- common/autotest_common.sh@10 -- # set +x 00:28:22.635 21:50:43 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:22.635 21:50:43 -- bdev/blockdev.sh@624 -- # bdev='[ 00:28:22.635 { 00:28:22.635 "name": "Nvme0n1p2", 00:28:22.635 "aliases": [ 00:28:22.635 "abf1734f-66e5-4c0f-aa29-4021d4d307df" 00:28:22.635 ], 00:28:22.635 "product_name": "GPT Disk", 00:28:22.635 "block_size": 4096, 00:28:22.635 "num_blocks": 655103, 00:28:22.635 "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 00:28:22.635 "assigned_rate_limits": { 00:28:22.635 "rw_ios_per_sec": 0, 00:28:22.635 
"rw_mbytes_per_sec": 0, 00:28:22.635 "r_mbytes_per_sec": 0, 00:28:22.635 "w_mbytes_per_sec": 0 00:28:22.635 }, 00:28:22.635 "claimed": false, 00:28:22.635 "zoned": false, 00:28:22.635 "supported_io_types": { 00:28:22.635 "read": true, 00:28:22.635 "write": true, 00:28:22.635 "unmap": true, 00:28:22.635 "write_zeroes": true, 00:28:22.635 "flush": true, 00:28:22.635 "reset": true, 00:28:22.635 "compare": true, 00:28:22.635 "compare_and_write": false, 00:28:22.635 "abort": true, 00:28:22.635 "nvme_admin": false, 00:28:22.635 "nvme_io": false 00:28:22.635 }, 00:28:22.635 "driver_specific": { 00:28:22.635 "gpt": { 00:28:22.635 "base_bdev": "Nvme0n1", 00:28:22.635 "offset_blocks": 655360, 00:28:22.635 "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c", 00:28:22.635 "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 00:28:22.635 "partition_name": "SPDK_TEST_second" 00:28:22.635 } 00:28:22.635 } 00:28:22.635 } 00:28:22.635 ]' 00:28:22.635 21:50:43 -- bdev/blockdev.sh@625 -- # jq -r length 00:28:22.635 21:50:43 -- bdev/blockdev.sh@625 -- # [[ 1 == \1 ]] 00:28:22.635 21:50:43 -- bdev/blockdev.sh@626 -- # jq -r '.[0].aliases[0]' 00:28:22.635 21:50:43 -- bdev/blockdev.sh@626 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:28:22.635 21:50:43 -- bdev/blockdev.sh@627 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:28:22.635 21:50:43 -- bdev/blockdev.sh@627 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:28:22.635 21:50:43 -- bdev/blockdev.sh@629 -- # killprocess 92365 00:28:22.635 21:50:43 -- common/autotest_common.sh@936 -- # '[' -z 92365 ']' 00:28:22.635 21:50:43 -- common/autotest_common.sh@940 -- # kill -0 92365 00:28:22.635 21:50:43 -- common/autotest_common.sh@941 -- # uname 00:28:22.635 21:50:43 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:28:22.635 21:50:43 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 92365 00:28:22.897 21:50:43 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:28:22.897 21:50:43 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:28:22.897 killing process with pid 92365 00:28:22.897 21:50:43 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 92365' 00:28:22.897 21:50:43 -- common/autotest_common.sh@955 -- # kill 92365 00:28:22.897 21:50:43 -- common/autotest_common.sh@960 -- # wait 92365 00:28:24.797 00:28:24.797 real 0m3.621s 00:28:24.797 user 0m3.864s 00:28:24.797 sys 0m0.442s 00:28:24.797 21:50:44 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:28:24.797 21:50:44 -- common/autotest_common.sh@10 -- # set +x 00:28:24.797 ************************************ 00:28:24.797 END TEST bdev_gpt_uuid 00:28:24.797 ************************************ 00:28:24.797 21:50:44 -- bdev/blockdev.sh@796 -- # [[ gpt == crypto_sw ]] 00:28:24.797 21:50:44 -- bdev/blockdev.sh@808 -- # trap - SIGINT SIGTERM EXIT 00:28:24.797 21:50:44 -- bdev/blockdev.sh@809 -- # cleanup 00:28:24.797 21:50:44 -- bdev/blockdev.sh@21 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:28:24.797 21:50:44 -- bdev/blockdev.sh@22 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:28:24.797 21:50:44 -- bdev/blockdev.sh@24 -- # [[ gpt == rbd ]] 00:28:24.797 21:50:44 -- bdev/blockdev.sh@28 -- # [[ gpt == daos ]] 00:28:24.797 21:50:44 -- bdev/blockdev.sh@32 -- # [[ gpt = \g\p\t ]] 00:28:24.797 21:50:44 -- 
bdev/blockdev.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:28:24.797 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15,mount@vda:vda16, so not binding PCI dev 00:28:24.797 Waiting for block devices as requested 00:28:24.797 0000:00:06.0 (1b36 0010): uio_pci_generic -> nvme 00:28:24.797 21:50:45 -- bdev/blockdev.sh@34 -- # [[ -b /dev/nvme0n1 ]] 00:28:24.797 21:50:45 -- bdev/blockdev.sh@35 -- # wipefs --all /dev/nvme0n1 00:28:25.361 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:28:25.361 /dev/nvme0n1: 8 bytes were erased at offset 0x13ffff000 (gpt): 45 46 49 20 50 41 52 54 00:28:25.361 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:28:25.361 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:28:25.361 21:50:45 -- bdev/blockdev.sh@38 -- # [[ gpt == xnvme ]] 00:28:25.361 00:28:25.361 real 0m43.055s 00:28:25.361 user 1m2.323s 00:28:25.361 sys 0m5.627s 00:28:25.361 21:50:45 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:28:25.361 21:50:45 -- common/autotest_common.sh@10 -- # set +x 00:28:25.361 ************************************ 00:28:25.361 END TEST blockdev_nvme_gpt 00:28:25.361 ************************************ 00:28:25.361 21:50:45 -- spdk/autotest.sh@209 -- # run_test nvme /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:28:25.361 21:50:45 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:28:25.361 21:50:45 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:28:25.361 21:50:45 -- common/autotest_common.sh@10 -- # set +x 00:28:25.361 ************************************ 00:28:25.361 START TEST nvme 00:28:25.361 ************************************ 00:28:25.361 21:50:45 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:28:25.361 * Looking for test storage... 00:28:25.361 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:28:25.361 21:50:45 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:28:25.361 21:50:45 -- common/autotest_common.sh@1690 -- # lcov --version 00:28:25.361 21:50:45 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:28:25.361 21:50:45 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:28:25.361 21:50:45 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:28:25.361 21:50:45 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:28:25.361 21:50:45 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:28:25.361 21:50:45 -- scripts/common.sh@335 -- # IFS=.-: 00:28:25.361 21:50:45 -- scripts/common.sh@335 -- # read -ra ver1 00:28:25.361 21:50:45 -- scripts/common.sh@336 -- # IFS=.-: 00:28:25.361 21:50:45 -- scripts/common.sh@336 -- # read -ra ver2 00:28:25.361 21:50:45 -- scripts/common.sh@337 -- # local 'op=<' 00:28:25.361 21:50:45 -- scripts/common.sh@339 -- # ver1_l=2 00:28:25.361 21:50:45 -- scripts/common.sh@340 -- # ver2_l=1 00:28:25.361 21:50:45 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:28:25.361 21:50:45 -- scripts/common.sh@343 -- # case "$op" in 00:28:25.361 21:50:45 -- scripts/common.sh@344 -- # : 1 00:28:25.361 21:50:45 -- scripts/common.sh@363 -- # (( v = 0 )) 00:28:25.361 21:50:45 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:28:25.361 21:50:45 -- scripts/common.sh@364 -- # decimal 1 00:28:25.361 21:50:45 -- scripts/common.sh@352 -- # local d=1 00:28:25.361 21:50:45 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:25.361 21:50:45 -- scripts/common.sh@354 -- # echo 1 00:28:25.361 21:50:45 -- scripts/common.sh@364 -- # ver1[v]=1 00:28:25.361 21:50:45 -- scripts/common.sh@365 -- # decimal 2 00:28:25.361 21:50:45 -- scripts/common.sh@352 -- # local d=2 00:28:25.361 21:50:45 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:25.361 21:50:45 -- scripts/common.sh@354 -- # echo 2 00:28:25.361 21:50:45 -- scripts/common.sh@365 -- # ver2[v]=2 00:28:25.361 21:50:45 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:28:25.361 21:50:45 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:28:25.361 21:50:45 -- scripts/common.sh@367 -- # return 0 00:28:25.361 21:50:45 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:25.361 21:50:45 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:28:25.361 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:25.361 --rc genhtml_branch_coverage=1 00:28:25.361 --rc genhtml_function_coverage=1 00:28:25.361 --rc genhtml_legend=1 00:28:25.361 --rc geninfo_all_blocks=1 00:28:25.361 --rc geninfo_unexecuted_blocks=1 00:28:25.361 00:28:25.361 ' 00:28:25.361 21:50:45 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:28:25.361 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:25.361 --rc genhtml_branch_coverage=1 00:28:25.361 --rc genhtml_function_coverage=1 00:28:25.361 --rc genhtml_legend=1 00:28:25.361 --rc geninfo_all_blocks=1 00:28:25.361 --rc geninfo_unexecuted_blocks=1 00:28:25.361 00:28:25.361 ' 00:28:25.361 21:50:45 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:28:25.361 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:25.361 --rc genhtml_branch_coverage=1 00:28:25.361 --rc genhtml_function_coverage=1 00:28:25.361 --rc genhtml_legend=1 00:28:25.361 --rc geninfo_all_blocks=1 00:28:25.361 --rc geninfo_unexecuted_blocks=1 00:28:25.361 00:28:25.361 ' 00:28:25.361 21:50:45 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:28:25.361 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:25.361 --rc genhtml_branch_coverage=1 00:28:25.361 --rc genhtml_function_coverage=1 00:28:25.361 --rc genhtml_legend=1 00:28:25.361 --rc geninfo_all_blocks=1 00:28:25.361 --rc geninfo_unexecuted_blocks=1 00:28:25.361 00:28:25.361 ' 00:28:25.361 21:50:45 -- nvme/nvme.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:28:25.926 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15,mount@vda:vda16, so not binding PCI dev 00:28:25.926 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:28:26.492 21:50:46 -- nvme/nvme.sh@79 -- # uname 00:28:26.492 21:50:46 -- nvme/nvme.sh@79 -- # '[' Linux = Linux ']' 00:28:26.492 21:50:46 -- nvme/nvme.sh@80 -- # trap 'kill_stub -9; exit 1' SIGINT SIGTERM EXIT 00:28:26.492 21:50:46 -- nvme/nvme.sh@81 -- # start_stub '-s 4096 -i 0 -m 0xE' 00:28:26.492 21:50:46 -- common/autotest_common.sh@1068 -- # _start_stub '-s 4096 -i 0 -m 0xE' 00:28:26.492 21:50:46 -- common/autotest_common.sh@1054 -- # _randomize_va_space=2 00:28:26.492 21:50:46 -- common/autotest_common.sh@1055 -- # echo 0 00:28:26.492 21:50:46 -- common/autotest_common.sh@1057 -- # stubpid=92742 00:28:26.492 Waiting for stub to ready for secondary processes... 
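The stub just launched is a bare primary SPDK process: it claims hugepages once (my reading of the flags: -s 4096 MB of memory, shm id 0, core mask 0xE) so that each short-lived test binary can attach as a secondary process instead of re-initializing DPDK. The readiness handshake traced here amounts to:

/home/vagrant/spdk_repo/spdk/test/app/stub/stub -s 4096 -i 0 -m 0xE &
stubpid=$!
while [ ! -e /var/run/spdk_stub0 ]; do       # stub creates this when ready
    [[ -e /proc/$stubpid ]] || exit 1        # bail if the stub died first
    sleep 1s
done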
00:28:26.492 21:50:46 -- common/autotest_common.sh@1058 -- # echo Waiting for stub to ready for secondary processes... 00:28:26.492 21:50:46 -- common/autotest_common.sh@1059 -- # '[' -e /var/run/spdk_stub0 ']' 00:28:26.492 21:50:46 -- common/autotest_common.sh@1061 -- # [[ -e /proc/92742 ]] 00:28:26.492 21:50:46 -- common/autotest_common.sh@1062 -- # sleep 1s 00:28:26.492 21:50:46 -- common/autotest_common.sh@1056 -- # /home/vagrant/spdk_repo/spdk/test/app/stub/stub -s 4096 -i 0 -m 0xE 00:28:26.492 [2024-12-06 21:50:46.903072] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:28:26.492 [2024-12-06 21:50:46.903230] [ DPDK EAL parameters: stub -c 0xE -m 4096 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:27.425 [2024-12-06 21:50:47.672956] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:28:27.425 21:50:47 -- common/autotest_common.sh@1059 -- # '[' -e /var/run/spdk_stub0 ']' 00:28:27.425 21:50:47 -- common/autotest_common.sh@1061 -- # [[ -e /proc/92742 ]] 00:28:27.425 21:50:47 -- common/autotest_common.sh@1062 -- # sleep 1s 00:28:27.425 [2024-12-06 21:50:47.888093] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:28:27.425 [2024-12-06 21:50:47.888249] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:28:27.425 [2024-12-06 21:50:47.888273] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:28:27.425 [2024-12-06 21:50:47.902618] nvme_cuse.c:1142:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:28:27.425 [2024-12-06 21:50:47.915024] nvme_cuse.c: 910:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0 created 00:28:27.425 [2024-12-06 21:50:47.915266] nvme_cuse.c: 910:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0n1 created 00:28:28.797 21:50:48 -- common/autotest_common.sh@1059 -- # '[' -e /var/run/spdk_stub0 ']' 00:28:28.797 done. 00:28:28.797 21:50:48 -- common/autotest_common.sh@1064 -- # echo done. 
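nvme_reset, which starts below, is a single run of the reset example app: it keeps a 64-deep 4 KiB write workload in flight for 5 seconds and, per its name, exercises controller resets under that load. In this run it skips the QEMU controller and exits early without running I/O. The invocation, with flags mirroring bdevperf's:

/home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5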
00:28:28.797 21:50:48 -- nvme/nvme.sh@84 -- # run_test nvme_reset /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:28:28.797 21:50:48 -- common/autotest_common.sh@1087 -- # '[' 10 -le 1 ']' 00:28:28.797 21:50:48 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:28:28.797 21:50:48 -- common/autotest_common.sh@10 -- # set +x 00:28:28.797 ************************************ 00:28:28.797 START TEST nvme_reset 00:28:28.797 ************************************ 00:28:28.797 21:50:48 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:28:28.797 Initializing NVMe Controllers 00:28:28.797 Skipping QEMU NVMe SSD at 0000:00:06.0 00:28:28.797 No NVMe controller found, /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset exiting 00:28:28.797 00:28:28.797 real 0m0.258s 00:28:28.797 user 0m0.096s 00:28:28.797 sys 0m0.122s 00:28:28.797 21:50:49 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:28:28.797 21:50:49 -- common/autotest_common.sh@10 -- # set +x 00:28:28.797 ************************************ 00:28:28.797 END TEST nvme_reset 00:28:28.797 ************************************ 00:28:28.797 21:50:49 -- nvme/nvme.sh@85 -- # run_test nvme_identify nvme_identify 00:28:28.797 21:50:49 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:28:28.797 21:50:49 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:28:28.797 21:50:49 -- common/autotest_common.sh@10 -- # set +x 00:28:28.797 ************************************ 00:28:28.797 START TEST nvme_identify 00:28:28.797 ************************************ 00:28:28.797 21:50:49 -- common/autotest_common.sh@1114 -- # nvme_identify 00:28:28.797 21:50:49 -- nvme/nvme.sh@12 -- # bdfs=() 00:28:28.797 21:50:49 -- nvme/nvme.sh@12 -- # local bdfs bdf 00:28:28.797 21:50:49 -- nvme/nvme.sh@13 -- # bdfs=($(get_nvme_bdfs)) 00:28:28.797 21:50:49 -- nvme/nvme.sh@13 -- # get_nvme_bdfs 00:28:28.797 21:50:49 -- common/autotest_common.sh@1508 -- # bdfs=() 00:28:28.797 21:50:49 -- common/autotest_common.sh@1508 -- # local bdfs 00:28:28.797 21:50:49 -- common/autotest_common.sh@1509 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:28:28.797 21:50:49 -- common/autotest_common.sh@1509 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:28:28.797 21:50:49 -- common/autotest_common.sh@1509 -- # jq -r '.config[].params.traddr' 00:28:28.797 21:50:49 -- common/autotest_common.sh@1510 -- # (( 1 == 0 )) 00:28:28.797 21:50:49 -- common/autotest_common.sh@1514 -- # printf '%s\n' 0000:00:06.0 00:28:28.797 21:50:49 -- nvme/nvme.sh@14 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -i 0 00:28:29.055 [2024-12-06 21:50:49.495103] nvme_ctrlr.c:3472:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:06.0] process 92776 terminated unexpected 00:28:29.055 ===================================================== 00:28:29.055 NVMe Controller at 0000:00:06.0 [1b36:0010] 00:28:29.055 ===================================================== 00:28:29.055 Controller Capabilities/Features 00:28:29.055 ================================ 00:28:29.055 Vendor ID: 1b36 00:28:29.055 Subsystem Vendor ID: 1af4 00:28:29.055 Serial Number: 12340 00:28:29.055 Model Number: QEMU NVMe Ctrl 00:28:29.055 Firmware Version: 8.0.0 00:28:29.055 Recommended Arb Burst: 6 00:28:29.055 IEEE OUI Identifier: 00 54 52 00:28:29.055 Multi-path I/O 00:28:29.055 May have multiple subsystem ports: No 00:28:29.055 May have multiple controllers: No 00:28:29.055 
Associated with SR-IOV VF: No 00:28:29.055 Max Data Transfer Size: 524288 00:28:29.055 Max Number of Namespaces: 256 00:28:29.055 Max Number of I/O Queues: 64 00:28:29.055 NVMe Specification Version (VS): 1.4 00:28:29.055 NVMe Specification Version (Identify): 1.4 00:28:29.055 Maximum Queue Entries: 2048 00:28:29.055 Contiguous Queues Required: Yes 00:28:29.055 Arbitration Mechanisms Supported 00:28:29.055 Weighted Round Robin: Not Supported 00:28:29.055 Vendor Specific: Not Supported 00:28:29.055 Reset Timeout: 7500 ms 00:28:29.055 Doorbell Stride: 4 bytes 00:28:29.055 NVM Subsystem Reset: Not Supported 00:28:29.055 Command Sets Supported 00:28:29.055 NVM Command Set: Supported 00:28:29.055 Boot Partition: Not Supported 00:28:29.055 Memory Page Size Minimum: 4096 bytes 00:28:29.055 Memory Page Size Maximum: 65536 bytes 00:28:29.055 Persistent Memory Region: Not Supported 00:28:29.055 Optional Asynchronous Events Supported 00:28:29.055 Namespace Attribute Notices: Supported 00:28:29.055 Firmware Activation Notices: Not Supported 00:28:29.055 ANA Change Notices: Not Supported 00:28:29.055 PLE Aggregate Log Change Notices: Not Supported 00:28:29.055 LBA Status Info Alert Notices: Not Supported 00:28:29.055 EGE Aggregate Log Change Notices: Not Supported 00:28:29.055 Normal NVM Subsystem Shutdown event: Not Supported 00:28:29.055 Zone Descriptor Change Notices: Not Supported 00:28:29.055 Discovery Log Change Notices: Not Supported 00:28:29.055 Controller Attributes 00:28:29.055 128-bit Host Identifier: Not Supported 00:28:29.055 Non-Operational Permissive Mode: Not Supported 00:28:29.055 NVM Sets: Not Supported 00:28:29.055 Read Recovery Levels: Not Supported 00:28:29.055 Endurance Groups: Not Supported 00:28:29.055 Predictable Latency Mode: Not Supported 00:28:29.055 Traffic Based Keep ALive: Not Supported 00:28:29.055 Namespace Granularity: Not Supported 00:28:29.055 SQ Associations: Not Supported 00:28:29.055 UUID List: Not Supported 00:28:29.055 Multi-Domain Subsystem: Not Supported 00:28:29.055 Fixed Capacity Management: Not Supported 00:28:29.055 Variable Capacity Management: Not Supported 00:28:29.055 Delete Endurance Group: Not Supported 00:28:29.055 Delete NVM Set: Not Supported 00:28:29.055 Extended LBA Formats Supported: Supported 00:28:29.055 Flexible Data Placement Supported: Not Supported 00:28:29.055 00:28:29.055 Controller Memory Buffer Support 00:28:29.055 ================================ 00:28:29.055 Supported: No 00:28:29.055 00:28:29.055 Persistent Memory Region Support 00:28:29.055 ================================ 00:28:29.055 Supported: No 00:28:29.055 00:28:29.055 Admin Command Set Attributes 00:28:29.055 ============================ 00:28:29.055 Security Send/Receive: Not Supported 00:28:29.055 Format NVM: Supported 00:28:29.055 Firmware Activate/Download: Not Supported 00:28:29.055 Namespace Management: Supported 00:28:29.055 Device Self-Test: Not Supported 00:28:29.055 Directives: Supported 00:28:29.055 NVMe-MI: Not Supported 00:28:29.055 Virtualization Management: Not Supported 00:28:29.055 Doorbell Buffer Config: Supported 00:28:29.055 Get LBA Status Capability: Not Supported 00:28:29.055 Command & Feature Lockdown Capability: Not Supported 00:28:29.055 Abort Command Limit: 4 00:28:29.055 Async Event Request Limit: 4 00:28:29.055 Number of Firmware Slots: N/A 00:28:29.055 Firmware Slot 1 Read-Only: N/A 00:28:29.055 Firmware Activation Without Reset: N/A 00:28:29.055 Multiple Update Detection Support: N/A 00:28:29.055 Firmware Update Granularity: No Information 
Provided 00:28:29.055 Per-Namespace SMART Log: Yes 00:28:29.055 Asymmetric Namespace Access Log Page: Not Supported 00:28:29.055 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:28:29.055 Command Effects Log Page: Supported 00:28:29.055 Get Log Page Extended Data: Supported 00:28:29.055 Telemetry Log Pages: Not Supported 00:28:29.055 Persistent Event Log Pages: Not Supported 00:28:29.055 Supported Log Pages Log Page: May Support 00:28:29.055 Commands Supported & Effects Log Page: Not Supported 00:28:29.055 Feature Identifiers & Effects Log Page:May Support 00:28:29.055 NVMe-MI Commands & Effects Log Page: May Support 00:28:29.055 Data Area 4 for Telemetry Log: Not Supported 00:28:29.055 Error Log Page Entries Supported: 1 00:28:29.055 Keep Alive: Not Supported 00:28:29.055 00:28:29.055 NVM Command Set Attributes 00:28:29.055 ========================== 00:28:29.055 Submission Queue Entry Size 00:28:29.055 Max: 64 00:28:29.055 Min: 64 00:28:29.055 Completion Queue Entry Size 00:28:29.055 Max: 16 00:28:29.055 Min: 16 00:28:29.055 Number of Namespaces: 256 00:28:29.055 Compare Command: Supported 00:28:29.055 Write Uncorrectable Command: Not Supported 00:28:29.055 Dataset Management Command: Supported 00:28:29.055 Write Zeroes Command: Supported 00:28:29.055 Set Features Save Field: Supported 00:28:29.055 Reservations: Not Supported 00:28:29.055 Timestamp: Supported 00:28:29.055 Copy: Supported 00:28:29.055 Volatile Write Cache: Present 00:28:29.055 Atomic Write Unit (Normal): 1 00:28:29.055 Atomic Write Unit (PFail): 1 00:28:29.055 Atomic Compare & Write Unit: 1 00:28:29.055 Fused Compare & Write: Not Supported 00:28:29.055 Scatter-Gather List 00:28:29.055 SGL Command Set: Supported 00:28:29.055 SGL Keyed: Not Supported 00:28:29.055 SGL Bit Bucket Descriptor: Not Supported 00:28:29.055 SGL Metadata Pointer: Not Supported 00:28:29.055 Oversized SGL: Not Supported 00:28:29.055 SGL Metadata Address: Not Supported 00:28:29.055 SGL Offset: Not Supported 00:28:29.055 Transport SGL Data Block: Not Supported 00:28:29.055 Replay Protected Memory Block: Not Supported 00:28:29.055 00:28:29.055 Firmware Slot Information 00:28:29.055 ========================= 00:28:29.055 Active slot: 1 00:28:29.055 Slot 1 Firmware Revision: 1.0 00:28:29.055 00:28:29.055 00:28:29.055 Commands Supported and Effects 00:28:29.055 ============================== 00:28:29.055 Admin Commands 00:28:29.055 -------------- 00:28:29.055 Delete I/O Submission Queue (00h): Supported 00:28:29.055 Create I/O Submission Queue (01h): Supported 00:28:29.056 Get Log Page (02h): Supported 00:28:29.056 Delete I/O Completion Queue (04h): Supported 00:28:29.056 Create I/O Completion Queue (05h): Supported 00:28:29.056 Identify (06h): Supported 00:28:29.056 Abort (08h): Supported 00:28:29.056 Set Features (09h): Supported 00:28:29.056 Get Features (0Ah): Supported 00:28:29.056 Asynchronous Event Request (0Ch): Supported 00:28:29.056 Namespace Attachment (15h): Supported NS-Inventory-Change 00:28:29.056 Directive Send (19h): Supported 00:28:29.056 Directive Receive (1Ah): Supported 00:28:29.056 Virtualization Management (1Ch): Supported 00:28:29.056 Doorbell Buffer Config (7Ch): Supported 00:28:29.056 Format NVM (80h): Supported LBA-Change 00:28:29.056 I/O Commands 00:28:29.056 ------------ 00:28:29.056 Flush (00h): Supported LBA-Change 00:28:29.056 Write (01h): Supported LBA-Change 00:28:29.056 Read (02h): Supported 00:28:29.056 Compare (05h): Supported 00:28:29.056 Write Zeroes (08h): Supported LBA-Change 00:28:29.056 Dataset Management (09h): 
Supported LBA-Change 00:28:29.056 Unknown (0Ch): Supported 00:28:29.056 Unknown (12h): Supported 00:28:29.056 Copy (19h): Supported LBA-Change 00:28:29.056 Unknown (1Dh): Supported LBA-Change 00:28:29.056 00:28:29.056 Error Log 00:28:29.056 ========= 00:28:29.056 00:28:29.056 Arbitration 00:28:29.056 =========== 00:28:29.056 Arbitration Burst: no limit 00:28:29.056 00:28:29.056 Power Management 00:28:29.056 ================ 00:28:29.056 Number of Power States: 1 00:28:29.056 Current Power State: Power State #0 00:28:29.056 Power State #0: 00:28:29.056 Max Power: 25.00 W 00:28:29.056 Non-Operational State: Operational 00:28:29.056 Entry Latency: 16 microseconds 00:28:29.056 Exit Latency: 4 microseconds 00:28:29.056 Relative Read Throughput: 0 00:28:29.056 Relative Read Latency: 0 00:28:29.056 Relative Write Throughput: 0 00:28:29.056 Relative Write Latency: 0 00:28:29.056 Idle Power: Not Reported 00:28:29.056 Active Power: Not Reported 00:28:29.056 Non-Operational Permissive Mode: Not Supported 00:28:29.056 00:28:29.056 Health Information 00:28:29.056 ================== 00:28:29.056 Critical Warnings: 00:28:29.056 Available Spare Space: OK 00:28:29.056 Temperature: OK 00:28:29.056 Device Reliability: OK 00:28:29.056 Read Only: No 00:28:29.056 Volatile Memory Backup: OK 00:28:29.056 Current Temperature: 323 Kelvin (50 Celsius) 00:28:29.056 Temperature Threshold: 343 Kelvin (70 Celsius) 00:28:29.056 Available Spare: 0% 00:28:29.056 Available Spare Threshold: 0% 00:28:29.056 Life Percentage Used: 0% 00:28:29.056 Data Units Read: 8105 00:28:29.056 Data Units Written: 3941 00:28:29.056 Host Read Commands: 375885 00:28:29.056 Host Write Commands: 202950 00:28:29.056 Controller Busy Time: 0 minutes 00:28:29.056 Power Cycles: 0 00:28:29.056 Power On Hours: 0 hours 00:28:29.056 Unsafe Shutdowns: 0 00:28:29.056 Unrecoverable Media Errors: 0 00:28:29.056 Lifetime Error Log Entries: 0 00:28:29.056 Warning Temperature Time: 0 minutes 00:28:29.056 Critical Temperature Time: 0 minutes 00:28:29.056 00:28:29.056 Number of Queues 00:28:29.056 ================ 00:28:29.056 Number of I/O Submission Queues: 64 00:28:29.056 Number of I/O Completion Queues: 64 00:28:29.056 00:28:29.056 ZNS Specific Controller Data 00:28:29.056 ============================ 00:28:29.056 Zone Append Size Limit: 0 00:28:29.056 00:28:29.056 00:28:29.056 Active Namespaces 00:28:29.056 ================= 00:28:29.056 Namespace ID:1 00:28:29.056 Error Recovery Timeout: Unlimited 00:28:29.056 Command Set Identifier: NVM (00h) 00:28:29.056 Deallocate: Supported 00:28:29.056 Deallocated/Unwritten Error: Supported 00:28:29.056 Deallocated Read Value: All 0x00 00:28:29.056 Deallocate in Write Zeroes: Not Supported 00:28:29.056 Deallocated Guard Field: 0xFFFF 00:28:29.056 Flush: Supported 00:28:29.056 Reservation: Not Supported 00:28:29.056 Namespace Sharing Capabilities: Private 00:28:29.056 Size (in LBAs): 1310720 (5GiB) 00:28:29.056 Capacity (in LBAs): 1310720 (5GiB) 00:28:29.056 Utilization (in LBAs): 1310720 (5GiB) 00:28:29.056 Thin Provisioning: Not Supported 00:28:29.056 Per-NS Atomic Units: No 00:28:29.056 Maximum Single Source Range Length: 128 00:28:29.056 Maximum Copy Length: 128 00:28:29.056 Maximum Source Range Count: 128 00:28:29.056 NGUID/EUI64 Never Reused: No 00:28:29.056 Namespace Write Protected: No 00:28:29.056 Number of LBA Formats: 8 00:28:29.056 Current LBA Format: LBA Format #04 00:28:29.056 LBA Format #00: Data Size: 512 Metadata Size: 0 00:28:29.056 LBA Format #01: Data Size: 512 Metadata Size: 8 00:28:29.056 LBA 
Format #02: Data Size: 512 Metadata Size: 16 00:28:29.056 LBA Format #03: Data Size: 512 Metadata Size: 64 00:28:29.056 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:28:29.056 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:28:29.056 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:28:29.056 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:28:29.056 00:28:29.056 21:50:49 -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:28:29.056 21:50:49 -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:06.0' -i 0 00:28:29.315 ===================================================== 00:28:29.315 NVMe Controller at 0000:00:06.0 [1b36:0010] 00:28:29.315 ===================================================== 00:28:29.315 Controller Capabilities/Features 00:28:29.315 ================================ 00:28:29.315 Vendor ID: 1b36 00:28:29.315 Subsystem Vendor ID: 1af4 00:28:29.315 Serial Number: 12340 00:28:29.315 Model Number: QEMU NVMe Ctrl 00:28:29.315 Firmware Version: 8.0.0 00:28:29.315 Recommended Arb Burst: 6 00:28:29.315 IEEE OUI Identifier: 00 54 52 00:28:29.315 Multi-path I/O 00:28:29.315 May have multiple subsystem ports: No 00:28:29.315 May have multiple controllers: No 00:28:29.315 Associated with SR-IOV VF: No 00:28:29.315 Max Data Transfer Size: 524288 00:28:29.315 Max Number of Namespaces: 256 00:28:29.315 Max Number of I/O Queues: 64 00:28:29.315 NVMe Specification Version (VS): 1.4 00:28:29.315 NVMe Specification Version (Identify): 1.4 00:28:29.315 Maximum Queue Entries: 2048 00:28:29.315 Contiguous Queues Required: Yes 00:28:29.315 Arbitration Mechanisms Supported 00:28:29.315 Weighted Round Robin: Not Supported 00:28:29.315 Vendor Specific: Not Supported 00:28:29.315 Reset Timeout: 7500 ms 00:28:29.315 Doorbell Stride: 4 bytes 00:28:29.315 NVM Subsystem Reset: Not Supported 00:28:29.315 Command Sets Supported 00:28:29.315 NVM Command Set: Supported 00:28:29.315 Boot Partition: Not Supported 00:28:29.315 Memory Page Size Minimum: 4096 bytes 00:28:29.315 Memory Page Size Maximum: 65536 bytes 00:28:29.315 Persistent Memory Region: Not Supported 00:28:29.315 Optional Asynchronous Events Supported 00:28:29.315 Namespace Attribute Notices: Supported 00:28:29.315 Firmware Activation Notices: Not Supported 00:28:29.315 ANA Change Notices: Not Supported 00:28:29.315 PLE Aggregate Log Change Notices: Not Supported 00:28:29.315 LBA Status Info Alert Notices: Not Supported 00:28:29.315 EGE Aggregate Log Change Notices: Not Supported 00:28:29.315 Normal NVM Subsystem Shutdown event: Not Supported 00:28:29.315 Zone Descriptor Change Notices: Not Supported 00:28:29.315 Discovery Log Change Notices: Not Supported 00:28:29.315 Controller Attributes 00:28:29.315 128-bit Host Identifier: Not Supported 00:28:29.315 Non-Operational Permissive Mode: Not Supported 00:28:29.315 NVM Sets: Not Supported 00:28:29.315 Read Recovery Levels: Not Supported 00:28:29.315 Endurance Groups: Not Supported 00:28:29.315 Predictable Latency Mode: Not Supported 00:28:29.315 Traffic Based Keep ALive: Not Supported 00:28:29.315 Namespace Granularity: Not Supported 00:28:29.315 SQ Associations: Not Supported 00:28:29.315 UUID List: Not Supported 00:28:29.315 Multi-Domain Subsystem: Not Supported 00:28:29.315 Fixed Capacity Management: Not Supported 00:28:29.315 Variable Capacity Management: Not Supported 00:28:29.315 Delete Endurance Group: Not Supported 00:28:29.315 Delete NVM Set: Not Supported 00:28:29.315 Extended LBA Formats Supported: Supported 
00:28:29.315 Flexible Data Placement Supported: Not Supported 00:28:29.315 00:28:29.315 Controller Memory Buffer Support 00:28:29.315 ================================ 00:28:29.315 Supported: No 00:28:29.315 00:28:29.315 Persistent Memory Region Support 00:28:29.315 ================================ 00:28:29.315 Supported: No 00:28:29.315 00:28:29.315 Admin Command Set Attributes 00:28:29.315 ============================ 00:28:29.315 Security Send/Receive: Not Supported 00:28:29.315 Format NVM: Supported 00:28:29.315 Firmware Activate/Download: Not Supported 00:28:29.315 Namespace Management: Supported 00:28:29.315 Device Self-Test: Not Supported 00:28:29.315 Directives: Supported 00:28:29.315 NVMe-MI: Not Supported 00:28:29.315 Virtualization Management: Not Supported 00:28:29.315 Doorbell Buffer Config: Supported 00:28:29.315 Get LBA Status Capability: Not Supported 00:28:29.315 Command & Feature Lockdown Capability: Not Supported 00:28:29.315 Abort Command Limit: 4 00:28:29.315 Async Event Request Limit: 4 00:28:29.315 Number of Firmware Slots: N/A 00:28:29.315 Firmware Slot 1 Read-Only: N/A 00:28:29.315 Firmware Activation Without Reset: N/A 00:28:29.315 Multiple Update Detection Support: N/A 00:28:29.315 Firmware Update Granularity: No Information Provided 00:28:29.315 Per-Namespace SMART Log: Yes 00:28:29.315 Asymmetric Namespace Access Log Page: Not Supported 00:28:29.315 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:28:29.315 Command Effects Log Page: Supported 00:28:29.315 Get Log Page Extended Data: Supported 00:28:29.315 Telemetry Log Pages: Not Supported 00:28:29.315 Persistent Event Log Pages: Not Supported 00:28:29.315 Supported Log Pages Log Page: May Support 00:28:29.315 Commands Supported & Effects Log Page: Not Supported 00:28:29.315 Feature Identifiers & Effects Log Page:May Support 00:28:29.315 NVMe-MI Commands & Effects Log Page: May Support 00:28:29.315 Data Area 4 for Telemetry Log: Not Supported 00:28:29.315 Error Log Page Entries Supported: 1 00:28:29.315 Keep Alive: Not Supported 00:28:29.315 00:28:29.315 NVM Command Set Attributes 00:28:29.315 ========================== 00:28:29.315 Submission Queue Entry Size 00:28:29.315 Max: 64 00:28:29.315 Min: 64 00:28:29.315 Completion Queue Entry Size 00:28:29.315 Max: 16 00:28:29.315 Min: 16 00:28:29.315 Number of Namespaces: 256 00:28:29.315 Compare Command: Supported 00:28:29.315 Write Uncorrectable Command: Not Supported 00:28:29.315 Dataset Management Command: Supported 00:28:29.315 Write Zeroes Command: Supported 00:28:29.315 Set Features Save Field: Supported 00:28:29.315 Reservations: Not Supported 00:28:29.315 Timestamp: Supported 00:28:29.315 Copy: Supported 00:28:29.315 Volatile Write Cache: Present 00:28:29.315 Atomic Write Unit (Normal): 1 00:28:29.315 Atomic Write Unit (PFail): 1 00:28:29.315 Atomic Compare & Write Unit: 1 00:28:29.315 Fused Compare & Write: Not Supported 00:28:29.315 Scatter-Gather List 00:28:29.315 SGL Command Set: Supported 00:28:29.315 SGL Keyed: Not Supported 00:28:29.315 SGL Bit Bucket Descriptor: Not Supported 00:28:29.315 SGL Metadata Pointer: Not Supported 00:28:29.315 Oversized SGL: Not Supported 00:28:29.315 SGL Metadata Address: Not Supported 00:28:29.315 SGL Offset: Not Supported 00:28:29.315 Transport SGL Data Block: Not Supported 00:28:29.315 Replay Protected Memory Block: Not Supported 00:28:29.315 00:28:29.315 Firmware Slot Information 00:28:29.315 ========================= 00:28:29.315 Active slot: 1 00:28:29.315 Slot 1 Firmware Revision: 1.0 00:28:29.315 00:28:29.315 
00:28:29.315 Commands Supported and Effects 00:28:29.315 ============================== 00:28:29.315 Admin Commands 00:28:29.315 -------------- 00:28:29.315 Delete I/O Submission Queue (00h): Supported 00:28:29.315 Create I/O Submission Queue (01h): Supported 00:28:29.315 Get Log Page (02h): Supported 00:28:29.315 Delete I/O Completion Queue (04h): Supported 00:28:29.315 Create I/O Completion Queue (05h): Supported 00:28:29.315 Identify (06h): Supported 00:28:29.315 Abort (08h): Supported 00:28:29.315 Set Features (09h): Supported 00:28:29.315 Get Features (0Ah): Supported 00:28:29.315 Asynchronous Event Request (0Ch): Supported 00:28:29.315 Namespace Attachment (15h): Supported NS-Inventory-Change 00:28:29.315 Directive Send (19h): Supported 00:28:29.315 Directive Receive (1Ah): Supported 00:28:29.315 Virtualization Management (1Ch): Supported 00:28:29.315 Doorbell Buffer Config (7Ch): Supported 00:28:29.315 Format NVM (80h): Supported LBA-Change 00:28:29.315 I/O Commands 00:28:29.315 ------------ 00:28:29.315 Flush (00h): Supported LBA-Change 00:28:29.315 Write (01h): Supported LBA-Change 00:28:29.315 Read (02h): Supported 00:28:29.315 Compare (05h): Supported 00:28:29.315 Write Zeroes (08h): Supported LBA-Change 00:28:29.316 Dataset Management (09h): Supported LBA-Change 00:28:29.316 Unknown (0Ch): Supported 00:28:29.316 Unknown (12h): Supported 00:28:29.316 Copy (19h): Supported LBA-Change 00:28:29.316 Unknown (1Dh): Supported LBA-Change 00:28:29.316 00:28:29.316 Error Log 00:28:29.316 ========= 00:28:29.316 00:28:29.316 Arbitration 00:28:29.316 =========== 00:28:29.316 Arbitration Burst: no limit 00:28:29.316 00:28:29.316 Power Management 00:28:29.316 ================ 00:28:29.316 Number of Power States: 1 00:28:29.316 Current Power State: Power State #0 00:28:29.316 Power State #0: 00:28:29.316 Max Power: 25.00 W 00:28:29.316 Non-Operational State: Operational 00:28:29.316 Entry Latency: 16 microseconds 00:28:29.316 Exit Latency: 4 microseconds 00:28:29.316 Relative Read Throughput: 0 00:28:29.316 Relative Read Latency: 0 00:28:29.316 Relative Write Throughput: 0 00:28:29.316 Relative Write Latency: 0 00:28:29.575 Idle Power: Not Reported 00:28:29.575 Active Power: Not Reported 00:28:29.575 Non-Operational Permissive Mode: Not Supported 00:28:29.575 00:28:29.575 Health Information 00:28:29.575 ================== 00:28:29.575 Critical Warnings: 00:28:29.575 Available Spare Space: OK 00:28:29.575 Temperature: OK 00:28:29.575 Device Reliability: OK 00:28:29.575 Read Only: No 00:28:29.575 Volatile Memory Backup: OK 00:28:29.575 Current Temperature: 323 Kelvin (50 Celsius) 00:28:29.575 Temperature Threshold: 343 Kelvin (70 Celsius) 00:28:29.575 Available Spare: 0% 00:28:29.575 Available Spare Threshold: 0% 00:28:29.575 Life Percentage Used: 0% 00:28:29.575 Data Units Read: 8105 00:28:29.575 Data Units Written: 3941 00:28:29.575 Host Read Commands: 375885 00:28:29.575 Host Write Commands: 202950 00:28:29.575 Controller Busy Time: 0 minutes 00:28:29.575 Power Cycles: 0 00:28:29.575 Power On Hours: 0 hours 00:28:29.575 Unsafe Shutdowns: 0 00:28:29.575 Unrecoverable Media Errors: 0 00:28:29.575 Lifetime Error Log Entries: 0 00:28:29.575 Warning Temperature Time: 0 minutes 00:28:29.575 Critical Temperature Time: 0 minutes 00:28:29.575 00:28:29.575 Number of Queues 00:28:29.575 ================ 00:28:29.575 Number of I/O Submission Queues: 64 00:28:29.575 Number of I/O Completion Queues: 64 00:28:29.575 00:28:29.575 ZNS Specific Controller Data 00:28:29.575 ============================ 
00:28:29.575 Zone Append Size Limit: 0 00:28:29.575 00:28:29.575 00:28:29.575 Active Namespaces 00:28:29.575 ================= 00:28:29.575 Namespace ID:1 00:28:29.575 Error Recovery Timeout: Unlimited 00:28:29.575 Command Set Identifier: NVM (00h) 00:28:29.575 Deallocate: Supported 00:28:29.575 Deallocated/Unwritten Error: Supported 00:28:29.575 Deallocated Read Value: All 0x00 00:28:29.575 Deallocate in Write Zeroes: Not Supported 00:28:29.575 Deallocated Guard Field: 0xFFFF 00:28:29.575 Flush: Supported 00:28:29.575 Reservation: Not Supported 00:28:29.575 Namespace Sharing Capabilities: Private 00:28:29.575 Size (in LBAs): 1310720 (5GiB) 00:28:29.575 Capacity (in LBAs): 1310720 (5GiB) 00:28:29.575 Utilization (in LBAs): 1310720 (5GiB) 00:28:29.575 Thin Provisioning: Not Supported 00:28:29.575 Per-NS Atomic Units: No 00:28:29.575 Maximum Single Source Range Length: 128 00:28:29.575 Maximum Copy Length: 128 00:28:29.575 Maximum Source Range Count: 128 00:28:29.575 NGUID/EUI64 Never Reused: No 00:28:29.575 Namespace Write Protected: No 00:28:29.575 Number of LBA Formats: 8 00:28:29.575 Current LBA Format: LBA Format #04 00:28:29.575 LBA Format #00: Data Size: 512 Metadata Size: 0 00:28:29.575 LBA Format #01: Data Size: 512 Metadata Size: 8 00:28:29.575 LBA Format #02: Data Size: 512 Metadata Size: 16 00:28:29.575 LBA Format #03: Data Size: 512 Metadata Size: 64 00:28:29.575 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:28:29.575 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:28:29.575 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:28:29.575 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:28:29.575 00:28:29.575 00:28:29.575 real 0m0.660s 00:28:29.575 user 0m0.231s 00:28:29.575 sys 0m0.359s 00:28:29.575 21:50:49 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:28:29.575 21:50:49 -- common/autotest_common.sh@10 -- # set +x 00:28:29.575 ************************************ 00:28:29.575 END TEST nvme_identify 00:28:29.575 ************************************ 00:28:29.575 21:50:49 -- nvme/nvme.sh@86 -- # run_test nvme_perf nvme_perf 00:28:29.575 21:50:49 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:28:29.575 21:50:49 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:28:29.575 21:50:49 -- common/autotest_common.sh@10 -- # set +x 00:28:29.575 ************************************ 00:28:29.575 START TEST nvme_perf 00:28:29.575 ************************************ 00:28:29.575 21:50:49 -- common/autotest_common.sh@1114 -- # nvme_perf 00:28:29.575 21:50:49 -- nvme/nvme.sh@22 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w read -o 12288 -t 1 -LL -i 0 -N 00:28:30.953 Initializing NVMe Controllers 00:28:30.953 Attached to NVMe Controller at 0000:00:06.0 [1b36:0010] 00:28:30.953 Associating PCIE (0000:00:06.0) NSID 1 with lcore 0 00:28:30.953 Initialization complete. Launching workers. 
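The results that follow come from the spdk_nvme_perf invocation echoed above: -q 128 keeps 128 commands outstanding, -o 12288 issues 12 KiB I/Os (three 4096-byte blocks of the active LBA format #04), -w read selects a pure read workload, -t 1 runs for one second, and -i 0 attaches to the stub started earlier. The doubled -L flag enables software latency tracking, and giving it twice also produces the per-range histogram printed after the summary table. As a sketch, the same run can target the controller explicitly, using the transport address from the identify run above; the flag comments are my reading of the standard perf options, not quoted from its help text:

    perf=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf
    args=(
        -r 'trtype:PCIe traddr:0000:00:06.0'  # same QEMU NVMe controller as above
        -q 128      # outstanding I/Os
        -o 12288    # I/O size in bytes
        -w read     # workload pattern
        -t 1        # run time in seconds
        -LL         # latency tracking; doubled, it adds the detailed histogram
        -i 0        # shm id 0: attach to the running stub
        -N          # carried over from the read run above
    )
    "$perf" "${args[@]}"

The write-workload run later in this section uses the same shape with -w write and without -N.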
00:28:30.953 ======================================================== 00:28:30.953 Latency(us) 00:28:30.953 Device Information : IOPS MiB/s Average min max 00:28:30.953 PCIE (0000:00:06.0) NSID 1 from core 0: 58367.94 684.00 2190.95 1138.94 6622.50 00:28:30.953 ======================================================== 00:28:30.953 Total : 58367.94 684.00 2190.95 1138.94 6622.50 00:28:30.953 00:28:30.953 Summary latency data for PCIE (0000:00:06.0) NSID 1 from core 0: 00:28:30.953 ================================================================================= 00:28:30.953 1.00000% : 1310.720us 00:28:30.953 10.00000% : 1504.349us 00:28:30.953 25.00000% : 1750.109us 00:28:30.953 50.00000% : 2174.604us 00:28:30.953 75.00000% : 2606.545us 00:28:30.953 90.00000% : 2859.753us 00:28:30.953 95.00000% : 3068.276us 00:28:30.953 98.00000% : 3321.484us 00:28:30.953 99.00000% : 3425.745us 00:28:30.953 99.50000% : 3530.007us 00:28:30.953 99.90000% : 4796.044us 00:28:30.953 99.99000% : 6434.444us 00:28:30.953 99.99900% : 6642.967us 00:28:30.953 99.99990% : 6642.967us 00:28:30.953 99.99999% : 6642.967us 00:28:30.953 00:28:30.953 Latency histogram for PCIE (0000:00:06.0) NSID 1 from core 0: 00:28:30.953 ============================================================================== 00:28:30.953 Range in us Cumulative IO count 00:28:30.953 1131.985 - 1139.433: 0.0017% ( 1) 00:28:30.953 1139.433 - 1146.880: 0.0034% ( 1) 00:28:30.953 1154.327 - 1161.775: 0.0051% ( 1) 00:28:30.953 1169.222 - 1176.669: 0.0069% ( 1) 00:28:30.953 1184.116 - 1191.564: 0.0103% ( 2) 00:28:30.953 1191.564 - 1199.011: 0.0154% ( 3) 00:28:30.953 1199.011 - 1206.458: 0.0206% ( 3) 00:28:30.953 1206.458 - 1213.905: 0.0343% ( 8) 00:28:30.953 1213.905 - 1221.353: 0.0480% ( 8) 00:28:30.953 1221.353 - 1228.800: 0.0702% ( 13) 00:28:30.953 1228.800 - 1236.247: 0.0857% ( 9) 00:28:30.953 1236.247 - 1243.695: 0.1148% ( 17) 00:28:30.953 1243.695 - 1251.142: 0.1645% ( 29) 00:28:30.953 1251.142 - 1258.589: 0.2005% ( 21) 00:28:30.953 1258.589 - 1266.036: 0.2673% ( 39) 00:28:30.953 1266.036 - 1273.484: 0.3409% ( 43) 00:28:30.953 1273.484 - 1280.931: 0.4420% ( 59) 00:28:30.953 1280.931 - 1288.378: 0.5637% ( 71) 00:28:30.953 1288.378 - 1295.825: 0.6956% ( 77) 00:28:30.953 1295.825 - 1303.273: 0.8412% ( 85) 00:28:30.953 1303.273 - 1310.720: 1.0074% ( 97) 00:28:30.953 1310.720 - 1318.167: 1.2044% ( 115) 00:28:30.953 1318.167 - 1325.615: 1.4289% ( 131) 00:28:30.953 1325.615 - 1333.062: 1.6602% ( 135) 00:28:30.953 1333.062 - 1340.509: 1.9309% ( 158) 00:28:30.953 1340.509 - 1347.956: 2.1930% ( 153) 00:28:30.953 1347.956 - 1355.404: 2.4894% ( 173) 00:28:30.953 1355.404 - 1362.851: 2.8063% ( 185) 00:28:30.953 1362.851 - 1370.298: 3.1027% ( 173) 00:28:30.953 1370.298 - 1377.745: 3.4608% ( 209) 00:28:30.953 1377.745 - 1385.193: 3.8086% ( 203) 00:28:30.953 1385.193 - 1392.640: 4.1581% ( 204) 00:28:30.953 1392.640 - 1400.087: 4.5659% ( 238) 00:28:30.953 1400.087 - 1407.535: 4.9445% ( 221) 00:28:30.953 1407.535 - 1414.982: 5.3488% ( 236) 00:28:30.953 1414.982 - 1422.429: 5.7480% ( 233) 00:28:30.953 1422.429 - 1429.876: 6.1404% ( 229) 00:28:30.953 1429.876 - 1437.324: 6.5464% ( 237) 00:28:30.953 1437.324 - 1444.771: 6.9661% ( 245) 00:28:30.953 1444.771 - 1452.218: 7.3962% ( 251) 00:28:30.953 1452.218 - 1459.665: 7.8022% ( 237) 00:28:30.953 1459.665 - 1467.113: 8.2408% ( 256) 00:28:30.953 1467.113 - 1474.560: 8.6691% ( 250) 00:28:30.953 1474.560 - 1482.007: 9.1112% ( 258) 00:28:30.953 1482.007 - 1489.455: 9.5429% ( 252) 00:28:30.953 1489.455 - 1496.902: 9.9935% ( 263) 
00:28:30.953 1496.902 - 1504.349: 10.4287% ( 254) 00:28:30.953 1504.349 - 1511.796: 10.8621% ( 253) 00:28:30.953 1511.796 - 1519.244: 11.3127% ( 263) 00:28:30.953 1519.244 - 1526.691: 11.7479% ( 254) 00:28:30.953 1526.691 - 1534.138: 12.1728% ( 248) 00:28:30.953 1534.138 - 1541.585: 12.6302% ( 267) 00:28:30.953 1541.585 - 1549.033: 13.0962% ( 272) 00:28:30.953 1549.033 - 1556.480: 13.5211% ( 248) 00:28:30.953 1556.480 - 1563.927: 13.9820% ( 269) 00:28:30.953 1563.927 - 1571.375: 14.4274% ( 260) 00:28:30.953 1571.375 - 1578.822: 14.8523% ( 248) 00:28:30.953 1578.822 - 1586.269: 15.3063% ( 265) 00:28:30.953 1586.269 - 1593.716: 15.7603% ( 265) 00:28:30.953 1593.716 - 1601.164: 16.2007% ( 257) 00:28:30.953 1601.164 - 1608.611: 16.6581% ( 267) 00:28:30.953 1608.611 - 1616.058: 17.0779% ( 245) 00:28:30.953 1616.058 - 1623.505: 17.5302% ( 264) 00:28:30.953 1623.505 - 1630.953: 17.9705% ( 257) 00:28:30.953 1630.953 - 1638.400: 18.4330% ( 270) 00:28:30.953 1638.400 - 1645.847: 18.8682% ( 254) 00:28:30.953 1645.847 - 1653.295: 19.3188% ( 263) 00:28:30.953 1653.295 - 1660.742: 19.7625% ( 259) 00:28:30.953 1660.742 - 1668.189: 20.1994% ( 255) 00:28:30.953 1668.189 - 1675.636: 20.6432% ( 259) 00:28:30.953 1675.636 - 1683.084: 21.0955% ( 264) 00:28:30.953 1683.084 - 1690.531: 21.5169% ( 246) 00:28:30.953 1690.531 - 1697.978: 21.9744% ( 267) 00:28:30.953 1697.978 - 1705.425: 22.4147% ( 257) 00:28:30.953 1705.425 - 1712.873: 22.8396% ( 248) 00:28:30.953 1712.873 - 1720.320: 23.2782% ( 256) 00:28:30.953 1720.320 - 1727.767: 23.7270% ( 262) 00:28:30.953 1727.767 - 1735.215: 24.1948% ( 273) 00:28:30.953 1735.215 - 1742.662: 24.6299% ( 254) 00:28:30.953 1742.662 - 1750.109: 25.0497% ( 245) 00:28:30.953 1750.109 - 1757.556: 25.5208% ( 275) 00:28:30.953 1757.556 - 1765.004: 25.9611% ( 257) 00:28:30.953 1765.004 - 1772.451: 26.4169% ( 266) 00:28:30.953 1772.451 - 1779.898: 26.8435% ( 249) 00:28:30.953 1779.898 - 1787.345: 27.2975% ( 265) 00:28:30.953 1787.345 - 1794.793: 27.7584% ( 269) 00:28:30.954 1794.793 - 1802.240: 28.2141% ( 266) 00:28:30.954 1802.240 - 1809.687: 28.6527% ( 256) 00:28:30.954 1809.687 - 1817.135: 29.1170% ( 271) 00:28:30.954 1817.135 - 1824.582: 29.5641% ( 261) 00:28:30.954 1824.582 - 1832.029: 30.0010% ( 255) 00:28:30.954 1832.029 - 1839.476: 30.4396% ( 256) 00:28:30.954 1839.476 - 1846.924: 30.8919% ( 264) 00:28:30.954 1846.924 - 1854.371: 31.3288% ( 255) 00:28:30.954 1854.371 - 1861.818: 31.7794% ( 263) 00:28:30.954 1861.818 - 1869.265: 32.2420% ( 270) 00:28:30.954 1869.265 - 1876.713: 32.6446% ( 235) 00:28:30.954 1876.713 - 1884.160: 33.1106% ( 272) 00:28:30.954 1884.160 - 1891.607: 33.5543% ( 259) 00:28:30.954 1891.607 - 1899.055: 33.9792% ( 248) 00:28:30.954 1899.055 - 1906.502: 34.4504% ( 275) 00:28:30.954 1906.502 - 1921.396: 35.3070% ( 500) 00:28:30.954 1921.396 - 1936.291: 36.1637% ( 500) 00:28:30.954 1936.291 - 1951.185: 37.0220% ( 501) 00:28:30.954 1951.185 - 1966.080: 37.8958% ( 510) 00:28:30.954 1966.080 - 1980.975: 38.7524% ( 500) 00:28:30.954 1980.975 - 1995.869: 39.6090% ( 500) 00:28:30.954 1995.869 - 2010.764: 40.4982% ( 519) 00:28:30.954 2010.764 - 2025.658: 41.3480% ( 496) 00:28:30.954 2025.658 - 2040.553: 42.2475% ( 525) 00:28:30.954 2040.553 - 2055.447: 43.1264% ( 513) 00:28:30.954 2055.447 - 2070.342: 44.0121% ( 517) 00:28:30.954 2070.342 - 2085.236: 44.8756% ( 504) 00:28:30.954 2085.236 - 2100.131: 45.7768% ( 526) 00:28:30.954 2100.131 - 2115.025: 46.6814% ( 528) 00:28:30.954 2115.025 - 2129.920: 47.5740% ( 521) 00:28:30.954 2129.920 - 2144.815: 48.4478% ( 510) 
00:28:30.954 2144.815 - 2159.709: 49.3164% ( 507) 00:28:30.954 2159.709 - 2174.604: 50.2227% ( 529) 00:28:30.954 2174.604 - 2189.498: 51.1136% ( 520) 00:28:30.954 2189.498 - 2204.393: 51.9703% ( 500) 00:28:30.954 2204.393 - 2219.287: 52.8629% ( 521) 00:28:30.954 2219.287 - 2234.182: 53.7623% ( 525) 00:28:30.954 2234.182 - 2249.076: 54.6241% ( 503) 00:28:30.954 2249.076 - 2263.971: 55.5030% ( 513) 00:28:30.954 2263.971 - 2278.865: 56.3905% ( 518) 00:28:30.954 2278.865 - 2293.760: 57.2231% ( 486) 00:28:30.954 2293.760 - 2308.655: 58.0952% ( 509) 00:28:30.954 2308.655 - 2323.549: 58.9792% ( 516) 00:28:30.954 2323.549 - 2338.444: 59.8701% ( 520) 00:28:30.954 2338.444 - 2353.338: 60.7490% ( 513) 00:28:30.954 2353.338 - 2368.233: 61.6194% ( 508) 00:28:30.954 2368.233 - 2383.127: 62.5223% ( 527) 00:28:30.954 2383.127 - 2398.022: 63.4046% ( 515) 00:28:30.954 2398.022 - 2412.916: 64.3109% ( 529) 00:28:30.954 2412.916 - 2427.811: 65.2104% ( 525) 00:28:30.954 2427.811 - 2442.705: 66.0739% ( 504) 00:28:30.954 2442.705 - 2457.600: 66.9562% ( 515) 00:28:30.954 2457.600 - 2472.495: 67.8677% ( 532) 00:28:30.954 2472.495 - 2487.389: 68.7449% ( 512) 00:28:30.954 2487.389 - 2502.284: 69.6478% ( 527) 00:28:30.954 2502.284 - 2517.178: 70.5421% ( 522) 00:28:30.954 2517.178 - 2532.073: 71.4176% ( 511) 00:28:30.954 2532.073 - 2546.967: 72.3085% ( 520) 00:28:30.954 2546.967 - 2561.862: 73.2062% ( 524) 00:28:30.954 2561.862 - 2576.756: 74.0920% ( 517) 00:28:30.954 2576.756 - 2591.651: 74.9931% ( 526) 00:28:30.954 2591.651 - 2606.545: 75.8686% ( 511) 00:28:30.954 2606.545 - 2621.440: 76.7681% ( 525) 00:28:30.954 2621.440 - 2636.335: 77.6641% ( 523) 00:28:30.954 2636.335 - 2651.229: 78.5448% ( 514) 00:28:30.954 2651.229 - 2666.124: 79.4065% ( 503) 00:28:30.954 2666.124 - 2681.018: 80.3283% ( 538) 00:28:30.954 2681.018 - 2695.913: 81.2226% ( 522) 00:28:30.954 2695.913 - 2710.807: 82.1066% ( 516) 00:28:30.954 2710.807 - 2725.702: 82.9890% ( 515) 00:28:30.954 2725.702 - 2740.596: 83.8730% ( 516) 00:28:30.954 2740.596 - 2755.491: 84.7656% ( 521) 00:28:30.954 2755.491 - 2770.385: 85.6188% ( 498) 00:28:30.954 2770.385 - 2785.280: 86.4549% ( 488) 00:28:30.954 2785.280 - 2800.175: 87.2824% ( 483) 00:28:30.954 2800.175 - 2815.069: 88.0705% ( 460) 00:28:30.954 2815.069 - 2829.964: 88.8175% ( 436) 00:28:30.954 2829.964 - 2844.858: 89.5097% ( 404) 00:28:30.954 2844.858 - 2859.753: 90.1487% ( 373) 00:28:30.954 2859.753 - 2874.647: 90.7569% ( 355) 00:28:30.954 2874.647 - 2889.542: 91.3052% ( 320) 00:28:30.954 2889.542 - 2904.436: 91.7986% ( 288) 00:28:30.954 2904.436 - 2919.331: 92.2338% ( 254) 00:28:30.954 2919.331 - 2934.225: 92.6484% ( 242) 00:28:30.954 2934.225 - 2949.120: 93.0201% ( 217) 00:28:30.954 2949.120 - 2964.015: 93.3628% ( 200) 00:28:30.954 2964.015 - 2978.909: 93.6643% ( 176) 00:28:30.954 2978.909 - 2993.804: 93.9265% ( 153) 00:28:30.954 2993.804 - 3008.698: 94.1749% ( 145) 00:28:30.954 3008.698 - 3023.593: 94.3890% ( 125) 00:28:30.954 3023.593 - 3038.487: 94.6066% ( 127) 00:28:30.954 3038.487 - 3053.382: 94.8191% ( 124) 00:28:30.954 3053.382 - 3068.276: 95.0195% ( 117) 00:28:30.954 3068.276 - 3083.171: 95.2183% ( 116) 00:28:30.954 3083.171 - 3098.065: 95.4102% ( 112) 00:28:30.954 3098.065 - 3112.960: 95.6072% ( 115) 00:28:30.954 3112.960 - 3127.855: 95.8025% ( 114) 00:28:30.954 3127.855 - 3142.749: 95.9858% ( 107) 00:28:30.954 3142.749 - 3157.644: 96.1606% ( 102) 00:28:30.954 3157.644 - 3172.538: 96.3370% ( 103) 00:28:30.954 3172.538 - 3187.433: 96.5186% ( 106) 00:28:30.954 3187.433 - 3202.327: 96.7020% ( 107) 
00:28:30.954 3202.327 - 3217.222: 96.8853% ( 107) 00:28:30.954 3217.222 - 3232.116: 97.0583% ( 101) 00:28:30.954 3232.116 - 3247.011: 97.2314% ( 101) 00:28:30.954 3247.011 - 3261.905: 97.4010% ( 99) 00:28:30.954 3261.905 - 3276.800: 97.5774% ( 103) 00:28:30.954 3276.800 - 3291.695: 97.7522% ( 102) 00:28:30.954 3291.695 - 3306.589: 97.9201% ( 98) 00:28:30.954 3306.589 - 3321.484: 98.0983% ( 104) 00:28:30.954 3321.484 - 3336.378: 98.2593% ( 94) 00:28:30.954 3336.378 - 3351.273: 98.4135% ( 90) 00:28:30.954 3351.273 - 3366.167: 98.5711% ( 92) 00:28:30.954 3366.167 - 3381.062: 98.7133% ( 83) 00:28:30.954 3381.062 - 3395.956: 98.8487% ( 79) 00:28:30.954 3395.956 - 3410.851: 98.9566% ( 63) 00:28:30.954 3410.851 - 3425.745: 99.0628% ( 62) 00:28:30.954 3425.745 - 3440.640: 99.1588% ( 56) 00:28:30.954 3440.640 - 3455.535: 99.2496% ( 53) 00:28:30.954 3455.535 - 3470.429: 99.3267% ( 45) 00:28:30.954 3470.429 - 3485.324: 99.3952% ( 40) 00:28:30.954 3485.324 - 3500.218: 99.4518% ( 33) 00:28:30.954 3500.218 - 3515.113: 99.4997% ( 28) 00:28:30.954 3515.113 - 3530.007: 99.5443% ( 26) 00:28:30.954 3530.007 - 3544.902: 99.5837% ( 23) 00:28:30.954 3544.902 - 3559.796: 99.6128% ( 17) 00:28:30.954 3559.796 - 3574.691: 99.6436% ( 18) 00:28:30.954 3574.691 - 3589.585: 99.6642% ( 12) 00:28:30.954 3589.585 - 3604.480: 99.6830% ( 11) 00:28:30.954 3604.480 - 3619.375: 99.6985% ( 9) 00:28:30.954 3619.375 - 3634.269: 99.7139% ( 9) 00:28:30.954 3634.269 - 3649.164: 99.7242% ( 6) 00:28:30.954 3649.164 - 3664.058: 99.7344% ( 6) 00:28:30.954 3664.058 - 3678.953: 99.7447% ( 6) 00:28:30.954 3678.953 - 3693.847: 99.7499% ( 3) 00:28:30.954 3693.847 - 3708.742: 99.7550% ( 3) 00:28:30.954 3708.742 - 3723.636: 99.7601% ( 3) 00:28:30.954 3723.636 - 3738.531: 99.7636% ( 2) 00:28:30.954 3738.531 - 3753.425: 99.7687% ( 3) 00:28:30.954 3753.425 - 3768.320: 99.7721% ( 2) 00:28:30.954 3768.320 - 3783.215: 99.7756% ( 2) 00:28:30.954 3783.215 - 3798.109: 99.7790% ( 2) 00:28:30.954 3798.109 - 3813.004: 99.7841% ( 3) 00:28:30.954 3813.004 - 3842.793: 99.7927% ( 5) 00:28:30.954 3842.793 - 3872.582: 99.7995% ( 4) 00:28:30.954 3872.582 - 3902.371: 99.8064% ( 4) 00:28:30.954 3902.371 - 3932.160: 99.8115% ( 3) 00:28:30.954 3932.160 - 3961.949: 99.8150% ( 2) 00:28:30.954 3961.949 - 3991.738: 99.8201% ( 3) 00:28:30.954 3991.738 - 4021.527: 99.8252% ( 3) 00:28:30.954 4021.527 - 4051.316: 99.8304% ( 3) 00:28:30.954 4051.316 - 4081.105: 99.8355% ( 3) 00:28:30.954 4081.105 - 4110.895: 99.8390% ( 2) 00:28:30.954 4110.895 - 4140.684: 99.8441% ( 3) 00:28:30.954 4140.684 - 4170.473: 99.8492% ( 3) 00:28:30.954 4170.473 - 4200.262: 99.8544% ( 3) 00:28:30.954 4200.262 - 4230.051: 99.8595% ( 3) 00:28:30.954 4230.051 - 4259.840: 99.8629% ( 2) 00:28:30.954 4259.840 - 4289.629: 99.8681% ( 3) 00:28:30.954 4289.629 - 4319.418: 99.8715% ( 2) 00:28:30.954 4319.418 - 4349.207: 99.8766% ( 3) 00:28:30.954 4349.207 - 4378.996: 99.8784% ( 1) 00:28:30.954 4378.996 - 4408.785: 99.8801% ( 1) 00:28:30.954 4408.785 - 4438.575: 99.8818% ( 1) 00:28:30.954 4438.575 - 4468.364: 99.8835% ( 1) 00:28:30.954 4468.364 - 4498.153: 99.8852% ( 1) 00:28:30.954 4498.153 - 4527.942: 99.8869% ( 1) 00:28:30.954 4527.942 - 4557.731: 99.8886% ( 1) 00:28:30.954 4557.731 - 4587.520: 99.8904% ( 1) 00:28:30.954 4587.520 - 4617.309: 99.8921% ( 1) 00:28:30.954 4647.098 - 4676.887: 99.8938% ( 1) 00:28:30.954 4676.887 - 4706.676: 99.8955% ( 1) 00:28:30.954 4706.676 - 4736.465: 99.8972% ( 1) 00:28:30.954 4736.465 - 4766.255: 99.8989% ( 1) 00:28:30.954 4766.255 - 4796.044: 99.9006% ( 1) 00:28:30.954 
4796.044 - 4825.833: 99.9023% ( 1) 00:28:30.954 4825.833 - 4855.622: 99.9041% ( 1) 00:28:30.954 4855.622 - 4885.411: 99.9058% ( 1) 00:28:30.954 4885.411 - 4915.200: 99.9075% ( 1) 00:28:30.954 4915.200 - 4944.989: 99.9092% ( 1) 00:28:30.954 4944.989 - 4974.778: 99.9109% ( 1) 00:28:30.954 4974.778 - 5004.567: 99.9126% ( 1) 00:28:30.954 5004.567 - 5034.356: 99.9143% ( 1) 00:28:30.954 5034.356 - 5064.145: 99.9160% ( 1) 00:28:30.954 5064.145 - 5093.935: 99.9178% ( 1) 00:28:30.954 5093.935 - 5123.724: 99.9195% ( 1) 00:28:30.954 5123.724 - 5153.513: 99.9212% ( 1) 00:28:30.954 5153.513 - 5183.302: 99.9229% ( 1) 00:28:30.954 5183.302 - 5213.091: 99.9246% ( 1) 00:28:30.954 5213.091 - 5242.880: 99.9263% ( 1) 00:28:30.954 5242.880 - 5272.669: 99.9280% ( 1) 00:28:30.954 5272.669 - 5302.458: 99.9298% ( 1) 00:28:30.954 5302.458 - 5332.247: 99.9315% ( 1) 00:28:30.954 5332.247 - 5362.036: 99.9332% ( 1) 00:28:30.954 5362.036 - 5391.825: 99.9349% ( 1) 00:28:30.954 5391.825 - 5421.615: 99.9366% ( 1) 00:28:30.954 5421.615 - 5451.404: 99.9383% ( 1) 00:28:30.954 5451.404 - 5481.193: 99.9400% ( 1) 00:28:30.954 5481.193 - 5510.982: 99.9417% ( 1) 00:28:30.954 5510.982 - 5540.771: 99.9435% ( 1) 00:28:30.954 5540.771 - 5570.560: 99.9452% ( 1) 00:28:30.954 5570.560 - 5600.349: 99.9469% ( 1) 00:28:30.954 5600.349 - 5630.138: 99.9486% ( 1) 00:28:30.954 5630.138 - 5659.927: 99.9503% ( 1) 00:28:30.954 5659.927 - 5689.716: 99.9520% ( 1) 00:28:30.954 5689.716 - 5719.505: 99.9537% ( 1) 00:28:30.954 5719.505 - 5749.295: 99.9555% ( 1) 00:28:30.954 5749.295 - 5779.084: 99.9572% ( 1) 00:28:30.954 5808.873 - 5838.662: 99.9589% ( 1) 00:28:30.954 5838.662 - 5868.451: 99.9606% ( 1) 00:28:30.954 5868.451 - 5898.240: 99.9623% ( 1) 00:28:30.954 5898.240 - 5928.029: 99.9640% ( 1) 00:28:30.954 5928.029 - 5957.818: 99.9657% ( 1) 00:28:30.954 5957.818 - 5987.607: 99.9674% ( 1) 00:28:30.954 6017.396 - 6047.185: 99.9692% ( 1) 00:28:30.954 6047.185 - 6076.975: 99.9709% ( 1) 00:28:30.954 6076.975 - 6106.764: 99.9726% ( 1) 00:28:30.954 6106.764 - 6136.553: 99.9743% ( 1) 00:28:30.954 6136.553 - 6166.342: 99.9760% ( 1) 00:28:30.954 6166.342 - 6196.131: 99.9777% ( 1) 00:28:30.954 6196.131 - 6225.920: 99.9794% ( 1) 00:28:30.954 6225.920 - 6255.709: 99.9812% ( 1) 00:28:30.954 6255.709 - 6285.498: 99.9829% ( 1) 00:28:30.954 6285.498 - 6315.287: 99.9846% ( 1) 00:28:30.954 6345.076 - 6374.865: 99.9880% ( 2) 00:28:30.954 6374.865 - 6404.655: 99.9897% ( 1) 00:28:30.954 6404.655 - 6434.444: 99.9914% ( 1) 00:28:30.954 6434.444 - 6464.233: 99.9931% ( 1) 00:28:30.954 6464.233 - 6494.022: 99.9949% ( 1) 00:28:30.954 6494.022 - 6523.811: 99.9966% ( 1) 00:28:30.954 6523.811 - 6553.600: 99.9983% ( 1) 00:28:30.954 6613.178 - 6642.967: 100.0000% ( 1) 00:28:30.954 00:28:30.954 21:50:51 -- nvme/nvme.sh@23 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w write -o 12288 -t 1 -LL -i 0 00:28:32.332 Initializing NVMe Controllers 00:28:32.332 Attached to NVMe Controller at 0000:00:06.0 [1b36:0010] 00:28:32.332 Associating PCIE (0000:00:06.0) NSID 1 with lcore 0 00:28:32.332 Initialization complete. Launching workers. 
00:28:32.332 ======================================================== 00:28:32.332 Latency(us) 00:28:32.332 Device Information : IOPS MiB/s Average min max 00:28:32.332 PCIE (0000:00:06.0) NSID 1 from core 0: 48882.95 572.85 2619.93 1449.11 5525.24 00:28:32.332 ======================================================== 00:28:32.332 Total : 48882.95 572.85 2619.93 1449.11 5525.24 00:28:32.332 00:28:32.332 Summary latency data for PCIE (0000:00:06.0) NSID 1 from core 0: 00:28:32.332 ================================================================================= 00:28:32.332 1.00000% : 1765.004us 00:28:32.332 10.00000% : 1966.080us 00:28:32.332 25.00000% : 2189.498us 00:28:32.332 50.00000% : 2621.440us 00:28:32.332 75.00000% : 3038.487us 00:28:32.332 90.00000% : 3306.589us 00:28:32.332 95.00000% : 3440.640us 00:28:32.332 98.00000% : 3589.585us 00:28:32.332 99.00000% : 3693.847us 00:28:32.332 99.50000% : 3798.109us 00:28:32.332 99.90000% : 4706.676us 00:28:32.332 99.99000% : 5481.193us 00:28:32.332 99.99900% : 5540.771us 00:28:32.332 99.99990% : 5540.771us 00:28:32.332 99.99999% : 5540.771us 00:28:32.332 00:28:32.332 Latency histogram for PCIE (0000:00:06.0) NSID 1 from core 0: 00:28:32.332 ============================================================================== 00:28:32.332 Range in us Cumulative IO count 00:28:32.332 1444.771 - 1452.218: 0.0020% ( 1) 00:28:32.332 1474.560 - 1482.007: 0.0041% ( 1) 00:28:32.332 1482.007 - 1489.455: 0.0061% ( 1) 00:28:32.332 1496.902 - 1504.349: 0.0082% ( 1) 00:28:32.332 1504.349 - 1511.796: 0.0164% ( 4) 00:28:32.332 1511.796 - 1519.244: 0.0205% ( 2) 00:28:32.332 1519.244 - 1526.691: 0.0245% ( 2) 00:28:32.332 1526.691 - 1534.138: 0.0286% ( 2) 00:28:32.332 1534.138 - 1541.585: 0.0307% ( 1) 00:28:32.332 1541.585 - 1549.033: 0.0368% ( 3) 00:28:32.332 1556.480 - 1563.927: 0.0389% ( 1) 00:28:32.332 1563.927 - 1571.375: 0.0409% ( 1) 00:28:32.332 1571.375 - 1578.822: 0.0471% ( 3) 00:28:32.332 1578.822 - 1586.269: 0.0511% ( 2) 00:28:32.332 1593.716 - 1601.164: 0.0573% ( 3) 00:28:32.332 1601.164 - 1608.611: 0.0655% ( 4) 00:28:32.332 1608.611 - 1616.058: 0.0716% ( 3) 00:28:32.332 1616.058 - 1623.505: 0.0839% ( 6) 00:28:32.332 1623.505 - 1630.953: 0.1043% ( 10) 00:28:32.332 1630.953 - 1638.400: 0.1187% ( 7) 00:28:32.332 1638.400 - 1645.847: 0.1309% ( 6) 00:28:32.332 1645.847 - 1653.295: 0.1412% ( 5) 00:28:32.332 1653.295 - 1660.742: 0.1677% ( 13) 00:28:32.332 1660.742 - 1668.189: 0.1943% ( 13) 00:28:32.332 1668.189 - 1675.636: 0.2189% ( 12) 00:28:32.332 1675.636 - 1683.084: 0.2680% ( 24) 00:28:32.332 1683.084 - 1690.531: 0.2884% ( 10) 00:28:32.332 1690.531 - 1697.978: 0.3375% ( 24) 00:28:32.332 1697.978 - 1705.425: 0.3846% ( 23) 00:28:32.332 1705.425 - 1712.873: 0.4378% ( 26) 00:28:32.332 1712.873 - 1720.320: 0.5073% ( 34) 00:28:32.332 1720.320 - 1727.767: 0.5810% ( 36) 00:28:32.332 1727.767 - 1735.215: 0.6567% ( 37) 00:28:32.332 1735.215 - 1742.662: 0.7446% ( 43) 00:28:32.332 1742.662 - 1750.109: 0.8531% ( 53) 00:28:32.332 1750.109 - 1757.556: 0.9513% ( 48) 00:28:32.332 1757.556 - 1765.004: 1.1190% ( 82) 00:28:32.332 1765.004 - 1772.451: 1.2867% ( 82) 00:28:32.332 1772.451 - 1779.898: 1.4402% ( 75) 00:28:32.332 1779.898 - 1787.345: 1.6816% ( 118) 00:28:32.332 1787.345 - 1794.793: 1.9045% ( 109) 00:28:32.332 1794.793 - 1802.240: 2.1357% ( 113) 00:28:32.332 1802.240 - 1809.687: 2.3730% ( 116) 00:28:32.332 1809.687 - 1817.135: 2.6205% ( 121) 00:28:32.332 1817.135 - 1824.582: 2.8906% ( 132) 00:28:32.332 1824.582 - 1832.029: 3.1872% ( 145) 00:28:32.332 1832.029 - 
1839.476: 3.5145% ( 160) 00:28:32.332 1839.476 - 1846.924: 3.8623% ( 170) 00:28:32.332 1846.924 - 1854.371: 4.2223% ( 176) 00:28:32.332 1854.371 - 1861.818: 4.5537% ( 162) 00:28:32.332 1861.818 - 1869.265: 4.9711% ( 204) 00:28:32.332 1869.265 - 1876.713: 5.3250% ( 173) 00:28:32.332 1876.713 - 1884.160: 5.7771% ( 221) 00:28:32.332 1884.160 - 1891.607: 6.1842% ( 199) 00:28:32.332 1891.607 - 1899.055: 6.6178% ( 212) 00:28:32.332 1899.055 - 1906.502: 7.0781% ( 225) 00:28:32.332 1906.502 - 1921.396: 7.9864% ( 444) 00:28:32.332 1921.396 - 1936.291: 8.9131% ( 453) 00:28:32.332 1936.291 - 1951.185: 9.8030% ( 435) 00:28:32.332 1951.185 - 1966.080: 10.7706% ( 473) 00:28:32.332 1966.080 - 1980.975: 11.7546% ( 481) 00:28:32.332 1980.975 - 1995.869: 12.7509% ( 487) 00:28:32.332 1995.869 - 2010.764: 13.7287% ( 478) 00:28:32.332 2010.764 - 2025.658: 14.6902% ( 470) 00:28:32.332 2025.658 - 2040.553: 15.6394% ( 464) 00:28:32.332 2040.553 - 2055.447: 16.5927% ( 466) 00:28:32.332 2055.447 - 2070.342: 17.5848% ( 485) 00:28:32.332 2070.342 - 2085.236: 18.5791% ( 486) 00:28:32.332 2085.236 - 2100.131: 19.5937% ( 496) 00:28:32.332 2100.131 - 2115.025: 20.5307% ( 458) 00:28:32.332 2115.025 - 2129.920: 21.4942% ( 471) 00:28:32.332 2129.920 - 2144.815: 22.4332% ( 459) 00:28:32.332 2144.815 - 2159.709: 23.3844% ( 465) 00:28:32.332 2159.709 - 2174.604: 24.3357% ( 465) 00:28:32.332 2174.604 - 2189.498: 25.2992% ( 471) 00:28:32.332 2189.498 - 2204.393: 26.2566% ( 468) 00:28:32.332 2204.393 - 2219.287: 27.2242% ( 473) 00:28:32.332 2219.287 - 2234.182: 28.1652% ( 460) 00:28:32.332 2234.182 - 2249.076: 29.1185% ( 466) 00:28:32.332 2249.076 - 2263.971: 30.0759% ( 468) 00:28:32.332 2263.971 - 2278.865: 31.0231% ( 463) 00:28:32.332 2278.865 - 2293.760: 31.9273% ( 442) 00:28:32.332 2293.760 - 2308.655: 32.8499% ( 451) 00:28:32.332 2308.655 - 2323.549: 33.6722% ( 402) 00:28:32.332 2323.549 - 2338.444: 34.5273% ( 418) 00:28:32.332 2338.444 - 2353.338: 35.3661% ( 410) 00:28:32.333 2353.338 - 2368.233: 36.2600% ( 437) 00:28:32.333 2368.233 - 2383.127: 37.0742% ( 398) 00:28:32.333 2383.127 - 2398.022: 37.9334% ( 420) 00:28:32.333 2398.022 - 2412.916: 38.8213% ( 434) 00:28:32.333 2412.916 - 2427.811: 39.6702% ( 415) 00:28:32.333 2427.811 - 2442.705: 40.5151% ( 413) 00:28:32.333 2442.705 - 2457.600: 41.3723% ( 419) 00:28:32.333 2457.600 - 2472.495: 42.2621% ( 435) 00:28:32.333 2472.495 - 2487.389: 43.0866% ( 403) 00:28:32.333 2487.389 - 2502.284: 43.9253% ( 410) 00:28:32.333 2502.284 - 2517.178: 44.7886% ( 422) 00:28:32.333 2517.178 - 2532.073: 45.6191% ( 406) 00:28:32.333 2532.073 - 2546.967: 46.4824% ( 422) 00:28:32.333 2546.967 - 2561.862: 47.3784% ( 438) 00:28:32.333 2561.862 - 2576.756: 48.2192% ( 411) 00:28:32.333 2576.756 - 2591.651: 49.0968% ( 429) 00:28:32.333 2591.651 - 2606.545: 49.9376% ( 411) 00:28:32.333 2606.545 - 2621.440: 50.7845% ( 414) 00:28:32.333 2621.440 - 2636.335: 51.6601% ( 428) 00:28:32.333 2636.335 - 2651.229: 52.5295% ( 425) 00:28:32.333 2651.229 - 2666.124: 53.4317% ( 441) 00:28:32.333 2666.124 - 2681.018: 54.3031% ( 426) 00:28:32.333 2681.018 - 2695.913: 55.1930% ( 435) 00:28:32.333 2695.913 - 2710.807: 56.0665% ( 427) 00:28:32.333 2710.807 - 2725.702: 56.9769% ( 445) 00:28:32.333 2725.702 - 2740.596: 57.8463% ( 425) 00:28:32.333 2740.596 - 2755.491: 58.7464% ( 440) 00:28:32.333 2755.491 - 2770.385: 59.6342% ( 434) 00:28:32.333 2770.385 - 2785.280: 60.5343% ( 440) 00:28:32.333 2785.280 - 2800.175: 61.4201% ( 433) 00:28:32.333 2800.175 - 2815.069: 62.3121% ( 436) 00:28:32.333 2815.069 - 2829.964: 
63.2244% ( 446) 00:28:32.333 2829.964 - 2844.858: 64.1123% ( 434) 00:28:32.333 2844.858 - 2859.753: 65.0124% ( 440) 00:28:32.333 2859.753 - 2874.647: 65.9145% ( 441) 00:28:32.333 2874.647 - 2889.542: 66.8515% ( 458) 00:28:32.333 2889.542 - 2904.436: 67.7352% ( 432) 00:28:32.333 2904.436 - 2919.331: 68.6558% ( 450) 00:28:32.333 2919.331 - 2934.225: 69.5702% ( 447) 00:28:32.333 2934.225 - 2949.120: 70.4662% ( 438) 00:28:32.333 2949.120 - 2964.015: 71.3397% ( 427) 00:28:32.333 2964.015 - 2978.909: 72.2378% ( 439) 00:28:32.333 2978.909 - 2993.804: 73.1563% ( 449) 00:28:32.333 2993.804 - 3008.698: 74.0523% ( 438) 00:28:32.333 3008.698 - 3023.593: 74.9299% ( 429) 00:28:32.333 3023.593 - 3038.487: 75.8219% ( 436) 00:28:32.333 3038.487 - 3053.382: 76.7117% ( 435) 00:28:32.333 3053.382 - 3068.276: 77.6200% ( 444) 00:28:32.333 3068.276 - 3083.171: 78.5058% ( 433) 00:28:32.333 3083.171 - 3098.065: 79.4039% ( 439) 00:28:32.333 3098.065 - 3112.960: 80.2979% ( 437) 00:28:32.333 3112.960 - 3127.855: 81.1877% ( 435) 00:28:32.333 3127.855 - 3142.749: 82.0408% ( 417) 00:28:32.333 3142.749 - 3157.644: 82.9327% ( 436) 00:28:32.333 3157.644 - 3172.538: 83.8206% ( 434) 00:28:32.333 3172.538 - 3187.433: 84.6879% ( 424) 00:28:32.333 3187.433 - 3202.327: 85.5287% ( 411) 00:28:32.333 3202.327 - 3217.222: 86.2958% ( 375) 00:28:32.333 3217.222 - 3232.116: 87.1059% ( 396) 00:28:32.333 3232.116 - 3247.011: 87.8485% ( 363) 00:28:32.333 3247.011 - 3261.905: 88.5604% ( 348) 00:28:32.333 3261.905 - 3276.800: 89.2908% ( 357) 00:28:32.333 3276.800 - 3291.695: 89.9658% ( 330) 00:28:32.333 3291.695 - 3306.589: 90.6143% ( 317) 00:28:32.333 3306.589 - 3321.484: 91.2567% ( 314) 00:28:32.333 3321.484 - 3336.378: 91.8561% ( 293) 00:28:32.333 3336.378 - 3351.273: 92.4330% ( 282) 00:28:32.333 3351.273 - 3366.167: 93.0057% ( 280) 00:28:32.333 3366.167 - 3381.062: 93.5233% ( 253) 00:28:32.333 3381.062 - 3395.956: 94.0163% ( 241) 00:28:32.333 3395.956 - 3410.851: 94.4991% ( 236) 00:28:32.333 3410.851 - 3425.745: 94.9471% ( 219) 00:28:32.333 3425.745 - 3440.640: 95.3583% ( 201) 00:28:32.333 3440.640 - 3455.535: 95.7286% ( 181) 00:28:32.333 3455.535 - 3470.429: 96.0968% ( 180) 00:28:32.333 3470.429 - 3485.324: 96.4159% ( 156) 00:28:32.333 3485.324 - 3500.218: 96.7064% ( 142) 00:28:32.333 3500.218 - 3515.113: 96.9867% ( 137) 00:28:32.333 3515.113 - 3530.007: 97.2567% ( 132) 00:28:32.333 3530.007 - 3544.902: 97.4940% ( 116) 00:28:32.333 3544.902 - 3559.796: 97.7088% ( 105) 00:28:32.333 3559.796 - 3574.691: 97.9113% ( 99) 00:28:32.333 3574.691 - 3589.585: 98.0955% ( 90) 00:28:32.333 3589.585 - 3604.480: 98.2734% ( 87) 00:28:32.333 3604.480 - 3619.375: 98.4228% ( 73) 00:28:32.333 3619.375 - 3634.269: 98.5741% ( 74) 00:28:32.333 3634.269 - 3649.164: 98.6989% ( 61) 00:28:32.333 3649.164 - 3664.058: 98.8155% ( 57) 00:28:32.333 3664.058 - 3678.953: 98.9137% ( 48) 00:28:32.333 3678.953 - 3693.847: 99.0140% ( 49) 00:28:32.333 3693.847 - 3708.742: 99.0999% ( 42) 00:28:32.333 3708.742 - 3723.636: 99.1817% ( 40) 00:28:32.333 3723.636 - 3738.531: 99.2595% ( 38) 00:28:32.333 3738.531 - 3753.425: 99.3454% ( 42) 00:28:32.333 3753.425 - 3768.320: 99.4088% ( 31) 00:28:32.333 3768.320 - 3783.215: 99.4620% ( 26) 00:28:32.333 3783.215 - 3798.109: 99.5070% ( 22) 00:28:32.333 3798.109 - 3813.004: 99.5479% ( 20) 00:28:32.333 3813.004 - 3842.793: 99.6011% ( 26) 00:28:32.333 3842.793 - 3872.582: 99.6440% ( 21) 00:28:32.333 3872.582 - 3902.371: 99.6747% ( 15) 00:28:32.333 3902.371 - 3932.160: 99.7075% ( 16) 00:28:32.333 3932.160 - 3961.949: 99.7218% ( 7) 
00:28:32.333 3961.949 - 3991.738: 99.7382% ( 8) 00:28:32.333 3991.738 - 4021.527: 99.7504% ( 6) 00:28:32.333 4021.527 - 4051.316: 99.7668% ( 8) 00:28:32.333 4051.316 - 4081.105: 99.7791% ( 6) 00:28:32.333 4081.105 - 4110.895: 99.7995% ( 10) 00:28:32.333 4110.895 - 4140.684: 99.8118% ( 6) 00:28:32.333 4140.684 - 4170.473: 99.8241% ( 6) 00:28:32.333 4170.473 - 4200.262: 99.8302% ( 3) 00:28:32.333 4200.262 - 4230.051: 99.8363% ( 3) 00:28:32.333 4230.051 - 4259.840: 99.8445% ( 4) 00:28:32.333 4259.840 - 4289.629: 99.8507% ( 3) 00:28:32.333 4289.629 - 4319.418: 99.8588% ( 4) 00:28:32.333 4319.418 - 4349.207: 99.8650% ( 3) 00:28:32.333 4349.207 - 4378.996: 99.8691% ( 2) 00:28:32.333 4378.996 - 4408.785: 99.8732% ( 2) 00:28:32.333 4408.785 - 4438.575: 99.8752% ( 1) 00:28:32.333 4438.575 - 4468.364: 99.8773% ( 1) 00:28:32.333 4468.364 - 4498.153: 99.8813% ( 2) 00:28:32.333 4498.153 - 4527.942: 99.8854% ( 2) 00:28:32.333 4527.942 - 4557.731: 99.8875% ( 1) 00:28:32.333 4557.731 - 4587.520: 99.8916% ( 2) 00:28:32.333 4587.520 - 4617.309: 99.8936% ( 1) 00:28:32.334 4617.309 - 4647.098: 99.8957% ( 1) 00:28:32.334 4647.098 - 4676.887: 99.8998% ( 2) 00:28:32.334 4676.887 - 4706.676: 99.9018% ( 1) 00:28:32.334 4706.676 - 4736.465: 99.9039% ( 1) 00:28:32.334 4736.465 - 4766.255: 99.9059% ( 1) 00:28:32.334 4766.255 - 4796.044: 99.9079% ( 1) 00:28:32.334 4796.044 - 4825.833: 99.9120% ( 2) 00:28:32.334 4825.833 - 4855.622: 99.9141% ( 1) 00:28:32.334 4855.622 - 4885.411: 99.9182% ( 2) 00:28:32.334 4885.411 - 4915.200: 99.9202% ( 1) 00:28:32.334 4915.200 - 4944.989: 99.9223% ( 1) 00:28:32.334 4944.989 - 4974.778: 99.9264% ( 2) 00:28:32.334 4974.778 - 5004.567: 99.9284% ( 1) 00:28:32.334 5004.567 - 5034.356: 99.9325% ( 2) 00:28:32.334 5034.356 - 5064.145: 99.9345% ( 1) 00:28:32.334 5064.145 - 5093.935: 99.9366% ( 1) 00:28:32.334 5093.935 - 5123.724: 99.9407% ( 2) 00:28:32.334 5123.724 - 5153.513: 99.9427% ( 1) 00:28:32.334 5153.513 - 5183.302: 99.9448% ( 1) 00:28:32.334 5183.302 - 5213.091: 99.9509% ( 3) 00:28:32.334 5213.091 - 5242.880: 99.9550% ( 2) 00:28:32.334 5242.880 - 5272.669: 99.9591% ( 2) 00:28:32.334 5272.669 - 5302.458: 99.9632% ( 2) 00:28:32.334 5302.458 - 5332.247: 99.9652% ( 1) 00:28:32.334 5332.247 - 5362.036: 99.9673% ( 1) 00:28:32.334 5362.036 - 5391.825: 99.9734% ( 3) 00:28:32.334 5391.825 - 5421.615: 99.9836% ( 5) 00:28:32.334 5421.615 - 5451.404: 99.9898% ( 3) 00:28:32.334 5451.404 - 5481.193: 99.9959% ( 3) 00:28:32.334 5481.193 - 5510.982: 99.9980% ( 1) 00:28:32.334 5510.982 - 5540.771: 100.0000% ( 1) 00:28:32.334 00:28:32.334 21:50:52 -- nvme/nvme.sh@24 -- # '[' -b /dev/ram0 ']' 00:28:32.334 00:28:32.334 real 0m2.652s 00:28:32.334 user 0m2.253s 00:28:32.334 sys 0m0.311s 00:28:32.334 21:50:52 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:28:32.334 21:50:52 -- common/autotest_common.sh@10 -- # set +x 00:28:32.334 ************************************ 00:28:32.334 END TEST nvme_perf 00:28:32.334 ************************************ 00:28:32.334 21:50:52 -- nvme/nvme.sh@87 -- # run_test nvme_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0 00:28:32.334 21:50:52 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:28:32.334 21:50:52 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:28:32.334 21:50:52 -- common/autotest_common.sh@10 -- # set +x 00:28:32.334 ************************************ 00:28:32.334 START TEST nvme_hello_world 00:28:32.334 ************************************ 00:28:32.334 21:50:52 -- common/autotest_common.sh@1114 -- # 
/home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0 00:28:32.593 Initializing NVMe Controllers 00:28:32.593 Attached to 0000:00:06.0 00:28:32.593 Namespace ID: 1 size: 5GB 00:28:32.593 Initialization complete. 00:28:32.593 INFO: using host memory buffer for IO 00:28:32.593 Hello world! 00:28:32.593 00:28:32.593 real 0m0.318s 00:28:32.593 user 0m0.109s 00:28:32.593 sys 0m0.164s 00:28:32.593 21:50:52 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:28:32.593 21:50:52 -- common/autotest_common.sh@10 -- # set +x 00:28:32.593 ************************************ 00:28:32.593 END TEST nvme_hello_world 00:28:32.593 ************************************ 00:28:32.593 21:50:52 -- nvme/nvme.sh@88 -- # run_test nvme_sgl /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl 00:28:32.593 21:50:52 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:28:32.593 21:50:52 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:28:32.593 21:50:52 -- common/autotest_common.sh@10 -- # set +x 00:28:32.593 ************************************ 00:28:32.593 START TEST nvme_sgl 00:28:32.593 ************************************ 00:28:32.593 21:50:52 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl 00:28:32.851 0000:00:06.0: build_io_request_0 Invalid IO length parameter 00:28:32.851 0000:00:06.0: build_io_request_1 Invalid IO length parameter 00:28:32.851 0000:00:06.0: build_io_request_3 Invalid IO length parameter 00:28:32.851 0000:00:06.0: build_io_request_8 Invalid IO length parameter 00:28:32.851 0000:00:06.0: build_io_request_9 Invalid IO length parameter 00:28:32.851 0000:00:06.0: build_io_request_11 Invalid IO length parameter 00:28:32.851 NVMe Readv/Writev Request test 00:28:32.851 Attached to 0000:00:06.0 00:28:32.851 0000:00:06.0: build_io_request_2 test passed 00:28:32.851 0000:00:06.0: build_io_request_4 test passed 00:28:32.851 0000:00:06.0: build_io_request_5 test passed 00:28:32.851 0000:00:06.0: build_io_request_6 test passed 00:28:32.851 0000:00:06.0: build_io_request_7 test passed 00:28:32.851 0000:00:06.0: build_io_request_10 test passed 00:28:32.851 Cleaning up... 00:28:33.110 00:28:33.110 real 0m0.393s 00:28:33.110 user 0m0.208s 00:28:33.110 sys 0m0.141s 00:28:33.110 21:50:53 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:28:33.110 21:50:53 -- common/autotest_common.sh@10 -- # set +x 00:28:33.110 ************************************ 00:28:33.110 END TEST nvme_sgl 00:28:33.110 ************************************ 00:28:33.110 21:50:53 -- nvme/nvme.sh@89 -- # run_test nvme_e2edp /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp 00:28:33.110 21:50:53 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:28:33.110 21:50:53 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:28:33.110 21:50:53 -- common/autotest_common.sh@10 -- # set +x 00:28:33.110 ************************************ 00:28:33.110 START TEST nvme_e2edp 00:28:33.110 ************************************ 00:28:33.110 21:50:53 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp 00:28:33.370 NVMe Write/Read with End-to-End data protection test 00:28:33.370 Attached to 0000:00:06.0 00:28:33.370 Cleaning up... 
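For reference, the hello_world and sgl binaries exercised above can be invoked by hand against the same controller. A minimal sketch, assuming the in-tree build under /home/vagrant/spdk_repo/spdk shown in this log, root privileges, and that scripts/setup.sh has bound 0000:00:06.0 to a userspace driver:

    #!/usr/bin/env bash
    # Re-run the two example binaries from the trace above by hand.
    set -euo pipefail
    SPDK_DIR=/home/vagrant/spdk_repo/spdk              # path taken from this log
    sudo "$SPDK_DIR/scripts/setup.sh"                  # bind NVMe devices for userspace access
    sudo "$SPDK_DIR/build/examples/hello_world" -i 0   # -i 0: shared-memory ID, as in the trace
    sudo "$SPDK_DIR/test/nvme/sgl/sgl"                 # drives the build_io_request_* cases

The "Invalid IO length parameter" lines above appear to be deliberate negative cases (build_io_request_0/1/3/8/9/11); the suite still reaches END TEST nvme_sgl, so only the remaining requests are required to pass.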
00:28:33.370 00:28:33.370 real 0m0.309s 00:28:33.370 user 0m0.133s 00:28:33.370 sys 0m0.129s 00:28:33.370 21:50:53 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:28:33.370 21:50:53 -- common/autotest_common.sh@10 -- # set +x 00:28:33.370 ************************************ 00:28:33.370 END TEST nvme_e2edp 00:28:33.370 ************************************ 00:28:33.370 21:50:53 -- nvme/nvme.sh@90 -- # run_test nvme_reserve /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve 00:28:33.370 21:50:53 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:28:33.370 21:50:53 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:28:33.370 21:50:53 -- common/autotest_common.sh@10 -- # set +x 00:28:33.370 ************************************ 00:28:33.370 START TEST nvme_reserve 00:28:33.370 ************************************ 00:28:33.370 21:50:53 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve 00:28:33.629 ===================================================== 00:28:33.629 NVMe Controller at PCI bus 0, device 6, function 0 00:28:33.629 ===================================================== 00:28:33.629 Reservations: Not Supported 00:28:33.629 Reservation test passed 00:28:33.629 00:28:33.629 real 0m0.298s 00:28:33.629 user 0m0.113s 00:28:33.629 sys 0m0.143s 00:28:33.629 21:50:54 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:28:33.629 21:50:54 -- common/autotest_common.sh@10 -- # set +x 00:28:33.629 ************************************ 00:28:33.629 END TEST nvme_reserve 00:28:33.629 ************************************ 00:28:33.629 21:50:54 -- nvme/nvme.sh@91 -- # run_test nvme_err_injection /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection 00:28:33.629 21:50:54 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:28:33.629 21:50:54 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:28:33.629 21:50:54 -- common/autotest_common.sh@10 -- # set +x 00:28:33.629 ************************************ 00:28:33.629 START TEST nvme_err_injection 00:28:33.629 ************************************ 00:28:33.629 21:50:54 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection 00:28:33.888 NVMe Error Injection test 00:28:33.888 Attached to 0000:00:06.0 00:28:33.888 0000:00:06.0: get features failed as expected 00:28:33.888 0000:00:06.0: get features successfully as expected 00:28:33.888 0000:00:06.0: read failed as expected 00:28:33.888 0000:00:06.0: read successfully as expected 00:28:33.888 Cleaning up... 
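Each case in this file is wrapped by the run_test helper from autotest_common.sh, which prints the START TEST / END TEST banners and the real/user/sys timing seen throughout. A simplified sketch of that pattern (the actual SPDK helper also manages xtrace state and argument checks):

    # Simplified run_test-style wrapper; the real helper in autotest_common.sh does more.
    run_test() {
        local name=$1; shift
        local start=$SECONDS
        echo "************************************"
        echo "START TEST $name"
        echo "************************************"
        "$@"                              # execute the test command verbatim
        local rc=$?
        echo "************************************"
        echo "END TEST $name (rc=$rc, $((SECONDS - start))s)"
        echo "************************************"
        return "$rc"
    }
    # Usage, mirroring the invocation above:
    run_test nvme_err_injection /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection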
00:28:34.147 
00:28:34.147 real 0m0.283s
00:28:34.147 user 0m0.106s
00:28:34.147 sys 0m0.126s
00:28:34.147 21:50:54 -- common/autotest_common.sh@1115 -- # xtrace_disable
00:28:34.147 21:50:54 -- common/autotest_common.sh@10 -- # set +x
00:28:34.147 ************************************
00:28:34.147 END TEST nvme_err_injection
00:28:34.147 ************************************
00:28:34.147 21:50:54 -- nvme/nvme.sh@92 -- # run_test nvme_overhead /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0
00:28:34.147 21:50:54 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']'
00:28:34.147 21:50:54 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:28:34.147 21:50:54 -- common/autotest_common.sh@10 -- # set +x
00:28:34.147 ************************************
00:28:34.147 START TEST nvme_overhead
00:28:34.147 ************************************
00:28:34.147 21:50:54 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0
00:28:35.523 Initializing NVMe Controllers
00:28:35.523 Attached to 0000:00:06.0
00:28:35.523 Initialization complete. Launching workers.
00:28:35.523 submit (in ns) avg, min, max = 16720.4, 12988.6, 95475.9
00:28:35.523 complete (in ns) avg, min, max = 12876.0, 8802.7, 531926.8
00:28:35.523 
00:28:35.523 Submit histogram
00:28:35.523 ================
00:28:35.523 Range in us Cumulative Count
[submit-latency buckets from 12.975 to 95.884 us omitted; cumulative count climbs to 100.0000% at 95.884 us]
00:28:35.525 Complete histogram
00:28:35.525 ==================
00:28:35.525 Range in us Cumulative Count
[complete-latency buckets from 8.785 to 532.480 us omitted; cumulative count climbs to 100.0000% at 532.480 us]
00:28:35.526 
00:28:35.526 real 0m1.271s
00:28:35.526 user 0m1.099s
00:28:35.526 sys 0m0.124s
00:28:35.526 21:50:55 -- common/autotest_common.sh@1115 -- # xtrace_disable
00:28:35.526 21:50:55 -- common/autotest_common.sh@10 -- # set +x
00:28:35.526 ************************************
00:28:35.526 END TEST nvme_overhead
00:28:35.526 ************************************
00:28:35.526 21:50:55 -- nvme/nvme.sh@93 -- # run_test nvme_arbitration /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0
00:28:35.526 21:50:55 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']'
00:28:35.526 21:50:55 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:28:35.526 21:50:55 -- common/autotest_common.sh@10 -- # set +x
00:28:35.526 ************************************
00:28:35.526 START TEST nvme_arbitration
00:28:35.526 ************************************
00:28:35.526 21:50:55 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0
00:28:38.811 Initializing NVMe Controllers 00:28:38.811
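The submit/complete summary lines printed by nvme_overhead above are the useful signal once the per-bucket data is set aside. A hedged sketch for scraping them out of a captured console log (assumes the "avg, min, max =" format stays stable and the output was saved to overhead.log):

    # Pull the submit/complete latency summaries (ns) out of a captured overhead log.
    grep -E '(submit|complete) \(in ns\)' overhead.log |
        awk -F'= ' '{ split($2, v, ", "); split($1, w, " ");
                      print w[1] " avg " v[1] " ns, min " v[2] ", max " v[3] }'

Against the run above this would print "submit avg 16720.4 ns, min 12988.6, max 95475.9" and the matching complete line.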
Attached to 0000:00:06.0 00:28:38.811 Associating QEMU NVMe Ctrl (12340 ) with lcore 0 00:28:38.811 Associating QEMU NVMe Ctrl (12340 ) with lcore 1 00:28:38.811 Associating QEMU NVMe Ctrl (12340 ) with lcore 2 00:28:38.811 Associating QEMU NVMe Ctrl (12340 ) with lcore 3 00:28:38.811 /home/vagrant/spdk_repo/spdk/build/examples/arbitration run with configuration: 00:28:38.811 /home/vagrant/spdk_repo/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i 0 00:28:38.811 Initialization complete. Launching workers. 00:28:38.811 Starting thread on core 1 with urgent priority queue 00:28:38.811 Starting thread on core 2 with urgent priority queue 00:28:38.811 Starting thread on core 3 with urgent priority queue 00:28:38.811 Starting thread on core 0 with urgent priority queue 00:28:38.811 QEMU NVMe Ctrl (12340 ) core 0: 1301.33 IO/s 76.84 secs/100000 ios 00:28:38.811 QEMU NVMe Ctrl (12340 ) core 1: 1301.33 IO/s 76.84 secs/100000 ios 00:28:38.811 QEMU NVMe Ctrl (12340 ) core 2: 618.67 IO/s 161.64 secs/100000 ios 00:28:38.811 QEMU NVMe Ctrl (12340 ) core 3: 661.33 IO/s 151.21 secs/100000 ios 00:28:38.811 ======================================================== 00:28:38.811 00:28:38.811 00:28:38.811 real 0m3.489s 00:28:38.811 user 0m9.515s 00:28:38.811 sys 0m0.141s 00:28:38.811 21:50:59 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:28:38.811 21:50:59 -- common/autotest_common.sh@10 -- # set +x 00:28:38.811 ************************************ 00:28:38.811 END TEST nvme_arbitration 00:28:38.811 ************************************ 00:28:38.811 21:50:59 -- nvme/nvme.sh@94 -- # run_test nvme_single_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0 -L log 00:28:38.811 21:50:59 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:28:38.811 21:50:59 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:28:38.811 21:50:59 -- common/autotest_common.sh@10 -- # set +x 00:28:38.811 ************************************ 00:28:38.811 START TEST nvme_single_aen 00:28:38.811 ************************************ 00:28:38.811 21:50:59 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0 -L log 00:28:39.070 [2024-12-06 21:50:59.337584] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:28:39.070 [2024-12-06 21:50:59.337702] [ DPDK EAL parameters: aer -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:39.070 [2024-12-06 21:50:59.553569] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:06.0] resetting controller 00:28:39.330 Asynchronous Event Request test 00:28:39.330 Attached to 0000:00:06.0 00:28:39.330 Reset controller to setup AER completions for this process 00:28:39.330 Registering asynchronous event callbacks... 
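The per-core IO/s figures in the arbitration summary above can be totalled to gauge aggregate throughput. A sketch that sums the value preceding each IO/s token, so it tolerates the leading controller-name fields (assumes the output was captured to arb.log):

    # Sum per-core IO/s from arbitration output captured in arb.log.
    awk '{ for (i = 2; i <= NF; i++) if ($i == "IO/s") sum += $(i - 1) }
         END { printf "total: %.2f IO/s\n", sum }' arb.log

Applied to the four core lines above this yields 3882.66 IO/s.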
00:28:39.330 Getting orig temperature thresholds of all controllers 00:28:39.330 0000:00:06.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:28:39.330 Setting all controllers temperature threshold low to trigger AER 00:28:39.330 Waiting for all controllers temperature threshold to be set lower 00:28:39.330 0000:00:06.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:28:39.330 aer_cb - Resetting Temp Threshold for device: 0000:00:06.0 00:28:39.330 Waiting for all controllers to trigger AER and reset threshold 00:28:39.330 0000:00:06.0: Current Temperature: 323 Kelvin (50 Celsius) 00:28:39.330 Cleaning up... 00:28:39.330 00:28:39.330 real 0m0.299s 00:28:39.330 user 0m0.103s 00:28:39.330 sys 0m0.156s 00:28:39.330 21:50:59 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:28:39.330 21:50:59 -- common/autotest_common.sh@10 -- # set +x 00:28:39.330 ************************************ 00:28:39.330 END TEST nvme_single_aen 00:28:39.330 ************************************ 00:28:39.330 21:50:59 -- nvme/nvme.sh@95 -- # run_test nvme_doorbell_aers nvme_doorbell_aers 00:28:39.330 21:50:59 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:28:39.330 21:50:59 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:28:39.330 21:50:59 -- common/autotest_common.sh@10 -- # set +x 00:28:39.330 ************************************ 00:28:39.330 START TEST nvme_doorbell_aers 00:28:39.330 ************************************ 00:28:39.330 21:50:59 -- common/autotest_common.sh@1114 -- # nvme_doorbell_aers 00:28:39.330 21:50:59 -- nvme/nvme.sh@70 -- # bdfs=() 00:28:39.330 21:50:59 -- nvme/nvme.sh@70 -- # local bdfs bdf 00:28:39.330 21:50:59 -- nvme/nvme.sh@71 -- # bdfs=($(get_nvme_bdfs)) 00:28:39.330 21:50:59 -- nvme/nvme.sh@71 -- # get_nvme_bdfs 00:28:39.330 21:50:59 -- common/autotest_common.sh@1508 -- # bdfs=() 00:28:39.330 21:50:59 -- common/autotest_common.sh@1508 -- # local bdfs 00:28:39.330 21:50:59 -- common/autotest_common.sh@1509 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:28:39.330 21:50:59 -- common/autotest_common.sh@1509 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:28:39.330 21:50:59 -- common/autotest_common.sh@1509 -- # jq -r '.config[].params.traddr' 00:28:39.330 21:50:59 -- common/autotest_common.sh@1510 -- # (( 1 == 0 )) 00:28:39.330 21:50:59 -- common/autotest_common.sh@1514 -- # printf '%s\n' 0000:00:06.0 00:28:39.330 21:50:59 -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:28:39.330 21:50:59 -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:06.0' 00:28:39.590 [2024-12-06 21:50:59.941908] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 93126) is not found. Dropping the request. 00:28:49.592 Executing: test_write_invalid_db 00:28:49.592 Waiting for AER completion... 00:28:49.592 Failure: test_write_invalid_db 00:28:49.592 00:28:49.592 Executing: test_invalid_db_write_overflow_sq 00:28:49.592 Waiting for AER completion... 00:28:49.592 Failure: test_invalid_db_write_overflow_sq 00:28:49.592 00:28:49.592 Executing: test_invalid_db_write_overflow_cq 00:28:49.592 Waiting for AER completion... 
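The doorbell test below discovers controllers with gen_nvme.sh piped through jq and runs its binary under a 10-second timeout per device, exactly as the get_nvme_bdfs trace shows. Reassembled into a standalone loop from the commands visible above:

    # Enumerate NVMe PCI addresses and run the doorbell AER binary on each,
    # mirroring the get_nvme_bdfs + timeout pattern from the trace.
    rootdir=/home/vagrant/spdk_repo/spdk
    bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
    for bdf in "${bdfs[@]}"; do
        timeout --preserve-status 10 \
            "$rootdir/test/nvme/doorbell_aers/doorbell_aers" -r "trtype:PCIe traddr:$bdf"
    done

The "Failure: test_write_invalid_db" style lines appear to be the expected outcome of these deliberately invalid doorbell writes, since the suite still reaches END TEST rather than aborting.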
00:28:49.592 Failure: test_invalid_db_write_overflow_cq 00:28:49.592 00:28:49.592 00:28:49.592 real 0m10.095s 00:28:49.592 user 0m8.682s 00:28:49.592 sys 0m1.359s 00:28:49.592 21:51:09 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:28:49.592 ************************************ 00:28:49.592 END TEST nvme_doorbell_aers 00:28:49.592 ************************************ 00:28:49.592 21:51:09 -- common/autotest_common.sh@10 -- # set +x 00:28:49.592 21:51:09 -- nvme/nvme.sh@97 -- # uname 00:28:49.592 21:51:09 -- nvme/nvme.sh@97 -- # '[' Linux '!=' FreeBSD ']' 00:28:49.592 21:51:09 -- nvme/nvme.sh@98 -- # run_test nvme_multi_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0 -L log 00:28:49.592 21:51:09 -- common/autotest_common.sh@1087 -- # '[' 8 -le 1 ']' 00:28:49.592 21:51:09 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:28:49.592 21:51:09 -- common/autotest_common.sh@10 -- # set +x 00:28:49.592 ************************************ 00:28:49.592 START TEST nvme_multi_aen 00:28:49.592 ************************************ 00:28:49.592 21:51:09 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0 -L log 00:28:49.592 [2024-12-06 21:51:09.827070] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:28:49.592 [2024-12-06 21:51:09.827173] [ DPDK EAL parameters: aer -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:49.592 [2024-12-06 21:51:10.016997] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:06.0] resetting controller 00:28:49.592 [2024-12-06 21:51:10.017081] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 93126) is not found. Dropping the request. 00:28:49.593 [2024-12-06 21:51:10.017122] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 93126) is not found. Dropping the request. 00:28:49.593 [2024-12-06 21:51:10.017143] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 93126) is not found. Dropping the request. 00:28:49.593 [2024-12-06 21:51:10.028724] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:28:49.593 Child process pid: 93293 00:28:49.593 [2024-12-06 21:51:10.028979] [ DPDK EAL parameters: aer -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:49.852 [Child] Asynchronous Event Request test 00:28:49.852 [Child] Attached to 0000:00:06.0 00:28:49.852 [Child] Registering asynchronous event callbacks... 00:28:49.852 [Child] Getting orig temperature thresholds of all controllers 00:28:49.852 [Child] 0000:00:06.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:28:49.852 [Child] Waiting for all controllers to trigger AER and reset threshold 00:28:49.852 [Child] 0000:00:06.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:28:49.852 [Child] 0000:00:06.0: Current Temperature: 323 Kelvin (50 Celsius) 00:28:49.852 [Child] Cleaning up... 00:28:50.111 Asynchronous Event Request test 00:28:50.111 Attached to 0000:00:06.0 00:28:50.111 Reset controller to setup AER completions for this process 00:28:50.111 Registering asynchronous event callbacks... 
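nvme_multi_aen repeats the AER scenario with -m, which (per the "Child process pid" line above) forks a secondary process so that parent and child both register callbacks against the same controller. A sketch of rerunning it and checking that both processes observed the event; flag meanings are inferred from the surrounding trace:

    # Multi-process AER rerun: -T exercises the temperature-threshold path,
    # -L log enables the "log" trace flag, -i 0 selects shared-memory ID 0.
    AER=/home/vagrant/spdk_repo/spdk/test/nvme/aer/aer
    sudo "$AER" -m -T -i 0 -L log | tee aer.log
    grep -E 'Child process pid|Resetting Temp Threshold' aer.log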
00:28:50.111 Getting orig temperature thresholds of all controllers 00:28:50.111 0000:00:06.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:28:50.111 Setting all controllers temperature threshold low to trigger AER 00:28:50.111 Waiting for all controllers temperature threshold to be set lower 00:28:50.111 0000:00:06.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:28:50.111 aer_cb - Resetting Temp Threshold for device: 0000:00:06.0 00:28:50.111 Waiting for all controllers to trigger AER and reset threshold 00:28:50.111 0000:00:06.0: Current Temperature: 323 Kelvin (50 Celsius) 00:28:50.111 Cleaning up... 00:28:50.111 00:28:50.111 real 0m0.591s 00:28:50.111 user 0m0.191s 00:28:50.111 sys 0m0.277s 00:28:50.111 21:51:10 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:28:50.111 21:51:10 -- common/autotest_common.sh@10 -- # set +x 00:28:50.111 ************************************ 00:28:50.111 END TEST nvme_multi_aen 00:28:50.111 ************************************ 00:28:50.111 21:51:10 -- nvme/nvme.sh@99 -- # run_test nvme_startup /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000 00:28:50.111 21:51:10 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:28:50.111 21:51:10 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:28:50.111 21:51:10 -- common/autotest_common.sh@10 -- # set +x 00:28:50.111 ************************************ 00:28:50.111 START TEST nvme_startup 00:28:50.111 ************************************ 00:28:50.111 21:51:10 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000 00:28:50.370 Initializing NVMe Controllers 00:28:50.370 Attached to 0000:00:06.0 00:28:50.370 Initialization complete. 00:28:50.370 Time used:219338.188 (us). 00:28:50.370 00:28:50.370 real 0m0.293s 00:28:50.370 user 0m0.103s 00:28:50.370 sys 0m0.149s 00:28:50.370 21:51:10 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:28:50.370 21:51:10 -- common/autotest_common.sh@10 -- # set +x 00:28:50.370 ************************************ 00:28:50.370 END TEST nvme_startup 00:28:50.370 ************************************ 00:28:50.370 21:51:10 -- nvme/nvme.sh@100 -- # run_test nvme_multi_secondary nvme_multi_secondary 00:28:50.370 21:51:10 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:28:50.370 21:51:10 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:28:50.370 21:51:10 -- common/autotest_common.sh@10 -- # set +x 00:28:50.370 ************************************ 00:28:50.370 START TEST nvme_multi_secondary 00:28:50.370 ************************************ 00:28:50.370 21:51:10 -- common/autotest_common.sh@1114 -- # nvme_multi_secondary 00:28:50.370 21:51:10 -- nvme/nvme.sh@52 -- # pid0=93349 00:28:50.370 21:51:10 -- nvme/nvme.sh@51 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x1 00:28:50.370 21:51:10 -- nvme/nvme.sh@54 -- # pid1=93350 00:28:50.370 21:51:10 -- nvme/nvme.sh@55 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x4 00:28:50.370 21:51:10 -- nvme/nvme.sh@53 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 00:28:53.654 Initializing NVMe Controllers 00:28:53.654 Attached to NVMe Controller at 0000:00:06.0 [1b36:0010] 00:28:53.654 Associating PCIE (0000:00:06.0) NSID 1 with lcore 2 00:28:53.654 Initialization complete. Launching workers. 
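nvme_multi_secondary starts one spdk_nvme_perf primary and two secondaries that attach to it through the shared -i 0 shm ID, each pinned to its own core mask, as the three invocations above show. A condensed sketch of that launch pattern (the sleep is an assumed stand-in for the test's readiness checks):

    PERF=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf
    # Primary instance on core 0 runs for 5 s; two 3 s secondaries join via shm ID 0.
    sudo "$PERF" -i 0 -q 16 -w read -o 4096 -t 5 -c 0x1 &
    sleep 2   # assumption: crude wait for the primary's shared state to come up
    sudo "$PERF" -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 &
    sudo "$PERF" -i 0 -q 16 -w read -o 4096 -t 3 -c 0x4 &
    wait      # reap all three before comparing their Device Information tables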
00:28:53.654 ======================================================== 00:28:53.654 Latency(us) 00:28:53.654 Device Information : IOPS MiB/s Average min max 00:28:53.654 PCIE (0000:00:06.0) NSID 1 from core 2: 14544.00 56.81 1099.08 166.54 9431.63 00:28:53.654 ======================================================== 00:28:53.654 Total : 14544.00 56.81 1099.08 166.54 9431.63 00:28:53.654 00:28:53.654 21:51:14 -- nvme/nvme.sh@56 -- # wait 93349 00:28:53.912 Initializing NVMe Controllers 00:28:53.912 Attached to NVMe Controller at 0000:00:06.0 [1b36:0010] 00:28:53.912 Associating PCIE (0000:00:06.0) NSID 1 with lcore 1 00:28:53.912 Initialization complete. Launching workers. 00:28:53.912 ======================================================== 00:28:53.912 Latency(us) 00:28:53.912 Device Information : IOPS MiB/s Average min max 00:28:53.912 PCIE (0000:00:06.0) NSID 1 from core 1: 33860.99 132.27 472.13 160.03 1404.75 00:28:53.912 ======================================================== 00:28:53.912 Total : 33860.99 132.27 472.13 160.03 1404.75 00:28:53.912 00:28:56.439 Initializing NVMe Controllers 00:28:56.439 Attached to NVMe Controller at 0000:00:06.0 [1b36:0010] 00:28:56.439 Associating PCIE (0000:00:06.0) NSID 1 with lcore 0 00:28:56.439 Initialization complete. Launching workers. 00:28:56.439 ======================================================== 00:28:56.439 Latency(us) 00:28:56.439 Device Information : IOPS MiB/s Average min max 00:28:56.439 PCIE (0000:00:06.0) NSID 1 from core 0: 43132.93 168.49 370.58 111.60 2814.51 00:28:56.439 ======================================================== 00:28:56.439 Total : 43132.93 168.49 370.58 111.60 2814.51 00:28:56.439 00:28:56.439 21:51:16 -- nvme/nvme.sh@57 -- # wait 93350 00:28:56.439 21:51:16 -- nvme/nvme.sh@61 -- # pid0=93415 00:28:56.439 21:51:16 -- nvme/nvme.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x1 00:28:56.439 21:51:16 -- nvme/nvme.sh@63 -- # pid1=93416 00:28:56.439 21:51:16 -- nvme/nvme.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x4 00:28:56.439 21:51:16 -- nvme/nvme.sh@62 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 00:28:59.724 Initializing NVMe Controllers 00:28:59.724 Attached to NVMe Controller at 0000:00:06.0 [1b36:0010] 00:28:59.724 Associating PCIE (0000:00:06.0) NSID 1 with lcore 1 00:28:59.724 Initialization complete. Launching workers. 00:28:59.724 ======================================================== 00:28:59.724 Latency(us) 00:28:59.724 Device Information : IOPS MiB/s Average min max 00:28:59.724 PCIE (0000:00:06.0) NSID 1 from core 1: 35285.33 137.83 453.06 154.67 1353.97 00:28:59.724 ======================================================== 00:28:59.724 Total : 35285.33 137.83 453.06 154.67 1353.97 00:28:59.724 00:28:59.724 Initializing NVMe Controllers 00:28:59.724 Attached to NVMe Controller at 0000:00:06.0 [1b36:0010] 00:28:59.724 Associating PCIE (0000:00:06.0) NSID 1 with lcore 0 00:28:59.724 Initialization complete. Launching workers. 
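The MiB/s column in these Device Information tables follows directly from IOPS at the run's 4096-byte transfer size (-o 4096): MiB/s = IOPS * 4096 / 1048576. Spot-checking the "from core 1" row above:

    # 33860.99 IO/s at 4 KiB per IO should reproduce the table's MiB/s figure.
    awk 'BEGIN { printf "%.2f MiB/s\n", 33860.99 * 4096 / 1048576 }'   # prints 132.27 MiB/s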
00:28:59.724 ======================================================== 00:28:59.724 Latency(us) 00:28:59.724 Device Information : IOPS MiB/s Average min max 00:28:59.724 PCIE (0000:00:06.0) NSID 1 from core 0: 34119.99 133.28 468.52 159.32 4003.41 00:28:59.724 ======================================================== 00:28:59.724 Total : 34119.99 133.28 468.52 159.32 4003.41 00:28:59.724 00:29:01.625 Initializing NVMe Controllers 00:29:01.625 Attached to NVMe Controller at 0000:00:06.0 [1b36:0010] 00:29:01.625 Associating PCIE (0000:00:06.0) NSID 1 with lcore 2 00:29:01.625 Initialization complete. Launching workers. 00:29:01.625 ======================================================== 00:29:01.625 Latency(us) 00:29:01.625 Device Information : IOPS MiB/s Average min max 00:29:01.625 PCIE (0000:00:06.0) NSID 1 from core 2: 17884.80 69.86 893.65 157.34 9836.53 00:29:01.626 ======================================================== 00:29:01.626 Total : 17884.80 69.86 893.65 157.34 9836.53 00:29:01.626 00:29:01.626 21:51:22 -- nvme/nvme.sh@65 -- # wait 93415 00:29:01.626 21:51:22 -- nvme/nvme.sh@66 -- # wait 93416 00:29:01.626 00:29:01.626 real 0m11.300s 00:29:01.626 user 0m18.643s 00:29:01.626 sys 0m1.014s 00:29:01.626 21:51:22 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:29:01.626 21:51:22 -- common/autotest_common.sh@10 -- # set +x 00:29:01.626 ************************************ 00:29:01.626 END TEST nvme_multi_secondary 00:29:01.626 ************************************ 00:29:01.626 21:51:22 -- nvme/nvme.sh@101 -- # trap - SIGINT SIGTERM EXIT 00:29:01.626 21:51:22 -- nvme/nvme.sh@102 -- # kill_stub 00:29:01.626 21:51:22 -- common/autotest_common.sh@1075 -- # [[ -e /proc/92742 ]] 00:29:01.626 21:51:22 -- common/autotest_common.sh@1076 -- # kill 92742 00:29:01.626 21:51:22 -- common/autotest_common.sh@1077 -- # wait 92742 00:29:02.562 [2024-12-06 21:51:22.975258] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 93292) is not found. Dropping the request. 00:29:02.563 [2024-12-06 21:51:22.975361] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 93292) is not found. Dropping the request. 00:29:02.563 [2024-12-06 21:51:22.975402] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 93292) is not found. Dropping the request. 00:29:02.563 [2024-12-06 21:51:22.975426] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 93292) is not found. Dropping the request. 00:29:02.822 21:51:23 -- common/autotest_common.sh@1079 -- # rm -f /var/run/spdk_stub0 00:29:02.822 21:51:23 -- common/autotest_common.sh@1083 -- # echo 2 00:29:02.822 21:51:23 -- nvme/nvme.sh@105 -- # run_test bdev_nvme_reset_stuck_adm_cmd /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:29:02.822 21:51:23 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:29:02.822 21:51:23 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:29:02.822 21:51:23 -- common/autotest_common.sh@10 -- # set +x 00:29:02.822 ************************************ 00:29:02.822 START TEST bdev_nvme_reset_stuck_adm_cmd 00:29:02.822 ************************************ 00:29:02.822 21:51:23 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:29:03.081 * Looking for test storage... 
00:29:03.081 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme
[standard lcov version-detection trace and repeated LCOV_OPTS/LCOV export blocks omitted]
00:29:03.082 21:51:23 -- nvme/nvme_reset_stuck_adm_cmd.sh@18 -- # ctrlr_name=nvme0
00:29:03.082 21:51:23 -- nvme/nvme_reset_stuck_adm_cmd.sh@20 -- # err_injection_timeout=15000000
00:29:03.082 21:51:23 -- nvme/nvme_reset_stuck_adm_cmd.sh@22 -- # test_timeout=5
00:29:03.082 21:51:23 -- nvme/nvme_reset_stuck_adm_cmd.sh@25 -- # err_injection_sct=0
00:29:03.082 21:51:23 -- nvme/nvme_reset_stuck_adm_cmd.sh@27 -- # err_injection_sc=1
00:29:03.082 21:51:23 -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # get_first_nvme_bdf
00:29:03.082 21:51:23 -- common/autotest_common.sh@1519 -- # bdfs=()
00:29:03.082 21:51:23 -- common/autotest_common.sh@1519 -- # local bdfs
00:29:03.082 21:51:23 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs))
00:29:03.082 21:51:23 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs
00:29:03.082 21:51:23 -- common/autotest_common.sh@1508 -- # bdfs=()
00:29:03.082 21:51:23 -- common/autotest_common.sh@1508 -- # local bdfs
00:29:03.082 21:51:23 -- common/autotest_common.sh@1509 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
00:29:03.082 21:51:23 -- common/autotest_common.sh@1509 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh
00:29:03.082 21:51:23 -- common/autotest_common.sh@1509 -- # jq -r '.config[].params.traddr'
00:29:03.082 21:51:23 -- common/autotest_common.sh@1510 -- # (( 1 == 0 ))
00:29:03.082 21:51:23 -- common/autotest_common.sh@1514 -- # printf '%s\n' 0000:00:06.0
00:29:03.082 21:51:23 -- common/autotest_common.sh@1522 -- # echo 0000:00:06.0
00:29:03.082 21:51:23 -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # bdf=0000:00:06.0
00:29:03.082 21:51:23 -- nvme/nvme_reset_stuck_adm_cmd.sh@30 -- # '[' -z 0000:00:06.0 ']'
00:29:03.082 21:51:23 -- nvme/nvme_reset_stuck_adm_cmd.sh@36 -- # spdk_target_pid=93575
00:29:03.082 21:51:23 -- nvme/nvme_reset_stuck_adm_cmd.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0xF
00:29:03.082 21:51:23 -- nvme/nvme_reset_stuck_adm_cmd.sh@37 -- # trap 'killprocess "$spdk_target_pid"; exit 1' SIGINT SIGTERM EXIT
00:29:03.082 21:51:23 -- nvme/nvme_reset_stuck_adm_cmd.sh@38 -- # waitforlisten 93575
00:29:03.082 21:51:23 -- common/autotest_common.sh@829 -- # '[' -z 93575 ']'
00:29:03.082 21:51:23 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:29:03.082 21:51:23 -- common/autotest_common.sh@834 -- # local max_retries=100
00:29:03.082 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:29:03.082 21:51:23 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:29:03.082 21:51:23 -- common/autotest_common.sh@838 -- # xtrace_disable
00:29:03.082 21:51:23 -- common/autotest_common.sh@10 -- # set +x
00:29:03.341 [2024-12-06 21:51:23.583763] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
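The trace that follows is the core of bdev_nvme_reset_stuck_adm_cmd: spdk_tgt is started, a controller is attached over RPC, a one-shot error injection holds the next admin Get Features command (opcode 10) without submitting it, and a controller reset is then expected to complete anyway. Condensed into a hedged standalone sketch; the sleep stands in for the waitforlisten helper and the bdev_nvme_send_cmd payload is elided:

    SPDK=/home/vagrant/spdk_repo/spdk
    "$SPDK/build/bin/spdk_tgt" -m 0xF & tgt=$!
    sleep 2                                   # assumption: crude stand-in for waitforlisten
    rpc="$SPDK/scripts/rpc.py"
    "$rpc" bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:06.0
    # Hold the next admin opcode 10 (Get Features) for up to 15 s instead of submitting it.
    "$rpc" bdev_nvme_add_error_injection -n nvme0 --cmd-type admin --opc 10 \
        --timeout-in-us 15000000 --err-count 1 --sct 0 --sc 1 --do_not_submit
    # With that command stuck, a reset must still succeed:
    "$rpc" bdev_nvme_reset_controller nvme0
    "$rpc" bdev_nvme_detach_controller nvme0
    kill "$tgt"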
00:29:03.341 [2024-12-06 21:51:23.583926] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid93575 ] 00:29:03.341 [2024-12-06 21:51:23.770506] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:03.601 [2024-12-06 21:51:23.939903] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:29:03.601 [2024-12-06 21:51:23.940288] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:29:03.601 [2024-12-06 21:51:23.941022] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:29:03.601 [2024-12-06 21:51:23.941236] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:29:03.601 [2024-12-06 21:51:23.941273] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:04.980 21:51:25 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:29:04.980 21:51:25 -- common/autotest_common.sh@862 -- # return 0 00:29:04.980 21:51:25 -- nvme/nvme_reset_stuck_adm_cmd.sh@40 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:06.0 00:29:04.980 21:51:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:04.980 21:51:25 -- common/autotest_common.sh@10 -- # set +x 00:29:04.980 nvme0n1 00:29:04.980 21:51:25 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:04.980 21:51:25 -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # mktemp /tmp/err_inj_XXXXX.txt 00:29:04.980 21:51:25 -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # tmp_file=/tmp/err_inj_DMklr.txt 00:29:04.980 21:51:25 -- nvme/nvme_reset_stuck_adm_cmd.sh@44 -- # rpc_cmd bdev_nvme_add_error_injection -n nvme0 --cmd-type admin --opc 10 --timeout-in-us 15000000 --err-count 1 --sct 0 --sc 1 --do_not_submit 00:29:04.980 21:51:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:04.980 21:51:25 -- common/autotest_common.sh@10 -- # set +x 00:29:04.980 true 00:29:04.980 21:51:25 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:04.980 21:51:25 -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # date +%s 00:29:04.980 21:51:25 -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # start_time=1733521885 00:29:04.980 21:51:25 -- nvme/nvme_reset_stuck_adm_cmd.sh@51 -- # get_feat_pid=93606 00:29:04.980 21:51:25 -- nvme/nvme_reset_stuck_adm_cmd.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_send_cmd -n nvme0 -t admin -r c2h -c CgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAcAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA== 00:29:04.980 21:51:25 -- nvme/nvme_reset_stuck_adm_cmd.sh@52 -- # trap 'killprocess "$get_feat_pid"; exit 1' SIGINT SIGTERM EXIT 00:29:04.980 21:51:25 -- nvme/nvme_reset_stuck_adm_cmd.sh@55 -- # sleep 2 00:29:06.880 21:51:27 -- nvme/nvme_reset_stuck_adm_cmd.sh@57 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:29:06.880 21:51:27 -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:06.880 21:51:27 -- common/autotest_common.sh@10 -- # set +x 00:29:06.880 [2024-12-06 21:51:27.371092] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:06.0] resetting controller 00:29:06.880 [2024-12-06 21:51:27.371575] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:06.880 [2024-12-06 21:51:27.371664] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:0 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:29:06.880 [2024-12-06 21:51:27.371704] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:06.880 [2024-12-06 21:51:27.373683] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:29:07.138 21:51:27 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:07.138 Waiting for RPC error injection (bdev_nvme_send_cmd) process PID: 93606 00:29:07.138 21:51:27 -- nvme/nvme_reset_stuck_adm_cmd.sh@59 -- # echo 'Waiting for RPC error injection (bdev_nvme_send_cmd) process PID:' 93606 00:29:07.138 21:51:27 -- nvme/nvme_reset_stuck_adm_cmd.sh@60 -- # wait 93606 00:29:07.138 21:51:27 -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # date +%s 00:29:07.138 21:51:27 -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # diff_time=2 00:29:07.138 21:51:27 -- nvme/nvme_reset_stuck_adm_cmd.sh@62 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:07.138 21:51:27 -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:07.138 21:51:27 -- common/autotest_common.sh@10 -- # set +x 00:29:07.138 21:51:27 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:07.139 21:51:27 -- nvme/nvme_reset_stuck_adm_cmd.sh@64 -- # trap - SIGINT SIGTERM EXIT 00:29:07.139 21:51:27 -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # jq -r .cpl /tmp/err_inj_DMklr.txt 00:29:07.139 21:51:27 -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # spdk_nvme_status=AAAAAAAAAAAAAAAAAAACAA== 00:29:07.139 21:51:27 -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 1 255 00:29:07.139 21:51:27 -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:29:07.139 21:51:27 -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:29:07.139 21:51:27 -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 00:29:07.139 21:51:27 -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:29:07.139 21:51:27 -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:29:07.139 21:51:27 -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:29:07.139 21:51:27 -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 1 00:29:07.139 21:51:27 -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # nvme_status_sc=0x1 00:29:07.139 21:51:27 -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 9 3 00:29:07.139 21:51:27 -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:29:07.139 21:51:27 -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:29:07.139 21:51:27 -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:29:07.139 21:51:27 -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 00:29:07.139 21:51:27 -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:29:07.139 21:51:27 -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:29:07.139 21:51:27 -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 0 00:29:07.139 21:51:27 -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # nvme_status_sct=0x0 00:29:07.139 21:51:27 -- nvme/nvme_reset_stuck_adm_cmd.sh@71 -- # rm -f /tmp/err_inj_DMklr.txt 00:29:07.139 21:51:27 -- nvme/nvme_reset_stuck_adm_cmd.sh@73 -- # killprocess 93575 00:29:07.139 21:51:27 -- common/autotest_common.sh@936 -- # '[' -z 93575 ']' 00:29:07.139 21:51:27 -- common/autotest_common.sh@940 -- # kill -0 93575 00:29:07.139 21:51:27 -- common/autotest_common.sh@941 -- # uname 00:29:07.139 21:51:27 -- 
common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:29:07.139 21:51:27 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 93575 00:29:07.139 21:51:27 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:29:07.139 21:51:27 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:29:07.139 killing process with pid 93575 00:29:07.139 21:51:27 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 93575' 00:29:07.139 21:51:27 -- common/autotest_common.sh@955 -- # kill 93575 00:29:07.139 21:51:27 -- common/autotest_common.sh@960 -- # wait 93575 00:29:09.058 21:51:29 -- nvme/nvme_reset_stuck_adm_cmd.sh@75 -- # (( err_injection_sc != nvme_status_sc || err_injection_sct != nvme_status_sct )) 00:29:09.058 21:51:29 -- nvme/nvme_reset_stuck_adm_cmd.sh@79 -- # (( diff_time > test_timeout )) 00:29:09.058 00:29:09.058 real 0m6.168s 00:29:09.058 user 0m21.826s 00:29:09.058 sys 0m0.694s 00:29:09.058 21:51:29 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:29:09.058 21:51:29 -- common/autotest_common.sh@10 -- # set +x 00:29:09.058 ************************************ 00:29:09.058 END TEST bdev_nvme_reset_stuck_adm_cmd 00:29:09.058 ************************************ 00:29:09.058 21:51:29 -- nvme/nvme.sh@107 -- # [[ y == y ]] 00:29:09.058 21:51:29 -- nvme/nvme.sh@108 -- # run_test nvme_fio nvme_fio_test 00:29:09.058 21:51:29 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:29:09.058 21:51:29 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:29:09.058 21:51:29 -- common/autotest_common.sh@10 -- # set +x 00:29:09.058 ************************************ 00:29:09.058 START TEST nvme_fio 00:29:09.058 ************************************ 00:29:09.058 21:51:29 -- common/autotest_common.sh@1114 -- # nvme_fio_test 00:29:09.058 21:51:29 -- nvme/nvme.sh@31 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme 00:29:09.058 21:51:29 -- nvme/nvme.sh@32 -- # ran_fio=false 00:29:09.058 21:51:29 -- nvme/nvme.sh@33 -- # get_nvme_bdfs 00:29:09.058 21:51:29 -- common/autotest_common.sh@1508 -- # bdfs=() 00:29:09.058 21:51:29 -- common/autotest_common.sh@1508 -- # local bdfs 00:29:09.058 21:51:29 -- common/autotest_common.sh@1509 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:29:09.058 21:51:29 -- common/autotest_common.sh@1509 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:29:09.058 21:51:29 -- common/autotest_common.sh@1509 -- # jq -r '.config[].params.traddr' 00:29:09.058 21:51:29 -- common/autotest_common.sh@1510 -- # (( 1 == 0 )) 00:29:09.058 21:51:29 -- common/autotest_common.sh@1514 -- # printf '%s\n' 0000:00:06.0 00:29:09.058 21:51:29 -- nvme/nvme.sh@33 -- # bdfs=('0000:00:06.0') 00:29:09.058 21:51:29 -- nvme/nvme.sh@33 -- # local bdfs bdf 00:29:09.058 21:51:29 -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:29:09.058 21:51:29 -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:06.0' 00:29:09.058 21:51:29 -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:29:09.316 21:51:29 -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:06.0' 00:29:09.316 21:51:29 -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:29:09.575 21:51:30 -- nvme/nvme.sh@41 -- # bs=4096 00:29:09.575 21:51:30 -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.06.0' --bs=4096 00:29:09.575 21:51:30 -- 
common/autotest_common.sh@1349 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.06.0' --bs=4096 00:29:09.575 21:51:30 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:29:09.575 21:51:30 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:29:09.575 21:51:30 -- common/autotest_common.sh@1328 -- # local sanitizers 00:29:09.575 21:51:30 -- common/autotest_common.sh@1329 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:29:09.575 21:51:30 -- common/autotest_common.sh@1330 -- # shift 00:29:09.575 21:51:30 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:29:09.575 21:51:30 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:29:09.575 21:51:30 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:29:09.575 21:51:30 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:29:09.575 21:51:30 -- common/autotest_common.sh@1334 -- # grep libasan 00:29:09.834 21:51:30 -- common/autotest_common.sh@1334 -- # asan_lib=/lib/x86_64-linux-gnu/libasan.so.8 00:29:09.834 21:51:30 -- common/autotest_common.sh@1335 -- # [[ -n /lib/x86_64-linux-gnu/libasan.so.8 ]] 00:29:09.834 21:51:30 -- common/autotest_common.sh@1336 -- # break 00:29:09.834 21:51:30 -- common/autotest_common.sh@1341 -- # LD_PRELOAD='/lib/x86_64-linux-gnu/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:29:09.834 21:51:30 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.06.0' --bs=4096 00:29:09.834 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:29:09.834 fio-3.35 00:29:09.834 Starting 1 thread 00:29:13.126 00:29:13.126 test: (groupid=0, jobs=1): err= 0: pid=93742: Fri Dec 6 21:51:33 2024 00:29:13.126 read: IOPS=12.7k, BW=49.7MiB/s (52.1MB/s)(99.5MiB/2001msec) 00:29:13.126 slat (usec): min=4, max=129, avg= 7.79, stdev= 3.96 00:29:13.126 clat (usec): min=304, max=11881, avg=5004.95, stdev=571.67 00:29:13.126 lat (usec): min=310, max=12011, avg=5012.75, stdev=572.36 00:29:13.126 clat percentiles (usec): 00:29:13.126 | 1.00th=[ 3884], 5.00th=[ 4228], 10.00th=[ 4359], 20.00th=[ 4555], 00:29:13.126 | 30.00th=[ 4686], 40.00th=[ 4883], 50.00th=[ 5014], 60.00th=[ 5145], 00:29:13.126 | 70.00th=[ 5276], 80.00th=[ 5473], 90.00th=[ 5669], 95.00th=[ 5866], 00:29:13.126 | 99.00th=[ 6325], 99.50th=[ 6456], 99.90th=[ 8979], 99.95th=[10814], 00:29:13.126 | 99.99th=[11863] 00:29:13.126 bw ( KiB/s): min=49277, max=50384, per=98.04%, avg=49913.67, stdev=571.94, samples=3 00:29:13.126 iops : min=12319, max=12596, avg=12478.33, stdev=143.12, samples=3 00:29:13.126 write: IOPS=12.7k, BW=49.6MiB/s (52.0MB/s)(99.3MiB/2001msec); 0 zone resets 00:29:13.126 slat (usec): min=4, max=133, avg= 8.21, stdev= 4.13 00:29:13.126 clat (usec): min=268, max=11779, avg=5021.17, stdev=577.81 00:29:13.126 lat (usec): min=275, max=11803, avg=5029.38, stdev=578.44 00:29:13.126 clat percentiles (usec): 00:29:13.126 | 1.00th=[ 3884], 5.00th=[ 4228], 10.00th=[ 4359], 20.00th=[ 4555], 00:29:13.126 | 30.00th=[ 4752], 40.00th=[ 4883], 50.00th=[ 5014], 60.00th=[ 5145], 00:29:13.126 | 70.00th=[ 5276], 80.00th=[ 5473], 90.00th=[ 5669], 95.00th=[ 5932], 00:29:13.126 | 99.00th=[ 6325], 99.50th=[ 6587], 99.90th=[ 9503], 99.95th=[10814], 00:29:13.126 | 
99.99th=[11731] 00:29:13.126 bw ( KiB/s): min=49528, max=50640, per=98.31%, avg=49942.67, stdev=607.50, samples=3 00:29:13.126 iops : min=12382, max=12660, avg=12485.67, stdev=151.88, samples=3 00:29:13.126 lat (usec) : 500=0.01%, 750=0.01%, 1000=0.01% 00:29:13.126 lat (msec) : 2=0.04%, 4=1.31%, 10=98.55%, 20=0.08% 00:29:13.126 cpu : usr=99.70%, sys=0.25%, ctx=5, majf=0, minf=609 00:29:13.126 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:29:13.126 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:13.126 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:29:13.126 issued rwts: total=25468,25412,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:13.126 latency : target=0, window=0, percentile=100.00%, depth=128 00:29:13.126 00:29:13.126 Run status group 0 (all jobs): 00:29:13.126 READ: bw=49.7MiB/s (52.1MB/s), 49.7MiB/s-49.7MiB/s (52.1MB/s-52.1MB/s), io=99.5MiB (104MB), run=2001-2001msec 00:29:13.126 WRITE: bw=49.6MiB/s (52.0MB/s), 49.6MiB/s-49.6MiB/s (52.0MB/s-52.0MB/s), io=99.3MiB (104MB), run=2001-2001msec 00:29:13.126 ----------------------------------------------------- 00:29:13.126 Suppressions used: 00:29:13.126 count bytes template 00:29:13.126 1 32 /usr/src/fio/parse.c 00:29:13.126 ----------------------------------------------------- 00:29:13.126 00:29:13.126 21:51:33 -- nvme/nvme.sh@44 -- # ran_fio=true 00:29:13.126 21:51:33 -- nvme/nvme.sh@46 -- # true 00:29:13.126 00:29:13.126 real 0m3.941s 00:29:13.126 user 0m3.205s 00:29:13.126 sys 0m0.383s 00:29:13.126 21:51:33 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:29:13.126 21:51:33 -- common/autotest_common.sh@10 -- # set +x 00:29:13.126 ************************************ 00:29:13.126 END TEST nvme_fio 00:29:13.126 ************************************ 00:29:13.127 00:29:13.127 real 0m47.878s 00:29:13.127 user 2m8.608s 00:29:13.127 sys 0m8.185s 00:29:13.127 21:51:33 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:29:13.127 21:51:33 -- common/autotest_common.sh@10 -- # set +x 00:29:13.127 ************************************ 00:29:13.127 END TEST nvme 00:29:13.127 ************************************ 00:29:13.127 21:51:33 -- spdk/autotest.sh@210 -- # [[ 0 -eq 1 ]] 00:29:13.127 21:51:33 -- spdk/autotest.sh@214 -- # run_test nvme_scc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:29:13.127 21:51:33 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:29:13.127 21:51:33 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:29:13.127 21:51:33 -- common/autotest_common.sh@10 -- # set +x 00:29:13.127 ************************************ 00:29:13.127 START TEST nvme_scc 00:29:13.127 ************************************ 00:29:13.127 21:51:33 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:29:13.127 * Looking for test storage... 
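[editor note] Before that fio run, the harness resolved the ASan runtime with ldd and put it ahead of the SPDK ioengine in LD_PRELOAD: a plugin built with AddressSanitizer generally cannot be dlopen()ed into a plain fio binary unless the ASan runtime is the first library in the process. A hedged sketch of that launch sequence (paths as in this job; the empty-fallback branch is illustrative):

    # Find the ASan runtime the fio plugin links against, then preload it
    # ahead of the plugin so fio can dlopen() the ioengine under ASan.
    plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme
    asan_lib=$(ldd "$plugin" | awk '/libasan/ {print $3; exit}')
    LD_PRELOAD="${asan_lib:+$asan_lib }$plugin" \
        /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio \
        '--filename=trtype=PCIe traddr=0000.00.06.0' --bs=4096

The traddr is written with dots rather than colons because fio reserves ':' as a filename separator; the SPDK plugin maps them back to a normal PCIe address.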
00:29:13.127 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:29:13.127 21:51:33 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:29:13.127 21:51:33 -- common/autotest_common.sh@1690 -- # lcov --version 00:29:13.127 21:51:33 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:29:13.387 21:51:33 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:29:13.387 21:51:33 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:29:13.387 21:51:33 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:29:13.387 21:51:33 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:29:13.387 21:51:33 -- scripts/common.sh@335 -- # IFS=.-: 00:29:13.387 21:51:33 -- scripts/common.sh@335 -- # read -ra ver1 00:29:13.387 21:51:33 -- scripts/common.sh@336 -- # IFS=.-: 00:29:13.387 21:51:33 -- scripts/common.sh@336 -- # read -ra ver2 00:29:13.387 21:51:33 -- scripts/common.sh@337 -- # local 'op=<' 00:29:13.387 21:51:33 -- scripts/common.sh@339 -- # ver1_l=2 00:29:13.387 21:51:33 -- scripts/common.sh@340 -- # ver2_l=1 00:29:13.387 21:51:33 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:29:13.387 21:51:33 -- scripts/common.sh@343 -- # case "$op" in 00:29:13.387 21:51:33 -- scripts/common.sh@344 -- # : 1 00:29:13.387 21:51:33 -- scripts/common.sh@363 -- # (( v = 0 )) 00:29:13.387 21:51:33 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:29:13.387 21:51:33 -- scripts/common.sh@364 -- # decimal 1 00:29:13.387 21:51:33 -- scripts/common.sh@352 -- # local d=1 00:29:13.387 21:51:33 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:13.387 21:51:33 -- scripts/common.sh@354 -- # echo 1 00:29:13.387 21:51:33 -- scripts/common.sh@364 -- # ver1[v]=1 00:29:13.387 21:51:33 -- scripts/common.sh@365 -- # decimal 2 00:29:13.387 21:51:33 -- scripts/common.sh@352 -- # local d=2 00:29:13.387 21:51:33 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:13.387 21:51:33 -- scripts/common.sh@354 -- # echo 2 00:29:13.387 21:51:33 -- scripts/common.sh@365 -- # ver2[v]=2 00:29:13.387 21:51:33 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:29:13.387 21:51:33 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:29:13.387 21:51:33 -- scripts/common.sh@367 -- # return 0 00:29:13.387 21:51:33 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:13.387 21:51:33 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:29:13.387 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:13.387 --rc genhtml_branch_coverage=1 00:29:13.387 --rc genhtml_function_coverage=1 00:29:13.387 --rc genhtml_legend=1 00:29:13.387 --rc geninfo_all_blocks=1 00:29:13.387 --rc geninfo_unexecuted_blocks=1 00:29:13.387 00:29:13.387 ' 00:29:13.387 21:51:33 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:29:13.387 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:13.387 --rc genhtml_branch_coverage=1 00:29:13.387 --rc genhtml_function_coverage=1 00:29:13.387 --rc genhtml_legend=1 00:29:13.387 --rc geninfo_all_blocks=1 00:29:13.387 --rc geninfo_unexecuted_blocks=1 00:29:13.387 00:29:13.387 ' 00:29:13.387 21:51:33 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:29:13.387 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:13.387 --rc genhtml_branch_coverage=1 00:29:13.387 --rc genhtml_function_coverage=1 00:29:13.387 --rc genhtml_legend=1 00:29:13.387 --rc geninfo_all_blocks=1 00:29:13.387 --rc geninfo_unexecuted_blocks=1 00:29:13.387 00:29:13.387 ' 00:29:13.387 21:51:33 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:29:13.387 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:13.387 --rc genhtml_branch_coverage=1 00:29:13.387 --rc genhtml_function_coverage=1 00:29:13.387 --rc genhtml_legend=1 00:29:13.387 --rc geninfo_all_blocks=1 00:29:13.387 --rc geninfo_unexecuted_blocks=1 00:29:13.387 00:29:13.387 ' 00:29:13.387 21:51:33 -- cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:29:13.387 21:51:33 -- nvme/functions.sh@7 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:29:13.387 21:51:33 -- nvme/functions.sh@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common/nvme/../../../ 00:29:13.387 21:51:33 -- nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:29:13.387 21:51:33 -- nvme/functions.sh@8 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:29:13.387 21:51:33 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:13.387 21:51:33 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:13.387 21:51:33 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:13.387 21:51:33 -- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:29:13.387 21:51:33 -- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:29:13.387 21:51:33 -- paths/export.sh@4 -- # PATH=/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:29:13.387 21:51:33 -- paths/export.sh@5 -- # PATH=/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:29:13.387 21:51:33 -- paths/export.sh@6 -- # export PATH 00:29:13.387 21:51:33 -- paths/export.sh@7 -- # echo 
/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:29:13.387 21:51:33 -- nvme/functions.sh@10 -- # ctrls=() 00:29:13.387 21:51:33 -- nvme/functions.sh@10 -- # declare -A ctrls 00:29:13.387 21:51:33 -- nvme/functions.sh@11 -- # nvmes=() 00:29:13.387 21:51:33 -- nvme/functions.sh@11 -- # declare -A nvmes 00:29:13.387 21:51:33 -- nvme/functions.sh@12 -- # bdfs=() 00:29:13.387 21:51:33 -- nvme/functions.sh@12 -- # declare -A bdfs 00:29:13.387 21:51:33 -- nvme/functions.sh@13 -- # ordered_ctrls=() 00:29:13.387 21:51:33 -- nvme/functions.sh@13 -- # declare -a ordered_ctrls 00:29:13.387 21:51:33 -- nvme/functions.sh@14 -- # nvme_name= 00:29:13.387 21:51:33 -- cuse/common.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:29:13.387 21:51:33 -- nvme/nvme_scc.sh@12 -- # uname 00:29:13.387 21:51:33 -- nvme/nvme_scc.sh@12 -- # [[ Linux == Linux ]] 00:29:13.387 21:51:33 -- nvme/nvme_scc.sh@12 -- # [[ QEMU == QEMU ]] 00:29:13.387 21:51:33 -- nvme/nvme_scc.sh@14 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:29:13.647 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15,mount@vda:vda16, so not binding PCI dev 00:29:13.647 Waiting for block devices as requested 00:29:13.647 0000:00:06.0 (1b36 0010): uio_pci_generic -> nvme 00:29:13.907 21:51:34 -- nvme/nvme_scc.sh@16 -- # scan_nvme_ctrls 00:29:13.907 21:51:34 -- nvme/functions.sh@45 -- # local ctrl ctrl_dev reg val ns pci 00:29:13.907 21:51:34 -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:29:13.907 21:51:34 -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme0 ]] 00:29:13.907 21:51:34 -- nvme/functions.sh@49 -- # pci=0000:00:06.0 00:29:13.907 21:51:34 -- nvme/functions.sh@50 -- # pci_can_use 0000:00:06.0 00:29:13.907 21:51:34 -- scripts/common.sh@15 -- # local i 00:29:13.907 21:51:34 -- scripts/common.sh@18 -- # [[ =~ 0000:00:06.0 ]] 00:29:13.907 21:51:34 -- scripts/common.sh@22 -- # [[ -z '' ]] 00:29:13.907 21:51:34 -- scripts/common.sh@24 -- # return 0 00:29:13.907 21:51:34 -- nvme/functions.sh@51 -- # ctrl_dev=nvme0 00:29:13.907 21:51:34 -- nvme/functions.sh@52 -- # nvme_get nvme0 id-ctrl /dev/nvme0 00:29:13.907 21:51:34 -- nvme/functions.sh@17 -- # local ref=nvme0 reg val 00:29:13.907 21:51:34 -- nvme/functions.sh@18 -- # shift 00:29:13.907 21:51:34 -- nvme/functions.sh@20 -- # local -gA 'nvme0=()' 00:29:13.907 21:51:34 -- nvme/functions.sh@21 -- # IFS=: 00:29:13.907 21:51:34 -- nvme/functions.sh@21 -- # read -r reg val 00:29:13.907 21:51:34 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0 00:29:13.907 21:51:34 -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:29:13.907 21:51:34 -- nvme/functions.sh@21 -- # IFS=: 00:29:13.907 21:51:34 -- nvme/functions.sh@21 -- # read -r reg val 00:29:13.907 21:51:34 -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:29:13.907 21:51:34 -- nvme/functions.sh@23 -- # eval 'nvme0[vid]="0x1b36"' 00:29:13.907 21:51:34 -- nvme/functions.sh@23 -- # nvme0[vid]=0x1b36 00:29:13.907 21:51:34 -- nvme/functions.sh@21 -- # IFS=: 00:29:13.907 21:51:34 -- nvme/functions.sh@21 -- # read 
-r reg val 00:29:13.907 21:51:34 -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:29:13.907 21:51:34 -- nvme/functions.sh@23 -- # eval 'nvme0[ssvid]="0x1af4"' 00:29:13.907 21:51:34 -- nvme/functions.sh@23 -- # nvme0[ssvid]=0x1af4 00:29:13.907 21:51:34 -- nvme/functions.sh@21 -- # IFS=: 00:29:13.907 21:51:34 -- nvme/functions.sh@21 -- # read -r reg val 00:29:13.907 21:51:34 -- nvme/functions.sh@22 -- # [[ -n 12340 ]] 00:29:13.907 21:51:34 -- nvme/functions.sh@23 -- # eval 'nvme0[sn]="12340 "' 00:29:13.907 21:51:34 -- nvme/functions.sh@23 -- # nvme0[sn]='12340 ' 00:29:13.907 21:51:34 -- nvme/functions.sh@21 -- # IFS=: 00:29:13.907 21:51:34 -- nvme/functions.sh@21 -- # read -r reg val 00:29:13.907 21:51:34 -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:29:13.907 21:51:34 -- nvme/functions.sh@23 -- # eval 'nvme0[mn]="QEMU NVMe Ctrl "' 00:29:13.907 21:51:34 -- nvme/functions.sh@23 -- # nvme0[mn]='QEMU NVMe Ctrl ' 00:29:13.907 21:51:34 -- nvme/functions.sh@21 -- # IFS=: 00:29:13.907 21:51:34 -- nvme/functions.sh@21 -- # read -r reg val 00:29:13.907 21:51:34 -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:29:13.907 21:51:34 -- nvme/functions.sh@23 -- # eval 'nvme0[fr]="8.0.0 "' 00:29:13.907 21:51:34 -- nvme/functions.sh@23 -- # nvme0[fr]='8.0.0 ' 00:29:13.907 21:51:34 -- nvme/functions.sh@21 -- # IFS=: 00:29:13.907 21:51:34 -- nvme/functions.sh@21 -- # read -r reg val 00:29:13.907 21:51:34 -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:29:13.907 21:51:34 -- nvme/functions.sh@23 -- # eval 'nvme0[rab]="6"' 00:29:13.907 21:51:34 -- nvme/functions.sh@23 -- # nvme0[rab]=6 00:29:13.907 21:51:34 -- nvme/functions.sh@21 -- # IFS=: 00:29:13.907 21:51:34 -- nvme/functions.sh@21 -- # read -r reg val 00:29:13.907 21:51:34 -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:29:13.907 21:51:34 -- nvme/functions.sh@23 -- # eval 'nvme0[ieee]="525400"' 00:29:13.907 21:51:34 -- nvme/functions.sh@23 -- # nvme0[ieee]=525400 00:29:13.907 21:51:34 -- nvme/functions.sh@21 -- # IFS=: 00:29:13.907 21:51:34 -- nvme/functions.sh@21 -- # read -r reg val 00:29:13.907 21:51:34 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:13.907 21:51:34 -- nvme/functions.sh@23 -- # eval 'nvme0[cmic]="0"' 00:29:13.907 21:51:34 -- nvme/functions.sh@23 -- # nvme0[cmic]=0 00:29:13.907 21:51:34 -- nvme/functions.sh@21 -- # IFS=: 00:29:13.907 21:51:34 -- nvme/functions.sh@21 -- # read -r reg val 00:29:13.907 21:51:34 -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:29:13.907 21:51:34 -- nvme/functions.sh@23 -- # eval 'nvme0[mdts]="7"' 00:29:13.907 21:51:34 -- nvme/functions.sh@23 -- # nvme0[mdts]=7 00:29:13.907 21:51:34 -- nvme/functions.sh@21 -- # IFS=: 00:29:13.907 21:51:34 -- nvme/functions.sh@21 -- # read -r reg val 00:29:13.907 21:51:34 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:13.907 21:51:34 -- nvme/functions.sh@23 -- # eval 'nvme0[cntlid]="0"' 00:29:13.907 21:51:34 -- nvme/functions.sh@23 -- # nvme0[cntlid]=0 00:29:13.907 21:51:34 -- nvme/functions.sh@21 -- # IFS=: 00:29:13.907 21:51:34 -- nvme/functions.sh@21 -- # read -r reg val 00:29:13.907 21:51:34 -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:29:13.907 21:51:34 -- nvme/functions.sh@23 -- # eval 'nvme0[ver]="0x10400"' 00:29:13.907 21:51:34 -- nvme/functions.sh@23 -- # nvme0[ver]=0x10400 00:29:13.907 21:51:34 -- nvme/functions.sh@21 -- # IFS=: 00:29:13.907 21:51:34 -- nvme/functions.sh@21 -- # read -r reg val 00:29:13.907 21:51:34 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:13.907 21:51:34 -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3r]="0"' 00:29:13.907 21:51:34 -- 
nvme/functions.sh@23 -- # nvme0[rtd3r]=0 00:29:13.907 21:51:34 -- nvme/functions.sh@21 -- # IFS=: 00:29:13.907 21:51:34 -- nvme/functions.sh@21 -- # read -r reg val 00:29:13.907 21:51:34 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:13.907 21:51:34 -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3e]="0"' 00:29:13.907 21:51:34 -- nvme/functions.sh@23 -- # nvme0[rtd3e]=0 00:29:13.907 21:51:34 -- nvme/functions.sh@21 -- # IFS=: 00:29:13.907 21:51:34 -- nvme/functions.sh@21 -- # read -r reg val 00:29:13.907 21:51:34 -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:29:13.907 21:51:34 -- nvme/functions.sh@23 -- # eval 'nvme0[oaes]="0x100"' 00:29:13.907 21:51:34 -- nvme/functions.sh@23 -- # nvme0[oaes]=0x100 00:29:13.907 21:51:34 -- nvme/functions.sh@21 -- # IFS=: 00:29:13.907 21:51:34 -- nvme/functions.sh@21 -- # read -r reg val 00:29:13.907 21:51:34 -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:29:13.907 21:51:34 -- nvme/functions.sh@23 -- # eval 'nvme0[ctratt]="0x8000"' 00:29:13.907 21:51:34 -- nvme/functions.sh@23 -- # nvme0[ctratt]=0x8000 00:29:13.907 21:51:34 -- nvme/functions.sh@21 -- # IFS=: 00:29:13.907 21:51:34 -- nvme/functions.sh@21 -- # read -r reg val 00:29:13.907 21:51:34 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:13.907 21:51:34 -- nvme/functions.sh@23 -- # eval 'nvme0[rrls]="0"' 00:29:13.907 21:51:34 -- nvme/functions.sh@23 -- # nvme0[rrls]=0 00:29:13.907 21:51:34 -- nvme/functions.sh@21 -- # IFS=: 00:29:13.907 21:51:34 -- nvme/functions.sh@21 -- # read -r reg val 00:29:13.907 21:51:34 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:29:13.907 21:51:34 -- nvme/functions.sh@23 -- # eval 'nvme0[cntrltype]="1"' 00:29:13.907 21:51:34 -- nvme/functions.sh@23 -- # nvme0[cntrltype]=1 00:29:13.907 21:51:34 -- nvme/functions.sh@21 -- # IFS=: 00:29:13.907 21:51:34 -- nvme/functions.sh@21 -- # read -r reg val 00:29:13.907 21:51:34 -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:29:13.907 21:51:34 -- nvme/functions.sh@23 -- # eval 'nvme0[fguid]="00000000-0000-0000-0000-000000000000"' 00:29:13.907 21:51:34 -- nvme/functions.sh@23 -- # nvme0[fguid]=00000000-0000-0000-0000-000000000000 00:29:13.907 21:51:34 -- nvme/functions.sh@21 -- # IFS=: 00:29:13.907 21:51:34 -- nvme/functions.sh@21 -- # read -r reg val 00:29:13.907 21:51:34 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:13.907 21:51:34 -- nvme/functions.sh@23 -- # eval 'nvme0[crdt1]="0"' 00:29:13.907 21:51:34 -- nvme/functions.sh@23 -- # nvme0[crdt1]=0 00:29:13.907 21:51:34 -- nvme/functions.sh@21 -- # IFS=: 00:29:13.907 21:51:34 -- nvme/functions.sh@21 -- # read -r reg val 00:29:13.907 21:51:34 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:13.907 21:51:34 -- nvme/functions.sh@23 -- # eval 'nvme0[crdt2]="0"' 00:29:13.907 21:51:34 -- nvme/functions.sh@23 -- # nvme0[crdt2]=0 00:29:13.907 21:51:34 -- nvme/functions.sh@21 -- # IFS=: 00:29:13.907 21:51:34 -- nvme/functions.sh@21 -- # read -r reg val 00:29:13.907 21:51:34 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:13.907 21:51:34 -- nvme/functions.sh@23 -- # eval 'nvme0[crdt3]="0"' 00:29:13.907 21:51:34 -- nvme/functions.sh@23 -- # nvme0[crdt3]=0 00:29:13.907 21:51:34 -- nvme/functions.sh@21 -- # IFS=: 00:29:13.907 21:51:34 -- nvme/functions.sh@21 -- # read -r reg val 00:29:13.907 21:51:34 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:13.907 21:51:34 -- nvme/functions.sh@23 -- # eval 'nvme0[nvmsr]="0"' 00:29:13.907 21:51:34 -- nvme/functions.sh@23 -- # nvme0[nvmsr]=0 00:29:13.907 21:51:34 -- nvme/functions.sh@21 -- # IFS=: 00:29:13.907 21:51:34 -- nvme/functions.sh@21 -- 
# read -r reg val 00:29:13.907 21:51:34 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:13.907 21:51:34 -- nvme/functions.sh@23 -- # eval 'nvme0[vwci]="0"' 00:29:13.907 21:51:34 -- nvme/functions.sh@23 -- # nvme0[vwci]=0 00:29:13.907 21:51:34 -- nvme/functions.sh@21 -- # IFS=: 00:29:13.907 21:51:34 -- nvme/functions.sh@21 -- # read -r reg val 00:29:13.907 21:51:34 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:13.907 21:51:34 -- nvme/functions.sh@23 -- # eval 'nvme0[mec]="0"' 00:29:13.907 21:51:34 -- nvme/functions.sh@23 -- # nvme0[mec]=0 00:29:13.907 21:51:34 -- nvme/functions.sh@21 -- # IFS=: 00:29:13.907 21:51:34 -- nvme/functions.sh@21 -- # read -r reg val 00:29:13.907 21:51:34 -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:29:13.907 21:51:34 -- nvme/functions.sh@23 -- # eval 'nvme0[oacs]="0x12a"' 00:29:13.907 21:51:34 -- nvme/functions.sh@23 -- # nvme0[oacs]=0x12a 00:29:13.907 21:51:34 -- nvme/functions.sh@21 -- # IFS=: 00:29:13.907 21:51:34 -- nvme/functions.sh@21 -- # read -r reg val 00:29:13.907 21:51:34 -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:29:13.907 21:51:34 -- nvme/functions.sh@23 -- # eval 'nvme0[acl]="3"' 00:29:13.907 21:51:34 -- nvme/functions.sh@23 -- # nvme0[acl]=3 00:29:13.907 21:51:34 -- nvme/functions.sh@21 -- # IFS=: 00:29:13.907 21:51:34 -- nvme/functions.sh@21 -- # read -r reg val 00:29:13.907 21:51:34 -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:29:13.907 21:51:34 -- nvme/functions.sh@23 -- # eval 'nvme0[aerl]="3"' 00:29:13.907 21:51:34 -- nvme/functions.sh@23 -- # nvme0[aerl]=3 00:29:13.907 21:51:34 -- nvme/functions.sh@21 -- # IFS=: 00:29:13.907 21:51:34 -- nvme/functions.sh@21 -- # read -r reg val 00:29:13.907 21:51:34 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:29:13.907 21:51:34 -- nvme/functions.sh@23 -- # eval 'nvme0[frmw]="0x3"' 00:29:13.907 21:51:34 -- nvme/functions.sh@23 -- # nvme0[frmw]=0x3 00:29:13.907 21:51:34 -- nvme/functions.sh@21 -- # IFS=: 00:29:13.907 21:51:34 -- nvme/functions.sh@21 -- # read -r reg val 00:29:13.907 21:51:34 -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:29:13.907 21:51:34 -- nvme/functions.sh@23 -- # eval 'nvme0[lpa]="0x7"' 00:29:13.907 21:51:34 -- nvme/functions.sh@23 -- # nvme0[lpa]=0x7 00:29:13.907 21:51:34 -- nvme/functions.sh@21 -- # IFS=: 00:29:13.907 21:51:34 -- nvme/functions.sh@21 -- # read -r reg val 00:29:13.907 21:51:34 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:13.907 21:51:34 -- nvme/functions.sh@23 -- # eval 'nvme0[elpe]="0"' 00:29:13.907 21:51:34 -- nvme/functions.sh@23 -- # nvme0[elpe]=0 00:29:13.907 21:51:34 -- nvme/functions.sh@21 -- # IFS=: 00:29:13.907 21:51:34 -- nvme/functions.sh@21 -- # read -r reg val 00:29:13.907 21:51:34 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:13.907 21:51:34 -- nvme/functions.sh@23 -- # eval 'nvme0[npss]="0"' 00:29:13.907 21:51:34 -- nvme/functions.sh@23 -- # nvme0[npss]=0 00:29:13.907 21:51:34 -- nvme/functions.sh@21 -- # IFS=: 00:29:13.907 21:51:34 -- nvme/functions.sh@21 -- # read -r reg val 00:29:13.907 21:51:34 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:13.907 21:51:34 -- nvme/functions.sh@23 -- # eval 'nvme0[avscc]="0"' 00:29:13.907 21:51:34 -- nvme/functions.sh@23 -- # nvme0[avscc]=0 00:29:13.907 21:51:34 -- nvme/functions.sh@21 -- # IFS=: 00:29:13.907 21:51:34 -- nvme/functions.sh@21 -- # read -r reg val 00:29:13.907 21:51:34 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:13.907 21:51:34 -- nvme/functions.sh@23 -- # eval 'nvme0[apsta]="0"' 00:29:13.907 21:51:34 -- nvme/functions.sh@23 -- # nvme0[apsta]=0 00:29:13.907 21:51:34 -- nvme/functions.sh@21 -- # IFS=: 
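[editor note] Each eval in this stretch caches one id-ctrl field into the nvme0 associative array (mdts=7, oacs=0x12a, lpa=0x7, ...), so later checks are plain array lookups. MDTS, for instance, is a power-of-two multiplier on the controller's minimum page size; assuming the common 4 KiB CAP.MPSMIN (an assumption, the CAP register is not part of this dump), mdts=7 works out to a 512 KiB maximum transfer:

    declare -A nvme0=( [mdts]=7 )   # as cached by the trace above
    mpsmin_bytes=4096               # assumed CAP.MPSMIN of 0 => 4 KiB pages
    echo "max transfer: $(( (1 << nvme0[mdts]) * mpsmin_bytes )) bytes"  # 524288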
00:29:13.907 21:51:34 -- nvme/functions.sh@21 -- # read -r reg val 00:29:13.907 21:51:34 -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:29:13.907 21:51:34 -- nvme/functions.sh@23 -- # eval 'nvme0[wctemp]="343"' 00:29:13.907 21:51:34 -- nvme/functions.sh@23 -- # nvme0[wctemp]=343 00:29:13.907 21:51:34 -- nvme/functions.sh@21 -- # IFS=: 00:29:13.907 21:51:34 -- nvme/functions.sh@21 -- # read -r reg val 00:29:13.907 21:51:34 -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:29:13.907 21:51:34 -- nvme/functions.sh@23 -- # eval 'nvme0[cctemp]="373"' 00:29:13.907 21:51:34 -- nvme/functions.sh@23 -- # nvme0[cctemp]=373 00:29:13.907 21:51:34 -- nvme/functions.sh@21 -- # IFS=: 00:29:13.907 21:51:34 -- nvme/functions.sh@21 -- # read -r reg val 00:29:13.907 21:51:34 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:13.907 21:51:34 -- nvme/functions.sh@23 -- # eval 'nvme0[mtfa]="0"' 00:29:13.907 21:51:34 -- nvme/functions.sh@23 -- # nvme0[mtfa]=0 00:29:13.907 21:51:34 -- nvme/functions.sh@21 -- # IFS=: 00:29:13.907 21:51:34 -- nvme/functions.sh@21 -- # read -r reg val 00:29:13.907 21:51:34 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:13.907 21:51:34 -- nvme/functions.sh@23 -- # eval 'nvme0[hmpre]="0"' 00:29:13.907 21:51:34 -- nvme/functions.sh@23 -- # nvme0[hmpre]=0 00:29:13.907 21:51:34 -- nvme/functions.sh@21 -- # IFS=: 00:29:13.907 21:51:34 -- nvme/functions.sh@21 -- # read -r reg val 00:29:13.907 21:51:34 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:13.907 21:51:34 -- nvme/functions.sh@23 -- # eval 'nvme0[hmmin]="0"' 00:29:13.908 21:51:34 -- nvme/functions.sh@23 -- # nvme0[hmmin]=0 00:29:13.908 21:51:34 -- nvme/functions.sh@21 -- # IFS=: 00:29:13.908 21:51:34 -- nvme/functions.sh@21 -- # read -r reg val 00:29:13.908 21:51:34 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:13.908 21:51:34 -- nvme/functions.sh@23 -- # eval 'nvme0[tnvmcap]="0"' 00:29:13.908 21:51:34 -- nvme/functions.sh@23 -- # nvme0[tnvmcap]=0 00:29:13.908 21:51:34 -- nvme/functions.sh@21 -- # IFS=: 00:29:13.908 21:51:34 -- nvme/functions.sh@21 -- # read -r reg val 00:29:13.908 21:51:34 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:13.908 21:51:34 -- nvme/functions.sh@23 -- # eval 'nvme0[unvmcap]="0"' 00:29:13.908 21:51:34 -- nvme/functions.sh@23 -- # nvme0[unvmcap]=0 00:29:13.908 21:51:34 -- nvme/functions.sh@21 -- # IFS=: 00:29:13.908 21:51:34 -- nvme/functions.sh@21 -- # read -r reg val 00:29:13.908 21:51:34 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:13.908 21:51:34 -- nvme/functions.sh@23 -- # eval 'nvme0[rpmbs]="0"' 00:29:13.908 21:51:34 -- nvme/functions.sh@23 -- # nvme0[rpmbs]=0 00:29:13.908 21:51:34 -- nvme/functions.sh@21 -- # IFS=: 00:29:13.908 21:51:34 -- nvme/functions.sh@21 -- # read -r reg val 00:29:13.908 21:51:34 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:13.908 21:51:34 -- nvme/functions.sh@23 -- # eval 'nvme0[edstt]="0"' 00:29:13.908 21:51:34 -- nvme/functions.sh@23 -- # nvme0[edstt]=0 00:29:13.908 21:51:34 -- nvme/functions.sh@21 -- # IFS=: 00:29:13.908 21:51:34 -- nvme/functions.sh@21 -- # read -r reg val 00:29:13.908 21:51:34 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:13.908 21:51:34 -- nvme/functions.sh@23 -- # eval 'nvme0[dsto]="0"' 00:29:13.908 21:51:34 -- nvme/functions.sh@23 -- # nvme0[dsto]=0 00:29:13.908 21:51:34 -- nvme/functions.sh@21 -- # IFS=: 00:29:13.908 21:51:34 -- nvme/functions.sh@21 -- # read -r reg val 00:29:13.908 21:51:34 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:13.908 21:51:34 -- nvme/functions.sh@23 -- # eval 'nvme0[fwug]="0"' 00:29:13.908 21:51:34 -- nvme/functions.sh@23 -- # 
nvme0[fwug]=0 00:29:13.908 21:51:34 -- nvme/functions.sh@21 -- # IFS=: 00:29:13.908 21:51:34 -- nvme/functions.sh@21 -- # read -r reg val 00:29:13.908 21:51:34 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:13.908 21:51:34 -- nvme/functions.sh@23 -- # eval 'nvme0[kas]="0"' 00:29:13.908 21:51:34 -- nvme/functions.sh@23 -- # nvme0[kas]=0 00:29:13.908 21:51:34 -- nvme/functions.sh@21 -- # IFS=: 00:29:13.908 21:51:34 -- nvme/functions.sh@21 -- # read -r reg val 00:29:13.908 21:51:34 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:13.908 21:51:34 -- nvme/functions.sh@23 -- # eval 'nvme0[hctma]="0"' 00:29:13.908 21:51:34 -- nvme/functions.sh@23 -- # nvme0[hctma]=0 00:29:13.908 21:51:34 -- nvme/functions.sh@21 -- # IFS=: 00:29:13.908 21:51:34 -- nvme/functions.sh@21 -- # read -r reg val 00:29:13.908 21:51:34 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:13.908 21:51:34 -- nvme/functions.sh@23 -- # eval 'nvme0[mntmt]="0"' 00:29:13.908 21:51:34 -- nvme/functions.sh@23 -- # nvme0[mntmt]=0 00:29:13.908 21:51:34 -- nvme/functions.sh@21 -- # IFS=: 00:29:13.908 21:51:34 -- nvme/functions.sh@21 -- # read -r reg val 00:29:13.908 21:51:34 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:13.908 21:51:34 -- nvme/functions.sh@23 -- # eval 'nvme0[mxtmt]="0"' 00:29:13.908 21:51:34 -- nvme/functions.sh@23 -- # nvme0[mxtmt]=0 00:29:13.908 21:51:34 -- nvme/functions.sh@21 -- # IFS=: 00:29:13.908 21:51:34 -- nvme/functions.sh@21 -- # read -r reg val 00:29:13.908 21:51:34 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:13.908 21:51:34 -- nvme/functions.sh@23 -- # eval 'nvme0[sanicap]="0"' 00:29:13.908 21:51:34 -- nvme/functions.sh@23 -- # nvme0[sanicap]=0 00:29:13.908 21:51:34 -- nvme/functions.sh@21 -- # IFS=: 00:29:13.908 21:51:34 -- nvme/functions.sh@21 -- # read -r reg val 00:29:13.908 21:51:34 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:13.908 21:51:34 -- nvme/functions.sh@23 -- # eval 'nvme0[hmminds]="0"' 00:29:13.908 21:51:34 -- nvme/functions.sh@23 -- # nvme0[hmminds]=0 00:29:13.908 21:51:34 -- nvme/functions.sh@21 -- # IFS=: 00:29:13.908 21:51:34 -- nvme/functions.sh@21 -- # read -r reg val 00:29:13.908 21:51:34 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:13.908 21:51:34 -- nvme/functions.sh@23 -- # eval 'nvme0[hmmaxd]="0"' 00:29:13.908 21:51:34 -- nvme/functions.sh@23 -- # nvme0[hmmaxd]=0 00:29:13.908 21:51:34 -- nvme/functions.sh@21 -- # IFS=: 00:29:13.908 21:51:34 -- nvme/functions.sh@21 -- # read -r reg val 00:29:13.908 21:51:34 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:13.908 21:51:34 -- nvme/functions.sh@23 -- # eval 'nvme0[nsetidmax]="0"' 00:29:13.908 21:51:34 -- nvme/functions.sh@23 -- # nvme0[nsetidmax]=0 00:29:13.908 21:51:34 -- nvme/functions.sh@21 -- # IFS=: 00:29:13.908 21:51:34 -- nvme/functions.sh@21 -- # read -r reg val 00:29:13.908 21:51:34 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:13.908 21:51:34 -- nvme/functions.sh@23 -- # eval 'nvme0[endgidmax]="0"' 00:29:13.908 21:51:34 -- nvme/functions.sh@23 -- # nvme0[endgidmax]=0 00:29:13.908 21:51:34 -- nvme/functions.sh@21 -- # IFS=: 00:29:13.908 21:51:34 -- nvme/functions.sh@21 -- # read -r reg val 00:29:13.908 21:51:34 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:13.908 21:51:34 -- nvme/functions.sh@23 -- # eval 'nvme0[anatt]="0"' 00:29:13.908 21:51:34 -- nvme/functions.sh@23 -- # nvme0[anatt]=0 00:29:13.908 21:51:34 -- nvme/functions.sh@21 -- # IFS=: 00:29:13.908 21:51:34 -- nvme/functions.sh@21 -- # read -r reg val 00:29:13.908 21:51:34 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:13.908 21:51:34 -- nvme/functions.sh@23 -- # eval 
'nvme0[anacap]="0"' 00:29:13.908 21:51:34 -- nvme/functions.sh@23 -- # nvme0[anacap]=0 00:29:13.908 21:51:34 -- nvme/functions.sh@21 -- # IFS=: 00:29:13.908 21:51:34 -- nvme/functions.sh@21 -- # read -r reg val 00:29:13.908 21:51:34 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:13.908 21:51:34 -- nvme/functions.sh@23 -- # eval 'nvme0[anagrpmax]="0"' 00:29:13.908 21:51:34 -- nvme/functions.sh@23 -- # nvme0[anagrpmax]=0 00:29:13.908 21:51:34 -- nvme/functions.sh@21 -- # IFS=: 00:29:13.908 21:51:34 -- nvme/functions.sh@21 -- # read -r reg val 00:29:13.908 21:51:34 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:13.908 21:51:34 -- nvme/functions.sh@23 -- # eval 'nvme0[nanagrpid]="0"' 00:29:13.908 21:51:34 -- nvme/functions.sh@23 -- # nvme0[nanagrpid]=0 00:29:13.908 21:51:34 -- nvme/functions.sh@21 -- # IFS=: 00:29:13.908 21:51:34 -- nvme/functions.sh@21 -- # read -r reg val 00:29:13.908 21:51:34 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:13.908 21:51:34 -- nvme/functions.sh@23 -- # eval 'nvme0[pels]="0"' 00:29:13.908 21:51:34 -- nvme/functions.sh@23 -- # nvme0[pels]=0 00:29:13.908 21:51:34 -- nvme/functions.sh@21 -- # IFS=: 00:29:13.908 21:51:34 -- nvme/functions.sh@21 -- # read -r reg val 00:29:13.908 21:51:34 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:13.908 21:51:34 -- nvme/functions.sh@23 -- # eval 'nvme0[domainid]="0"' 00:29:13.908 21:51:34 -- nvme/functions.sh@23 -- # nvme0[domainid]=0 00:29:13.908 21:51:34 -- nvme/functions.sh@21 -- # IFS=: 00:29:13.908 21:51:34 -- nvme/functions.sh@21 -- # read -r reg val 00:29:13.908 21:51:34 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:13.908 21:51:34 -- nvme/functions.sh@23 -- # eval 'nvme0[megcap]="0"' 00:29:13.908 21:51:34 -- nvme/functions.sh@23 -- # nvme0[megcap]=0 00:29:13.908 21:51:34 -- nvme/functions.sh@21 -- # IFS=: 00:29:13.908 21:51:34 -- nvme/functions.sh@21 -- # read -r reg val 00:29:13.908 21:51:34 -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:29:13.908 21:51:34 -- nvme/functions.sh@23 -- # eval 'nvme0[sqes]="0x66"' 00:29:13.908 21:51:34 -- nvme/functions.sh@23 -- # nvme0[sqes]=0x66 00:29:13.908 21:51:34 -- nvme/functions.sh@21 -- # IFS=: 00:29:13.908 21:51:34 -- nvme/functions.sh@21 -- # read -r reg val 00:29:13.908 21:51:34 -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:29:13.908 21:51:34 -- nvme/functions.sh@23 -- # eval 'nvme0[cqes]="0x44"' 00:29:13.908 21:51:34 -- nvme/functions.sh@23 -- # nvme0[cqes]=0x44 00:29:13.908 21:51:34 -- nvme/functions.sh@21 -- # IFS=: 00:29:13.908 21:51:34 -- nvme/functions.sh@21 -- # read -r reg val 00:29:13.908 21:51:34 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:13.908 21:51:34 -- nvme/functions.sh@23 -- # eval 'nvme0[maxcmd]="0"' 00:29:13.908 21:51:34 -- nvme/functions.sh@23 -- # nvme0[maxcmd]=0 00:29:13.908 21:51:34 -- nvme/functions.sh@21 -- # IFS=: 00:29:13.908 21:51:34 -- nvme/functions.sh@21 -- # read -r reg val 00:29:13.908 21:51:34 -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:29:13.908 21:51:34 -- nvme/functions.sh@23 -- # eval 'nvme0[nn]="256"' 00:29:13.908 21:51:34 -- nvme/functions.sh@23 -- # nvme0[nn]=256 00:29:13.908 21:51:34 -- nvme/functions.sh@21 -- # IFS=: 00:29:13.908 21:51:34 -- nvme/functions.sh@21 -- # read -r reg val 00:29:13.908 21:51:34 -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:29:13.908 21:51:34 -- nvme/functions.sh@23 -- # eval 'nvme0[oncs]="0x15d"' 00:29:13.908 21:51:34 -- nvme/functions.sh@23 -- # nvme0[oncs]=0x15d 00:29:13.908 21:51:34 -- nvme/functions.sh@21 -- # IFS=: 00:29:13.908 21:51:34 -- nvme/functions.sh@21 -- # read -r reg val 00:29:13.908 21:51:34 -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:13.908 21:51:34 -- nvme/functions.sh@23 -- # eval 'nvme0[fuses]="0"' 00:29:13.908 21:51:34 -- nvme/functions.sh@23 -- # nvme0[fuses]=0 00:29:13.908 21:51:34 -- nvme/functions.sh@21 -- # IFS=: 00:29:13.908 21:51:34 -- nvme/functions.sh@21 -- # read -r reg val 00:29:13.908 21:51:34 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:13.908 21:51:34 -- nvme/functions.sh@23 -- # eval 'nvme0[fna]="0"' 00:29:13.908 21:51:34 -- nvme/functions.sh@23 -- # nvme0[fna]=0 00:29:13.908 21:51:34 -- nvme/functions.sh@21 -- # IFS=: 00:29:13.908 21:51:34 -- nvme/functions.sh@21 -- # read -r reg val 00:29:13.908 21:51:34 -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:29:13.908 21:51:34 -- nvme/functions.sh@23 -- # eval 'nvme0[vwc]="0x7"' 00:29:13.908 21:51:34 -- nvme/functions.sh@23 -- # nvme0[vwc]=0x7 00:29:13.908 21:51:34 -- nvme/functions.sh@21 -- # IFS=: 00:29:13.908 21:51:34 -- nvme/functions.sh@21 -- # read -r reg val 00:29:13.908 21:51:34 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:13.908 21:51:34 -- nvme/functions.sh@23 -- # eval 'nvme0[awun]="0"' 00:29:13.908 21:51:34 -- nvme/functions.sh@23 -- # nvme0[awun]=0 00:29:13.908 21:51:34 -- nvme/functions.sh@21 -- # IFS=: 00:29:13.908 21:51:34 -- nvme/functions.sh@21 -- # read -r reg val 00:29:13.908 21:51:34 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:13.908 21:51:34 -- nvme/functions.sh@23 -- # eval 'nvme0[awupf]="0"' 00:29:13.908 21:51:34 -- nvme/functions.sh@23 -- # nvme0[awupf]=0 00:29:13.908 21:51:34 -- nvme/functions.sh@21 -- # IFS=: 00:29:13.908 21:51:34 -- nvme/functions.sh@21 -- # read -r reg val 00:29:13.908 21:51:34 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:13.908 21:51:34 -- nvme/functions.sh@23 -- # eval 'nvme0[icsvscc]="0"' 00:29:13.908 21:51:34 -- nvme/functions.sh@23 -- # nvme0[icsvscc]=0 00:29:13.908 21:51:34 -- nvme/functions.sh@21 -- # IFS=: 00:29:13.908 21:51:34 -- nvme/functions.sh@21 -- # read -r reg val 00:29:13.908 21:51:34 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:13.908 21:51:34 -- nvme/functions.sh@23 -- # eval 'nvme0[nwpc]="0"' 00:29:13.908 21:51:34 -- nvme/functions.sh@23 -- # nvme0[nwpc]=0 00:29:13.908 21:51:34 -- nvme/functions.sh@21 -- # IFS=: 00:29:13.908 21:51:34 -- nvme/functions.sh@21 -- # read -r reg val 00:29:13.908 21:51:34 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:13.908 21:51:34 -- nvme/functions.sh@23 -- # eval 'nvme0[acwu]="0"' 00:29:13.908 21:51:34 -- nvme/functions.sh@23 -- # nvme0[acwu]=0 00:29:13.908 21:51:34 -- nvme/functions.sh@21 -- # IFS=: 00:29:13.908 21:51:34 -- nvme/functions.sh@21 -- # read -r reg val 00:29:13.908 21:51:34 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:29:13.908 21:51:34 -- nvme/functions.sh@23 -- # eval 'nvme0[ocfs]="0x3"' 00:29:13.908 21:51:34 -- nvme/functions.sh@23 -- # nvme0[ocfs]=0x3 00:29:13.908 21:51:34 -- nvme/functions.sh@21 -- # IFS=: 00:29:13.908 21:51:34 -- nvme/functions.sh@21 -- # read -r reg val 00:29:13.908 21:51:34 -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:29:13.908 21:51:34 -- nvme/functions.sh@23 -- # eval 'nvme0[sgls]="0x1"' 00:29:13.908 21:51:34 -- nvme/functions.sh@23 -- # nvme0[sgls]=0x1 00:29:13.908 21:51:34 -- nvme/functions.sh@21 -- # IFS=: 00:29:13.908 21:51:34 -- nvme/functions.sh@21 -- # read -r reg val 00:29:13.908 21:51:34 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:13.908 21:51:34 -- nvme/functions.sh@23 -- # eval 'nvme0[mnan]="0"' 00:29:13.908 21:51:34 -- nvme/functions.sh@23 -- # nvme0[mnan]=0 00:29:13.908 21:51:34 -- nvme/functions.sh@21 -- # IFS=: 00:29:13.908 21:51:34 -- nvme/functions.sh@21 
-- # read -r reg val 00:29:13.908 21:51:34 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:13.908 21:51:34 -- nvme/functions.sh@23 -- # eval 'nvme0[maxdna]="0"' 00:29:13.908 21:51:34 -- nvme/functions.sh@23 -- # nvme0[maxdna]=0 00:29:13.908 21:51:34 -- nvme/functions.sh@21 -- # IFS=: 00:29:13.908 21:51:34 -- nvme/functions.sh@21 -- # read -r reg val 00:29:13.908 21:51:34 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:13.908 21:51:34 -- nvme/functions.sh@23 -- # eval 'nvme0[maxcna]="0"' 00:29:13.908 21:51:34 -- nvme/functions.sh@23 -- # nvme0[maxcna]=0 00:29:13.908 21:51:34 -- nvme/functions.sh@21 -- # IFS=: 00:29:13.908 21:51:34 -- nvme/functions.sh@21 -- # read -r reg val 00:29:13.908 21:51:34 -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12340 ]] 00:29:13.908 21:51:34 -- nvme/functions.sh@23 -- # eval 'nvme0[subnqn]="nqn.2019-08.org.qemu:12340"' 00:29:13.908 21:51:34 -- nvme/functions.sh@23 -- # nvme0[subnqn]=nqn.2019-08.org.qemu:12340 00:29:13.908 21:51:34 -- nvme/functions.sh@21 -- # IFS=: 00:29:13.908 21:51:34 -- nvme/functions.sh@21 -- # read -r reg val 00:29:13.908 21:51:34 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:13.908 21:51:34 -- nvme/functions.sh@23 -- # eval 'nvme0[ioccsz]="0"' 00:29:13.908 21:51:34 -- nvme/functions.sh@23 -- # nvme0[ioccsz]=0 00:29:13.908 21:51:34 -- nvme/functions.sh@21 -- # IFS=: 00:29:13.908 21:51:34 -- nvme/functions.sh@21 -- # read -r reg val 00:29:13.908 21:51:34 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:13.908 21:51:34 -- nvme/functions.sh@23 -- # eval 'nvme0[iorcsz]="0"' 00:29:13.908 21:51:34 -- nvme/functions.sh@23 -- # nvme0[iorcsz]=0 00:29:13.908 21:51:34 -- nvme/functions.sh@21 -- # IFS=: 00:29:13.908 21:51:34 -- nvme/functions.sh@21 -- # read -r reg val 00:29:13.908 21:51:34 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:13.908 21:51:34 -- nvme/functions.sh@23 -- # eval 'nvme0[icdoff]="0"' 00:29:13.908 21:51:34 -- nvme/functions.sh@23 -- # nvme0[icdoff]=0 00:29:13.908 21:51:34 -- nvme/functions.sh@21 -- # IFS=: 00:29:13.908 21:51:34 -- nvme/functions.sh@21 -- # read -r reg val 00:29:13.908 21:51:34 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:13.908 21:51:34 -- nvme/functions.sh@23 -- # eval 'nvme0[fcatt]="0"' 00:29:13.908 21:51:34 -- nvme/functions.sh@23 -- # nvme0[fcatt]=0 00:29:13.908 21:51:34 -- nvme/functions.sh@21 -- # IFS=: 00:29:13.908 21:51:34 -- nvme/functions.sh@21 -- # read -r reg val 00:29:13.908 21:51:34 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:13.908 21:51:34 -- nvme/functions.sh@23 -- # eval 'nvme0[msdbd]="0"' 00:29:13.908 21:51:34 -- nvme/functions.sh@23 -- # nvme0[msdbd]=0 00:29:13.908 21:51:34 -- nvme/functions.sh@21 -- # IFS=: 00:29:13.908 21:51:34 -- nvme/functions.sh@21 -- # read -r reg val 00:29:13.908 21:51:34 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:29:13.908 21:51:34 -- nvme/functions.sh@23 -- # eval 'nvme0[ofcs]="0"' 00:29:13.908 21:51:34 -- nvme/functions.sh@23 -- # nvme0[ofcs]=0 00:29:13.908 21:51:34 -- nvme/functions.sh@21 -- # IFS=: 00:29:13.908 21:51:34 -- nvme/functions.sh@21 -- # read -r reg val 00:29:13.908 21:51:34 -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:29:13.908 21:51:34 -- nvme/functions.sh@23 -- # eval 'nvme0[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:29:13.908 21:51:34 -- nvme/functions.sh@23 -- # nvme0[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:29:13.908 21:51:34 -- nvme/functions.sh@21 -- # IFS=: 00:29:13.908 21:51:34 -- nvme/functions.sh@21 -- # read -r reg val 00:29:13.908 21:51:34 -- 
nvme/functions.sh@23 -- # nvme0[rwt]='0 rwl:0 idle_power:- active_power:-' 00:29:13.908 21:51:34 -- nvme/functions.sh@23 -- # nvme0[active_power_workload]=- 00:29:13.908 21:51:34 -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme0_ns 00:29:13.908 21:51:34 -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/nvme0n1 ]] 00:29:13.908 21:51:34 -- nvme/functions.sh@56 -- # ns_dev=nvme0n1 00:29:13.908 21:51:34 -- nvme/functions.sh@57 -- # nvme_get nvme0n1 id-ns /dev/nvme0n1 00:29:13.908 21:51:34 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme0n1 00:29:13.909 21:51:34 -- nvme/functions.sh@23 -- # id-ns registers read into the nvme0n1 associative array:
00:29:13.909   nsze=0x140000 ncap=0x140000 nuse=0x140000 nsfeat=0x14 nlbaf=7 flbas=0x4 mc=0x3 dpc=0x1f dps=0
00:29:13.909   nmic=0 rescap=0 fpi=0 dlfeat=1 nawun=0 nawupf=0 nacwu=0 nabsn=0 nabo=0 nabspf=0 noiob=0
00:29:13.909   nvmcap=0 npwg=0 npwa=0 npdg=0 npda=0 nows=0 mssrl=128 mcl=128 msrc=127 nulbaf=0
00:29:13.909   anagrpid=0 nsattr=0 nvmsetid=0 endgid=0 nguid=00000000000000000000000000000000 eui64=0000000000000000
00:29:13.909   lbaf0='ms:0 lbads:9 rp:0' lbaf1='ms:8 lbads:9 rp:0' lbaf2='ms:16 lbads:9 rp:0' lbaf3='ms:64 lbads:9 rp:0'
00:29:13.909   lbaf4='ms:0 lbads:12 rp:0 (in use)' lbaf5='ms:8 lbads:12 rp:0' lbaf6='ms:16 lbads:12 rp:0' lbaf7='ms:64 lbads:12 rp:0'
00:29:13.909 21:51:34 -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme0n1 00:29:13.909 21:51:34 -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme0 00:29:13.909 21:51:34 -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme0_ns 00:29:13.909 21:51:34 -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:06.0 00:29:13.909 21:51:34 -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme0 00:29:13.909 21:51:34 -- nvme/functions.sh@65 -- # (( 1 > 0 )) 00:29:13.909 21:51:34 -- nvme/nvme_scc.sh@17 -- # get_ctrl_with_feature scc 00:29:13.909 21:51:34 -- nvme/functions.sh@202 -- # local _ctrls feature=scc 00:29:13.909 21:51:34 -- nvme/functions.sh@204 -- #
_ctrls=($(get_ctrls_with_feature "$feature")) 00:29:13.909 21:51:34 -- nvme/functions.sh@204 -- # get_ctrls_with_feature scc 00:29:13.909 21:51:34 -- nvme/functions.sh@190 -- # (( 1 == 0 )) 00:29:13.909 21:51:34 -- nvme/functions.sh@192 -- # local ctrl feature=scc 00:29:13.909 21:51:34 -- nvme/functions.sh@194 -- # type -t ctrl_has_scc 00:29:13.909 21:51:34 -- nvme/functions.sh@194 -- # [[ function == function ]] 00:29:13.909 21:51:34 -- nvme/functions.sh@196 -- # for ctrl in "${!ctrls[@]}" 00:29:13.909 21:51:34 -- nvme/functions.sh@197 -- # ctrl_has_scc nvme0 00:29:13.909 21:51:34 -- nvme/functions.sh@182 -- # local ctrl=nvme0 oncs 00:29:13.909 21:51:34 -- nvme/functions.sh@184 -- # get_oncs nvme0 00:29:13.909 21:51:34 -- nvme/functions.sh@169 -- # local ctrl=nvme0 00:29:13.909 21:51:34 -- nvme/functions.sh@170 -- # get_nvme_ctrl_feature nvme0 oncs 00:29:13.909 21:51:34 -- nvme/functions.sh@69 -- # local ctrl=nvme0 reg=oncs 00:29:13.909 21:51:34 -- nvme/functions.sh@71 -- # [[ -n nvme0 ]] 00:29:13.909 21:51:34 -- nvme/functions.sh@73 -- # local -n _ctrl=nvme0 00:29:13.909 21:51:34 -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:29:13.909 21:51:34 -- nvme/functions.sh@76 -- # echo 0x15d 00:29:13.909 21:51:34 -- nvme/functions.sh@184 -- # oncs=0x15d 00:29:13.909 21:51:34 -- nvme/functions.sh@186 -- # (( oncs & 1 << 8 )) 00:29:13.909 21:51:34 -- nvme/functions.sh@197 -- # echo nvme0 00:29:13.909 21:51:34 -- nvme/functions.sh@205 -- # (( 1 > 0 )) 00:29:13.909 21:51:34 -- nvme/functions.sh@206 -- # echo nvme0 00:29:13.909 21:51:34 -- nvme/functions.sh@207 -- # return 0 00:29:13.909 21:51:34 -- nvme/nvme_scc.sh@17 -- # ctrl=nvme0 00:29:13.909 21:51:34 -- nvme/nvme_scc.sh@17 -- # bdf=0000:00:06.0 00:29:13.909 21:51:34 -- nvme/nvme_scc.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:29:14.475 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15,mount@vda:vda16, so not binding PCI dev 00:29:14.475 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:29:15.045 21:51:35 -- nvme/nvme_scc.sh@21 -- # run_test nvme_simple_copy /home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:06.0' 00:29:15.045 21:51:35 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:29:15.045 21:51:35 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:29:15.045 21:51:35 -- common/autotest_common.sh@10 -- # set +x 00:29:15.045 ************************************ 00:29:15.045 START TEST nvme_simple_copy 00:29:15.045 ************************************ 00:29:15.045 21:51:35 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:06.0' 00:29:15.302 Initializing NVMe Controllers 00:29:15.302 Attaching to 0000:00:06.0 00:29:15.302 Controller supports SCC. Attached to 0000:00:06.0 00:29:15.302 Namespace ID: 1 size: 5GB 00:29:15.302 Initialization complete. 
00:29:15.302 Controller QEMU NVMe Ctrl (12340 ) 00:29:15.302 Controller PCI vendor:6966 PCI subsystem vendor:6900 00:29:15.302 Namespace Block Size:4096
00:29:15.303 Writing LBAs 0 to 63 with Random Data 00:29:15.303 Copied LBAs from 0 - 63 to the Destination LBA 256 00:29:15.303 LBAs matching Written Data: 64
00:29:15.303 real 0m0.300s user 0m0.128s sys 0m0.073s
00:29:15.303 ************************************ END TEST nvme_simple_copy ************************************
00:29:15.303 real 0m2.199s user 0m0.701s sys 0m1.428s
00:29:15.303 ************************************ END TEST nvme_scc ************************************
00:29:15.303 21:51:35 -- spdk/autotest.sh@216,219,222,225,229 -- # optional test groups disabled in this configuration; skipping
00:29:15.303 21:51:35 -- spdk/autotest.sh@233 -- # run_test nvme_rpc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh
00:29:15.303 ************************************ START TEST nvme_rpc ************************************
00:29:15.561 * Looking for test storage... 00:29:15.561 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme
00:29:15.561 21:51:35 -- common/autotest_common.sh -- # lcov version check: lcov 1.15 < 2, exporting LCOV_OPTS and LCOV with branch and function coverage enabled
00:29:15.562 21:51:35 -- nvme/nvme_rpc.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:29:15.562 21:51:35 -- nvme/nvme_rpc.sh@13 -- # get_first_nvme_bdf: /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh | jq -r '.config[].params.traddr'
00:29:15.562 21:51:36 -- nvme/nvme_rpc.sh@13 -- # bdf=0000:00:06.0
00:29:15.562 21:51:36 -- nvme/nvme_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3
00:29:15.562 21:51:36 -- nvme/nvme_rpc.sh@16 -- # spdk_tgt_pid=94192 00:29:15.562 21:51:36 -- nvme/nvme_rpc.sh@19 -- # waitforlisten 94192
00:29:15.562 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:29:15.820 [2024-12-06 21:51:36.087482] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:29:15.820 [2024-12-06 21:51:36.087870] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid94192 ]
00:29:15.820 [2024-12-06 21:51:36.263919] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2
00:29:16.078 [2024-12-06 21:51:36.501558] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long
00:29:16.078 [2024-12-06 21:51:36.502243] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:29:16.078 [2024-12-06 21:51:36.502267] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:29:17.453 21:51:37 -- nvme/nvme_rpc.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:06.0
00:29:17.711 Nvme0n1
00:29:17.711 21:51:38 -- nvme/nvme_rpc.sh@27 -- # '[' -f non_existing_file ']'
00:29:17.711 21:51:38 -- nvme/nvme_rpc.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_apply_firmware non_existing_file Nvme0n1
00:29:17.971 request:
00:29:17.971 {
00:29:17.971   "filename": "non_existing_file",
00:29:17.971   "bdev_name": "Nvme0n1",
00:29:17.971   "method": "bdev_nvme_apply_firmware",
00:29:17.971   "req_id": 1
00:29:17.971 }
00:29:17.971 Got JSON-RPC error response
00:29:17.971 response:
00:29:17.971 {
00:29:17.971   "code": -32603,
00:29:17.971   "message": "open file failed."
00:29:17.971 }
00:29:17.971 21:51:38 -- nvme/nvme_rpc.sh@32 -- # rv=1 00:29:17.971 21:51:38 -- nvme/nvme_rpc.sh@33 -- # '[' -z 1 ']'
00:29:18.229 21:51:38 -- nvme/nvme_rpc.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0
00:29:18.229 21:51:38 -- nvme/nvme_rpc.sh@40 -- # killprocess 94192
00:29:18.230 killing process with pid 94192
00:29:20.135 ************************************ END TEST nvme_rpc ************************************
00:29:20.135 real 0m4.654s user 0m8.898s sys 0m0.692s
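The nvme_rpc test above exercises bdev management purely over JSON-RPC: attach a PCIe controller, point bdev_nvme_apply_firmware at a missing file, and confirm that the -32603 error comes back instead of a crash. A minimal standalone sketch of that flow in bash follows; the rpc.py subcommands and the Nvme0/0000:00:06.0 names are taken straight from the log, while the grep on the error text is an illustrative assumption rather than the test's own check.

    #!/usr/bin/env bash
    # Sketch: attach an NVMe controller over JSON-RPC and verify that a bad
    # firmware path fails cleanly. Assumption: a spdk_tgt is already running
    # and listening on /var/tmp/spdk.sock.
    set -euo pipefail

    rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    # Attach the PCIe controller at the BDF the test discovered; SPDK then
    # exposes its namespace as bdev Nvme0n1.
    "$rpc_py" bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:06.0

    # A nonexistent firmware image must yield a JSON-RPC error, not a crash.
    if out=$("$rpc_py" bdev_nvme_apply_firmware non_existing_file Nvme0n1 2>&1); then
        echo "expected failure, got success" >&2
        exit 1
    fi
    # Assumed check on the error text; a nonzero grep status fails the script.
    echo "$out" | grep -q 'open file failed'

    # Tear down so the target is reusable by the next test.
    "$rpc_py" bdev_nvme_detach_controller Nvme0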
00:29:20.135 21:51:40 -- spdk/autotest.sh@234 -- # run_test nvme_rpc_timeouts /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh
00:29:20.135 ************************************ START TEST nvme_rpc_timeouts ************************************
00:29:20.135 * Looking for test storage... 00:29:20.135 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme
00:29:20.135 21:51:40 -- common/autotest_common.sh -- # lcov version check: lcov 1.15 < 2, exporting LCOV_OPTS and LCOV with branch and function coverage enabled
00:29:20.395 21:51:40 -- nvme/nvme_rpc_timeouts.sh@19 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:29:20.395 21:51:40 -- nvme/nvme_rpc_timeouts.sh@21 -- # tmpfile_default_settings=/tmp/settings_default_94270
00:29:20.395 21:51:40 -- nvme/nvme_rpc_timeouts.sh@22 -- # tmpfile_modified_settings=/tmp/settings_modified_94270
00:29:20.395 21:51:40 -- nvme/nvme_rpc_timeouts.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3
00:29:20.395 21:51:40 -- nvme/nvme_rpc_timeouts.sh@25 -- # spdk_tgt_pid=94301 00:29:20.395 21:51:40 -- nvme/nvme_rpc_timeouts.sh@27 -- # waitforlisten 94301
00:29:20.395 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:29:20.395 [2024-12-06 21:51:40.702468] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:29:20.395 [2024-12-06 21:51:40.702835] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid94301 ] 00:29:20.395 [2024-12-06 21:51:40.860532] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:29:20.655 [2024-12-06 21:51:41.025915] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:29:20.655 [2024-12-06 21:51:41.026520] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:20.655 [2024-12-06 21:51:41.026540] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:29:21.224 21:51:41 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:29:21.224 21:51:41 -- common/autotest_common.sh@862 -- # return 0 00:29:21.224 21:51:41 -- nvme/nvme_rpc_timeouts.sh@29 -- # echo Checking default timeout settings: 00:29:21.224 Checking default timeout settings: 00:29:21.224 21:51:41 -- nvme/nvme_rpc_timeouts.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:29:21.793 21:51:42 -- nvme/nvme_rpc_timeouts.sh@32 -- # echo Making settings changes with rpc: 00:29:21.793 Making settings changes with rpc: 00:29:21.793 21:51:42 -- nvme/nvme_rpc_timeouts.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_set_options --timeout-us=12000000 --timeout-admin-us=24000000 --action-on-timeout=abort 00:29:21.793 Check default vs. modified settings: 00:29:21.793 21:51:42 -- nvme/nvme_rpc_timeouts.sh@36 -- # echo Check default vs. modified settings: 00:29:21.793 21:51:42 -- nvme/nvme_rpc_timeouts.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:29:22.361 21:51:42 -- nvme/nvme_rpc_timeouts.sh@38 -- # settings_to_check='action_on_timeout timeout_us timeout_admin_us' 00:29:22.361 21:51:42 -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:29:22.361 21:51:42 -- nvme/nvme_rpc_timeouts.sh@40 -- # grep action_on_timeout /tmp/settings_default_94270 00:29:22.361 21:51:42 -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:29:22.361 21:51:42 -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:29:22.361 21:51:42 -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=none 00:29:22.361 21:51:42 -- nvme/nvme_rpc_timeouts.sh@41 -- # grep action_on_timeout /tmp/settings_modified_94270 00:29:22.361 21:51:42 -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:29:22.361 21:51:42 -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:29:22.361 Setting action_on_timeout is changed as expected. 00:29:22.361 21:51:42 -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=abort 00:29:22.361 21:51:42 -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' none == abort ']' 00:29:22.361 21:51:42 -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting action_on_timeout is changed as expected. 
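The action_on_timeout comparison just traced is one pass of a generic idiom: dump the target config with rpc.py save_config before and after bdev_nvme_set_options, then pull each key with grep/awk and strip punctuation with sed so the two values compare cleanly. A condensed sketch of that idiom, assuming the two settings files from the log already exist; check_setting is an invented helper name, but the pipeline inside it mirrors the trace. The two remaining keys below follow the same pattern.

    # Sketch of the compare idiom, assuming /tmp/settings_default_94270 and
    # /tmp/settings_modified_94270 were already written via `rpc.py save_config`.
    check_setting() {
        local setting=$1 expected=$2 before after
        # Pull the value for this key from each saved config; strip everything
        # but alphanumerics so values like "12000000," and "12000000" compare equal.
        before=$(grep "$setting" /tmp/settings_default_94270 | awk '{print $2}' | sed 's/[^a-zA-Z0-9]//g')
        after=$(grep "$setting" /tmp/settings_modified_94270 | awk '{print $2}' | sed 's/[^a-zA-Z0-9]//g')
        if [ "$before" != "$after" ] && [ "$after" = "$expected" ]; then
            echo "Setting $setting is changed as expected."
        else
            return 1
        fi
    }

    check_setting action_on_timeout abort
    check_setting timeout_us 12000000
    check_setting timeout_admin_us 24000000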
00:29:22.361 21:51:42 -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:29:22.361 21:51:42 -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_us /tmp/settings_default_94270 00:29:22.361 21:51:42 -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:29:22.361 21:51:42 -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:29:22.361 21:51:42 -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:29:22.361 21:51:42 -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_us /tmp/settings_modified_94270 00:29:22.361 21:51:42 -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:29:22.361 21:51:42 -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:29:22.361 Setting timeout_us is changed as expected. 00:29:22.361 21:51:42 -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=12000000 00:29:22.361 21:51:42 -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 12000000 ']' 00:29:22.361 21:51:42 -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_us is changed as expected. 00:29:22.361 21:51:42 -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:29:22.361 21:51:42 -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:29:22.361 21:51:42 -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_admin_us /tmp/settings_default_94270 00:29:22.361 21:51:42 -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:29:22.361 21:51:42 -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:29:22.361 21:51:42 -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_admin_us /tmp/settings_modified_94270 00:29:22.361 21:51:42 -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:29:22.361 21:51:42 -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:29:22.361 Setting timeout_admin_us is changed as expected. 00:29:22.361 21:51:42 -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=24000000 00:29:22.361 21:51:42 -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 24000000 ']' 00:29:22.361 21:51:42 -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_admin_us is changed as expected. 00:29:22.361 21:51:42 -- nvme/nvme_rpc_timeouts.sh@52 -- # trap - SIGINT SIGTERM EXIT 00:29:22.361 21:51:42 -- nvme/nvme_rpc_timeouts.sh@53 -- # rm -f /tmp/settings_default_94270 /tmp/settings_modified_94270 00:29:22.361 21:51:42 -- nvme/nvme_rpc_timeouts.sh@54 -- # killprocess 94301 00:29:22.361 21:51:42 -- common/autotest_common.sh@936 -- # '[' -z 94301 ']' 00:29:22.361 21:51:42 -- common/autotest_common.sh@940 -- # kill -0 94301 00:29:22.361 21:51:42 -- common/autotest_common.sh@941 -- # uname 00:29:22.361 21:51:42 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:29:22.361 21:51:42 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 94301 00:29:22.361 killing process with pid 94301 00:29:22.361 21:51:42 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:29:22.361 21:51:42 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:29:22.361 21:51:42 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 94301' 00:29:22.361 21:51:42 -- common/autotest_common.sh@955 -- # kill 94301 00:29:22.361 21:51:42 -- common/autotest_common.sh@960 -- # wait 94301 00:29:24.268 RPC TIMEOUT SETTING TEST PASSED. 00:29:24.268 21:51:44 -- nvme/nvme_rpc_timeouts.sh@56 -- # echo RPC TIMEOUT SETTING TEST PASSED. 
00:29:24.268 ************************************ END TEST nvme_rpc_timeouts ************************************
00:29:24.268 real 0m4.075s user 0m7.806s sys 0m0.621s
00:29:24.268 21:51:44 -- spdk/autotest.sh@255 -- # timing_exit lib
00:29:24.268 21:51:44 -- spdk/autotest.sh@238,242,251,257,265,274,298,302,306,311,320,325,329,333,337,342,346,353,357,361 -- # optional test suites disabled in this configuration; skipping
00:29:24.268 21:51:44 -- spdk/autotest.sh@365 -- # [[ 1 -eq 1 ]]
00:29:24.268 21:51:44 -- spdk/autotest.sh@366 -- # run_test blockdev_raid5f /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh raid5f
00:29:24.268 ************************************ START TEST blockdev_raid5f ************************************
00:29:24.268 21:51:44 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh raid5f
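run_test is the harness helper that brackets every suite in this log with START/END banners and real/user/sys timing. A simplified illustration of that behavior follows; the real helper lives in autotest_common.sh and also manages xtrace and argument checks, so this is a sketch of the visible pattern, not the actual implementation.

    # Illustrative reduction of the run_test pattern seen throughout this log.
    # Assumption: only the banner/timing behavior is reproduced here.
    run_test() {
        local name=$1; shift
        echo "************************************"
        echo "START TEST $name"
        echo "************************************"
        time "$@"    # e.g. run_test blockdev_raid5f .../test/bdev/blockdev.sh raid5f
        local rc=$?
        echo "************************************"
        echo "END TEST $name"
        echo "************************************"
        return $rc
    }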
00:29:24.268 * Looking for test storage... 00:29:24.268 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev
00:29:24.268 21:51:44 -- common/autotest_common.sh -- # lcov version check: lcov 1.15 < 2, exporting LCOV_OPTS and LCOV with branch and function coverage enabled
00:29:24.528 21:51:44 -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh
00:29:24.528 21:51:44 -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd
00:29:24.528 21:51:44 -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json
00:29:24.528 21:51:44 -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json
00:29:24.528 21:51:44 -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json
00:29:24.528 21:51:44 -- bdev/blockdev.sh@668-670 -- # QOS_DEV_1=Malloc_0 QOS_DEV_2=Null_1 QOS_RUN_TIME=5
00:29:24.528 21:51:44 -- bdev/blockdev.sh@672 -- # uname -s: Linux, PRE_RESERVED_MEM=0
00:29:24.528 21:51:44 -- bdev/blockdev.sh@680-684 -- # test_type=raid5f crypto_device= dek= env_ctx= wait_for_rpc=
00:29:24.528 21:51:44 -- bdev/blockdev.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' ''
00:29:24.528 21:51:44 -- bdev/blockdev.sh@45 -- # spdk_tgt_pid=94445 00:29:24.528 21:51:44 -- bdev/blockdev.sh@47 -- # waitforlisten 94445
00:29:24.528 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:29:24.529 [2024-12-06 21:51:44.912512] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:29:24.529 [2024-12-06 21:51:44.913199] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid94445 ] 00:29:24.787 [2024-12-06 21:51:45.079113] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:24.787 [2024-12-06 21:51:45.252617] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:29:24.787 [2024-12-06 21:51:45.252864] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:26.171 21:51:46 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:29:26.171 21:51:46 -- common/autotest_common.sh@862 -- # return 0 00:29:26.171 21:51:46 -- bdev/blockdev.sh@692 -- # case "$test_type" in 00:29:26.171 21:51:46 -- bdev/blockdev.sh@724 -- # setup_raid5f_conf 00:29:26.171 21:51:46 -- bdev/blockdev.sh@278 -- # rpc_cmd 00:29:26.171 21:51:46 -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:26.171 21:51:46 -- common/autotest_common.sh@10 -- # set +x 00:29:26.171 Malloc0 00:29:26.171 Malloc1 00:29:26.171 Malloc2 00:29:26.171 21:51:46 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:26.171 21:51:46 -- bdev/blockdev.sh@735 -- # rpc_cmd bdev_wait_for_examine 00:29:26.171 21:51:46 -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:26.171 21:51:46 -- common/autotest_common.sh@10 -- # set +x 00:29:26.171 21:51:46 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:26.171 21:51:46 -- bdev/blockdev.sh@738 -- # cat 00:29:26.171 21:51:46 -- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n accel 00:29:26.171 21:51:46 -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:26.171 21:51:46 -- common/autotest_common.sh@10 -- # set +x 00:29:26.171 21:51:46 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:26.171 21:51:46 -- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n bdev 00:29:26.171 21:51:46 -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:26.171 21:51:46 -- common/autotest_common.sh@10 -- # set +x 00:29:26.171 21:51:46 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:26.171 21:51:46 -- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n iobuf 00:29:26.171 21:51:46 -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:26.171 21:51:46 -- common/autotest_common.sh@10 -- # set +x 00:29:26.449 21:51:46 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:26.449 21:51:46 -- bdev/blockdev.sh@746 -- # mapfile -t bdevs 00:29:26.449 21:51:46 -- bdev/blockdev.sh@746 -- # jq -r '.[] | select(.claimed == false)' 00:29:26.449 21:51:46 -- bdev/blockdev.sh@746 -- # rpc_cmd bdev_get_bdevs 00:29:26.449 21:51:46 -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:26.449 21:51:46 -- common/autotest_common.sh@10 -- # set +x 00:29:26.449 21:51:46 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:26.449 21:51:46 -- bdev/blockdev.sh@747 -- # mapfile -t bdevs_name 00:29:26.450 21:51:46 -- bdev/blockdev.sh@747 -- # printf '%s\n' '{' ' "name": "raid5f",' ' "aliases": [' ' "17a451a1-8e98-4f99-96df-5ba5ae88156f"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "17a451a1-8e98-4f99-96df-5ba5ae88156f",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' 
"write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "raid": {' ' "uuid": "17a451a1-8e98-4f99-96df-5ba5ae88156f",' ' "strip_size_kb": 2,' ' "state": "online",' ' "raid_level": "raid5f",' ' "superblock": false,' ' "num_base_bdevs": 3,' ' "num_base_bdevs_discovered": 3,' ' "num_base_bdevs_operational": 3,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc0",' ' "uuid": "02dc321a-b1c8-42f1-8371-371772087f09",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc1",' ' "uuid": "b1be1725-510f-4b3d-a0f9-8724da60e772",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc2",' ' "uuid": "505b48b1-3b28-42f2-b6b7-92ed77d33af9",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' 00:29:26.450 21:51:46 -- bdev/blockdev.sh@747 -- # jq -r .name 00:29:26.450 21:51:46 -- bdev/blockdev.sh@748 -- # bdev_list=("${bdevs_name[@]}") 00:29:26.450 21:51:46 -- bdev/blockdev.sh@750 -- # hello_world_bdev=raid5f 00:29:26.450 21:51:46 -- bdev/blockdev.sh@751 -- # trap - SIGINT SIGTERM EXIT 00:29:26.450 21:51:46 -- bdev/blockdev.sh@752 -- # killprocess 94445 00:29:26.450 21:51:46 -- common/autotest_common.sh@936 -- # '[' -z 94445 ']' 00:29:26.450 21:51:46 -- common/autotest_common.sh@940 -- # kill -0 94445 00:29:26.450 21:51:46 -- common/autotest_common.sh@941 -- # uname 00:29:26.450 21:51:46 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:29:26.450 21:51:46 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 94445 00:29:26.450 killing process with pid 94445 00:29:26.450 21:51:46 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:29:26.450 21:51:46 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:29:26.450 21:51:46 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 94445' 00:29:26.450 21:51:46 -- common/autotest_common.sh@955 -- # kill 94445 00:29:26.450 21:51:46 -- common/autotest_common.sh@960 -- # wait 94445 00:29:28.359 21:51:48 -- bdev/blockdev.sh@756 -- # trap cleanup SIGINT SIGTERM EXIT 00:29:28.359 21:51:48 -- bdev/blockdev.sh@758 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b raid5f '' 00:29:28.359 21:51:48 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:29:28.359 21:51:48 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:29:28.359 21:51:48 -- common/autotest_common.sh@10 -- # set +x 00:29:28.359 ************************************ 00:29:28.359 START TEST bdev_hello_world 00:29:28.359 ************************************ 00:29:28.359 21:51:48 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b raid5f '' 00:29:28.617 [2024-12-06 21:51:48.893507] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:29:28.617 [2024-12-06 21:51:48.893670] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid94509 ] 00:29:28.617 [2024-12-06 21:51:49.063871] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:28.876 [2024-12-06 21:51:49.228368] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:29.444 [2024-12-06 21:51:49.647900] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:29:29.444 [2024-12-06 21:51:49.647960] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev raid5f 00:29:29.444 [2024-12-06 21:51:49.648005] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:29:29.444 [2024-12-06 21:51:49.648572] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:29:29.444 [2024-12-06 21:51:49.648763] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:29:29.444 [2024-12-06 21:51:49.648814] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:29:29.444 [2024-12-06 21:51:49.648886] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 00:29:29.444 00:29:29.444 [2024-12-06 21:51:49.648913] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:29:30.381 ************************************ 00:29:30.381 END TEST bdev_hello_world 00:29:30.381 ************************************ 00:29:30.381 00:29:30.381 real 0m2.021s 00:29:30.381 user 0m1.677s 00:29:30.381 sys 0m0.230s 00:29:30.381 21:51:50 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:29:30.381 21:51:50 -- common/autotest_common.sh@10 -- # set +x 00:29:30.640 21:51:50 -- bdev/blockdev.sh@759 -- # run_test bdev_bounds bdev_bounds '' 00:29:30.640 21:51:50 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:29:30.640 21:51:50 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:29:30.640 21:51:50 -- common/autotest_common.sh@10 -- # set +x 00:29:30.640 ************************************ 00:29:30.640 START TEST bdev_bounds 00:29:30.640 ************************************ 00:29:30.640 21:51:50 -- common/autotest_common.sh@1114 -- # bdev_bounds '' 00:29:30.640 21:51:50 -- bdev/blockdev.sh@288 -- # bdevio_pid=94551 00:29:30.640 21:51:50 -- bdev/blockdev.sh@287 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:29:30.640 21:51:50 -- bdev/blockdev.sh@289 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:29:30.640 Process bdevio pid: 94551 00:29:30.640 21:51:50 -- bdev/blockdev.sh@290 -- # echo 'Process bdevio pid: 94551' 00:29:30.640 21:51:50 -- bdev/blockdev.sh@291 -- # waitforlisten 94551 00:29:30.640 21:51:50 -- common/autotest_common.sh@829 -- # '[' -z 94551 ']' 00:29:30.640 21:51:50 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:30.640 21:51:50 -- common/autotest_common.sh@834 -- # local max_retries=100 00:29:30.640 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:30.640 21:51:50 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
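The hello_bdev NOTICE lines above walk the example's whole lifecycle: open the raid5f bdev, open an I/O channel, write the "Hello World!" string, read it back, and stop the app. The run can be reproduced outside the harness; a sketch, assuming the generated bdev.json is still in place:

# Run the hello-world example against the raid5f bdev defined in the JSON config
/home/vagrant/spdk_repo/spdk/build/examples/hello_bdev \
    --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b raid5f
# On success the final NOTICEs read "Read string from bdev : Hello World!"
# followed by "Stopping app", exactly as in the log above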
00:29:30.640 21:51:50 -- common/autotest_common.sh@838 -- # xtrace_disable
00:29:30.640 21:51:50 -- common/autotest_common.sh@10 -- # set +x
00:29:30.640 [2024-12-06 21:51:50.955662] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:29:30.640 [2024-12-06 21:51:50.955805] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid94551 ]
00:29:30.640 [2024-12-06 21:51:51.109909] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3
00:29:30.899 [2024-12-06 21:51:51.283880] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:29:30.899 [2024-12-06 21:51:51.283987] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:29:30.899 [2024-12-06 21:51:51.284004] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2
00:29:31.467 21:51:51 -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:29:31.468 21:51:51 -- common/autotest_common.sh@862 -- # return 0
00:29:31.468 21:51:51 -- bdev/blockdev.sh@292 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests
00:29:31.727 I/O targets:
00:29:31.727 raid5f: 131072 blocks of 512 bytes (64 MiB)
00:29:31.727
00:29:31.727
00:29:31.727 CUnit - A unit testing framework for C - Version 2.1-3
00:29:31.727 http://cunit.sourceforge.net/
00:29:31.727
00:29:31.727
00:29:31.727 Suite: bdevio tests on: raid5f
00:29:31.727 Test: blockdev write read block ...passed
00:29:31.727 Test: blockdev write zeroes read block ...passed
00:29:31.727 Test: blockdev write zeroes read no split ...passed
00:29:31.727 Test: blockdev write zeroes read split ...passed
00:29:31.986 Test: blockdev write zeroes read split partial ...passed
00:29:31.986 Test: blockdev reset ...passed
00:29:31.986 Test: blockdev write read 8 blocks ...passed
00:29:31.986 Test: blockdev write read size > 128k ...passed
00:29:31.986 Test: blockdev write read invalid size ...passed
00:29:31.986 Test: blockdev write read offset + nbytes == size of blockdev ...passed
00:29:31.986 Test: blockdev write read offset + nbytes > size of blockdev ...passed
00:29:31.986 Test: blockdev write read max offset ...passed
00:29:31.986 Test: blockdev write read 2 blocks on overlapped address offset ...passed
00:29:31.986 Test: blockdev writev readv 8 blocks ...passed
00:29:31.986 Test: blockdev writev readv 30 x 1block ...passed
00:29:31.986 Test: blockdev writev readv block ...passed
00:29:31.986 Test: blockdev writev readv size > 128k ...passed
00:29:31.986 Test: blockdev writev readv size > 128k in two iovs ...passed
00:29:31.986 Test: blockdev comparev and writev ...passed
00:29:31.986 Test: blockdev nvme passthru rw ...passed
00:29:31.986 Test: blockdev nvme passthru vendor specific ...passed
00:29:31.986 Test: blockdev nvme admin passthru ...passed
00:29:31.986 Test: blockdev copy ...passed
00:29:31.986
00:29:31.986 Run Summary: Type Total Ran Passed Failed Inactive
00:29:31.986 suites 1 1 n/a 0 0
00:29:31.986 tests 23 23 23 0 0
00:29:31.986 asserts 130 130 130 0 n/a
00:29:31.986
00:29:31.986 Elapsed time = 0.486 seconds
00:29:31.987 0
00:29:31.987 21:51:52 -- bdev/blockdev.sh@293 -- # killprocess 94551
00:29:31.987 21:51:52 -- common/autotest_common.sh@936 -- # '[' -z 94551 ']'
00:29:31.987 21:51:52 -- common/autotest_common.sh@940 -- # kill -0 94551
00:29:31.987 21:51:52 -- common/autotest_common.sh@941 -- # uname
00:29:31.987 21:51:52 --
common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:29:31.987 21:51:52 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 94551 00:29:31.987 21:51:52 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:29:31.987 killing process with pid 94551 00:29:31.987 21:51:52 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:29:31.987 21:51:52 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 94551' 00:29:31.987 21:51:52 -- common/autotest_common.sh@955 -- # kill 94551 00:29:31.987 21:51:52 -- common/autotest_common.sh@960 -- # wait 94551 00:29:33.365 ************************************ 00:29:33.365 END TEST bdev_bounds 00:29:33.365 ************************************ 00:29:33.365 21:51:53 -- bdev/blockdev.sh@294 -- # trap - SIGINT SIGTERM EXIT 00:29:33.365 00:29:33.365 real 0m2.660s 00:29:33.365 user 0m6.468s 00:29:33.365 sys 0m0.362s 00:29:33.365 21:51:53 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:29:33.365 21:51:53 -- common/autotest_common.sh@10 -- # set +x 00:29:33.365 21:51:53 -- bdev/blockdev.sh@760 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json raid5f '' 00:29:33.365 21:51:53 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:29:33.365 21:51:53 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:29:33.365 21:51:53 -- common/autotest_common.sh@10 -- # set +x 00:29:33.365 ************************************ 00:29:33.365 START TEST bdev_nbd 00:29:33.365 ************************************ 00:29:33.365 21:51:53 -- common/autotest_common.sh@1114 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json raid5f '' 00:29:33.365 21:51:53 -- bdev/blockdev.sh@298 -- # uname -s 00:29:33.365 21:51:53 -- bdev/blockdev.sh@298 -- # [[ Linux == Linux ]] 00:29:33.365 21:51:53 -- bdev/blockdev.sh@300 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:29:33.365 21:51:53 -- bdev/blockdev.sh@301 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:29:33.365 21:51:53 -- bdev/blockdev.sh@302 -- # bdev_all=('raid5f') 00:29:33.365 21:51:53 -- bdev/blockdev.sh@302 -- # local bdev_all 00:29:33.365 21:51:53 -- bdev/blockdev.sh@303 -- # local bdev_num=1 00:29:33.365 21:51:53 -- bdev/blockdev.sh@307 -- # [[ -e /sys/module/nbd ]] 00:29:33.365 21:51:53 -- bdev/blockdev.sh@309 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:29:33.365 21:51:53 -- bdev/blockdev.sh@309 -- # local nbd_all 00:29:33.365 21:51:53 -- bdev/blockdev.sh@310 -- # bdev_num=1 00:29:33.365 21:51:53 -- bdev/blockdev.sh@312 -- # nbd_list=('/dev/nbd0') 00:29:33.365 21:51:53 -- bdev/blockdev.sh@312 -- # local nbd_list 00:29:33.365 21:51:53 -- bdev/blockdev.sh@313 -- # bdev_list=('raid5f') 00:29:33.365 21:51:53 -- bdev/blockdev.sh@313 -- # local bdev_list 00:29:33.365 21:51:53 -- bdev/blockdev.sh@316 -- # nbd_pid=94611 00:29:33.365 21:51:53 -- bdev/blockdev.sh@315 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:29:33.365 21:51:53 -- bdev/blockdev.sh@317 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:29:33.365 21:51:53 -- bdev/blockdev.sh@318 -- # waitforlisten 94611 /var/tmp/spdk-nbd.sock 00:29:33.365 21:51:53 -- common/autotest_common.sh@829 -- # '[' -z 94611 ']' 00:29:33.365 21:51:53 -- 
common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:29:33.365 21:51:53 -- common/autotest_common.sh@834 -- # local max_retries=100 00:29:33.365 21:51:53 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:29:33.365 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:29:33.365 21:51:53 -- common/autotest_common.sh@838 -- # xtrace_disable 00:29:33.365 21:51:53 -- common/autotest_common.sh@10 -- # set +x 00:29:33.365 [2024-12-06 21:51:53.677618] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:29:33.365 [2024-12-06 21:51:53.677943] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:33.365 [2024-12-06 21:51:53.836099] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:33.625 [2024-12-06 21:51:54.011227] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:34.193 21:51:54 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:29:34.193 21:51:54 -- common/autotest_common.sh@862 -- # return 0 00:29:34.193 21:51:54 -- bdev/blockdev.sh@320 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock raid5f 00:29:34.193 21:51:54 -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:29:34.193 21:51:54 -- bdev/nbd_common.sh@114 -- # bdev_list=('raid5f') 00:29:34.194 21:51:54 -- bdev/nbd_common.sh@114 -- # local bdev_list 00:29:34.194 21:51:54 -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock raid5f 00:29:34.194 21:51:54 -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:29:34.194 21:51:54 -- bdev/nbd_common.sh@23 -- # bdev_list=('raid5f') 00:29:34.194 21:51:54 -- bdev/nbd_common.sh@23 -- # local bdev_list 00:29:34.194 21:51:54 -- bdev/nbd_common.sh@24 -- # local i 00:29:34.194 21:51:54 -- bdev/nbd_common.sh@25 -- # local nbd_device 00:29:34.194 21:51:54 -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:29:34.194 21:51:54 -- bdev/nbd_common.sh@27 -- # (( i < 1 )) 00:29:34.194 21:51:54 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid5f 00:29:34.453 21:51:54 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:29:34.453 21:51:54 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:29:34.453 21:51:54 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:29:34.453 21:51:54 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:29:34.453 21:51:54 -- common/autotest_common.sh@867 -- # local i 00:29:34.453 21:51:54 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:29:34.453 21:51:54 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:29:34.453 21:51:54 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:29:34.453 21:51:54 -- common/autotest_common.sh@871 -- # break 00:29:34.453 21:51:54 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:29:34.453 21:51:54 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:29:34.453 21:51:54 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:29:34.453 1+0 records in 00:29:34.453 1+0 records out 00:29:34.453 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000289662 s, 14.1 MB/s 00:29:34.453 21:51:54 -- common/autotest_common.sh@884 -- # 
stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:29:34.453 21:51:54 -- common/autotest_common.sh@884 -- # size=4096 00:29:34.453 21:51:54 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:29:34.453 21:51:54 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:29:34.453 21:51:54 -- common/autotest_common.sh@887 -- # return 0 00:29:34.453 21:51:54 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:29:34.453 21:51:54 -- bdev/nbd_common.sh@27 -- # (( i < 1 )) 00:29:34.453 21:51:54 -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:29:34.712 21:51:55 -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:29:34.712 { 00:29:34.712 "nbd_device": "/dev/nbd0", 00:29:34.712 "bdev_name": "raid5f" 00:29:34.712 } 00:29:34.712 ]' 00:29:34.712 21:51:55 -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:29:34.712 21:51:55 -- bdev/nbd_common.sh@119 -- # echo '[ 00:29:34.712 { 00:29:34.712 "nbd_device": "/dev/nbd0", 00:29:34.712 "bdev_name": "raid5f" 00:29:34.712 } 00:29:34.712 ]' 00:29:34.712 21:51:55 -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:29:34.712 21:51:55 -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:29:34.712 21:51:55 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:29:34.712 21:51:55 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:29:34.712 21:51:55 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:29:34.712 21:51:55 -- bdev/nbd_common.sh@51 -- # local i 00:29:34.712 21:51:55 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:29:34.712 21:51:55 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:29:34.970 21:51:55 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:29:34.970 21:51:55 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:29:34.970 21:51:55 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:29:34.970 21:51:55 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:29:34.970 21:51:55 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:29:34.970 21:51:55 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:29:34.970 21:51:55 -- bdev/nbd_common.sh@41 -- # break 00:29:34.970 21:51:55 -- bdev/nbd_common.sh@45 -- # return 0 00:29:34.970 21:51:55 -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:29:34.970 21:51:55 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:29:34.970 21:51:55 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:29:35.229 21:51:55 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:29:35.229 21:51:55 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:29:35.229 21:51:55 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:29:35.229 21:51:55 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:29:35.229 21:51:55 -- bdev/nbd_common.sh@65 -- # echo '' 00:29:35.229 21:51:55 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:29:35.229 21:51:55 -- bdev/nbd_common.sh@65 -- # true 00:29:35.229 21:51:55 -- bdev/nbd_common.sh@65 -- # count=0 00:29:35.229 21:51:55 -- bdev/nbd_common.sh@66 -- # echo 0 00:29:35.229 21:51:55 -- bdev/nbd_common.sh@122 -- # count=0 00:29:35.229 21:51:55 -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:29:35.229 21:51:55 -- bdev/nbd_common.sh@127 -- # return 0 00:29:35.229 21:51:55 -- bdev/blockdev.sh@321 -- # 
nbd_rpc_data_verify /var/tmp/spdk-nbd.sock raid5f /dev/nbd0 00:29:35.229 21:51:55 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:29:35.229 21:51:55 -- bdev/nbd_common.sh@91 -- # bdev_list=('raid5f') 00:29:35.229 21:51:55 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:29:35.229 21:51:55 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0') 00:29:35.229 21:51:55 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:29:35.229 21:51:55 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock raid5f /dev/nbd0 00:29:35.229 21:51:55 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:29:35.229 21:51:55 -- bdev/nbd_common.sh@10 -- # bdev_list=('raid5f') 00:29:35.229 21:51:55 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:29:35.229 21:51:55 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:29:35.229 21:51:55 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:29:35.229 21:51:55 -- bdev/nbd_common.sh@12 -- # local i 00:29:35.229 21:51:55 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:29:35.229 21:51:55 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:29:35.229 21:51:55 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid5f /dev/nbd0 00:29:35.229 /dev/nbd0 00:29:35.229 21:51:55 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:29:35.229 21:51:55 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:29:35.229 21:51:55 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:29:35.229 21:51:55 -- common/autotest_common.sh@867 -- # local i 00:29:35.229 21:51:55 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:29:35.229 21:51:55 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:29:35.229 21:51:55 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:29:35.229 21:51:55 -- common/autotest_common.sh@871 -- # break 00:29:35.229 21:51:55 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:29:35.229 21:51:55 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:29:35.229 21:51:55 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:29:35.229 1+0 records in 00:29:35.229 1+0 records out 00:29:35.229 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000280178 s, 14.6 MB/s 00:29:35.229 21:51:55 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:29:35.487 21:51:55 -- common/autotest_common.sh@884 -- # size=4096 00:29:35.487 21:51:55 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:29:35.487 21:51:55 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:29:35.487 21:51:55 -- common/autotest_common.sh@887 -- # return 0 00:29:35.487 21:51:55 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:29:35.487 21:51:55 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:29:35.487 21:51:55 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:29:35.487 21:51:55 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:29:35.487 21:51:55 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:29:35.746 21:51:55 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:29:35.746 { 00:29:35.746 "nbd_device": "/dev/nbd0", 00:29:35.746 "bdev_name": "raid5f" 00:29:35.746 } 00:29:35.746 ]' 00:29:35.746 21:51:55 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:29:35.746 21:51:55 -- bdev/nbd_common.sh@64 -- # echo '[ 00:29:35.746 { 
00:29:35.746 "nbd_device": "/dev/nbd0", 00:29:35.746 "bdev_name": "raid5f" 00:29:35.746 } 00:29:35.746 ]' 00:29:35.746 21:51:55 -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:29:35.746 21:51:55 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:29:35.746 21:51:55 -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:29:35.746 21:51:56 -- bdev/nbd_common.sh@65 -- # count=1 00:29:35.746 21:51:56 -- bdev/nbd_common.sh@66 -- # echo 1 00:29:35.746 21:51:56 -- bdev/nbd_common.sh@95 -- # count=1 00:29:35.746 21:51:56 -- bdev/nbd_common.sh@96 -- # '[' 1 -ne 1 ']' 00:29:35.746 21:51:56 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify /dev/nbd0 write 00:29:35.746 21:51:56 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0') 00:29:35.746 21:51:56 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:29:35.746 21:51:56 -- bdev/nbd_common.sh@71 -- # local operation=write 00:29:35.746 21:51:56 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:29:35.746 21:51:56 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:29:35.746 21:51:56 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:29:35.746 256+0 records in 00:29:35.746 256+0 records out 00:29:35.746 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00600988 s, 174 MB/s 00:29:35.746 21:51:56 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:29:35.746 21:51:56 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:29:35.746 256+0 records in 00:29:35.746 256+0 records out 00:29:35.746 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0365039 s, 28.7 MB/s 00:29:35.746 21:51:56 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify /dev/nbd0 verify 00:29:35.746 21:51:56 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0') 00:29:35.746 21:51:56 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:29:35.746 21:51:56 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:29:35.746 21:51:56 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:29:35.747 21:51:56 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:29:35.747 21:51:56 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:29:35.747 21:51:56 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:29:35.747 21:51:56 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:29:35.747 21:51:56 -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:29:35.747 21:51:56 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:29:35.747 21:51:56 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:29:35.747 21:51:56 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:29:35.747 21:51:56 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:29:35.747 21:51:56 -- bdev/nbd_common.sh@51 -- # local i 00:29:35.747 21:51:56 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:29:35.747 21:51:56 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:29:36.005 21:51:56 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:29:36.005 21:51:56 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:29:36.005 21:51:56 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:29:36.005 21:51:56 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:29:36.005 21:51:56 -- bdev/nbd_common.sh@37 -- # (( i <= 
20 )) 00:29:36.005 21:51:56 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:29:36.005 21:51:56 -- bdev/nbd_common.sh@41 -- # break 00:29:36.005 21:51:56 -- bdev/nbd_common.sh@45 -- # return 0 00:29:36.005 21:51:56 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:29:36.005 21:51:56 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:29:36.005 21:51:56 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:29:36.264 21:51:56 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:29:36.264 21:51:56 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:29:36.264 21:51:56 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:29:36.264 21:51:56 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:29:36.264 21:51:56 -- bdev/nbd_common.sh@65 -- # echo '' 00:29:36.264 21:51:56 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:29:36.264 21:51:56 -- bdev/nbd_common.sh@65 -- # true 00:29:36.264 21:51:56 -- bdev/nbd_common.sh@65 -- # count=0 00:29:36.264 21:51:56 -- bdev/nbd_common.sh@66 -- # echo 0 00:29:36.264 21:51:56 -- bdev/nbd_common.sh@104 -- # count=0 00:29:36.264 21:51:56 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:29:36.264 21:51:56 -- bdev/nbd_common.sh@109 -- # return 0 00:29:36.264 21:51:56 -- bdev/blockdev.sh@322 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:29:36.264 21:51:56 -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:29:36.264 21:51:56 -- bdev/nbd_common.sh@132 -- # nbd_list=('/dev/nbd0') 00:29:36.264 21:51:56 -- bdev/nbd_common.sh@132 -- # local nbd_list 00:29:36.264 21:51:56 -- bdev/nbd_common.sh@133 -- # local mkfs_ret 00:29:36.264 21:51:56 -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:29:36.522 malloc_lvol_verify 00:29:36.522 21:51:56 -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:29:36.779 17af6961-53c8-43e7-8cc0-5a7339116574 00:29:36.779 21:51:57 -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:29:36.779 d4679c0c-c50a-4cd0-a137-26b9cf13da88 00:29:37.078 21:51:57 -- bdev/nbd_common.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:29:37.078 /dev/nbd0 00:29:37.078 21:51:57 -- bdev/nbd_common.sh@140 -- # mkfs.ext4 /dev/nbd0 00:29:37.078 mke2fs 1.47.0 (5-Feb-2023) 00:29:37.078 00:29:37.078 Filesystem too small for a journal 00:29:37.364 Discarding device blocks: 0/1024 done 00:29:37.364 Creating filesystem with 1024 4k blocks and 1024 inodes 00:29:37.364 00:29:37.364 Allocating group tables: 0/1 done 00:29:37.364 Writing inode tables: 0/1 done 00:29:37.364 Writing superblocks and filesystem accounting information: 0/1 done 00:29:37.364 00:29:37.364 21:51:57 -- bdev/nbd_common.sh@141 -- # mkfs_ret=0 00:29:37.364 21:51:57 -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:29:37.364 21:51:57 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:29:37.364 21:51:57 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:29:37.364 21:51:57 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:29:37.365 21:51:57 -- bdev/nbd_common.sh@51 -- # local i 00:29:37.365 21:51:57 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 
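The nbd passes above all follow the same pattern: export a bdev to the kernel as /dev/nbd0, push data through the block layer, and check what comes back. For the raid5f data-verify step that means dd plus cmp; a condensed sketch, assuming an SPDK app is listening on the harness's nbd RPC socket (the scratch-file path here is arbitrary):

RPC="scripts/rpc.py -s /var/tmp/spdk-nbd.sock"
$RPC nbd_start_disk raid5f /dev/nbd0       # export the bdev as a kernel block device
dd if=/dev/urandom of=/tmp/nbdrandtest bs=4096 count=256
dd if=/tmp/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct
cmp -b -n 1M /tmp/nbdrandtest /dev/nbd0    # byte-compare the first 1 MiB read back
$RPC nbd_stop_disk /dev/nbd0

The waitfornbd/waitfornbd_exit helpers seen in the trace simply poll /proc/partitions (up to 20 tries) for the nbd entry to appear or disappear before the next step runs. The final lvol pass swaps raid5f for lvs/lvol, a 4 MiB logical volume carved from a 16 MiB malloc bdev, and runs mkfs.ext4 on it instead of dd, which is why mke2fs warns that the filesystem is too small for a journal.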
00:29:37.365 21:51:57 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:29:37.365 21:51:57 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:29:37.365 21:51:57 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:29:37.365 21:51:57 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:29:37.365 21:51:57 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:29:37.365 21:51:57 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:29:37.365 21:51:57 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:29:37.365 21:51:57 -- bdev/nbd_common.sh@41 -- # break 00:29:37.365 21:51:57 -- bdev/nbd_common.sh@45 -- # return 0 00:29:37.365 21:51:57 -- bdev/nbd_common.sh@143 -- # '[' 0 -ne 0 ']' 00:29:37.365 21:51:57 -- bdev/nbd_common.sh@147 -- # return 0 00:29:37.365 21:51:57 -- bdev/blockdev.sh@324 -- # killprocess 94611 00:29:37.365 21:51:57 -- common/autotest_common.sh@936 -- # '[' -z 94611 ']' 00:29:37.365 21:51:57 -- common/autotest_common.sh@940 -- # kill -0 94611 00:29:37.365 21:51:57 -- common/autotest_common.sh@941 -- # uname 00:29:37.365 21:51:57 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:29:37.365 21:51:57 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 94611 00:29:37.365 21:51:57 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:29:37.365 21:51:57 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:29:37.365 21:51:57 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 94611' 00:29:37.365 killing process with pid 94611 00:29:37.365 21:51:57 -- common/autotest_common.sh@955 -- # kill 94611 00:29:37.365 21:51:57 -- common/autotest_common.sh@960 -- # wait 94611 00:29:38.741 21:51:59 -- bdev/blockdev.sh@325 -- # trap - SIGINT SIGTERM EXIT 00:29:38.741 00:29:38.741 real 0m5.403s 00:29:38.741 user 0m7.660s 00:29:38.741 sys 0m1.104s 00:29:38.741 21:51:59 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:29:38.742 ************************************ 00:29:38.742 END TEST bdev_nbd 00:29:38.742 21:51:59 -- common/autotest_common.sh@10 -- # set +x 00:29:38.742 ************************************ 00:29:38.742 21:51:59 -- bdev/blockdev.sh@761 -- # [[ y == y ]] 00:29:38.742 21:51:59 -- bdev/blockdev.sh@762 -- # '[' raid5f = nvme ']' 00:29:38.742 21:51:59 -- bdev/blockdev.sh@762 -- # '[' raid5f = gpt ']' 00:29:38.742 21:51:59 -- bdev/blockdev.sh@766 -- # run_test bdev_fio fio_test_suite '' 00:29:38.742 21:51:59 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:29:38.742 21:51:59 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:29:38.742 21:51:59 -- common/autotest_common.sh@10 -- # set +x 00:29:38.742 ************************************ 00:29:38.742 START TEST bdev_fio 00:29:38.742 ************************************ 00:29:38.742 21:51:59 -- common/autotest_common.sh@1114 -- # fio_test_suite '' 00:29:38.742 21:51:59 -- bdev/blockdev.sh@329 -- # local env_context 00:29:38.742 21:51:59 -- bdev/blockdev.sh@333 -- # pushd /home/vagrant/spdk_repo/spdk/test/bdev 00:29:38.742 /home/vagrant/spdk_repo/spdk/test/bdev /home/vagrant/spdk_repo/spdk 00:29:38.742 21:51:59 -- bdev/blockdev.sh@334 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT 00:29:38.742 21:51:59 -- bdev/blockdev.sh@337 -- # echo '' 00:29:38.742 21:51:59 -- bdev/blockdev.sh@337 -- # sed s/--env-context=// 00:29:38.742 21:51:59 -- bdev/blockdev.sh@337 -- # env_context= 00:29:38.742 21:51:59 -- bdev/blockdev.sh@338 -- # fio_config_gen 
/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio verify AIO '' 00:29:38.742 21:51:59 -- common/autotest_common.sh@1269 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:29:38.742 21:51:59 -- common/autotest_common.sh@1270 -- # local workload=verify 00:29:38.742 21:51:59 -- common/autotest_common.sh@1271 -- # local bdev_type=AIO 00:29:38.742 21:51:59 -- common/autotest_common.sh@1272 -- # local env_context= 00:29:38.742 21:51:59 -- common/autotest_common.sh@1273 -- # local fio_dir=/usr/src/fio 00:29:38.742 21:51:59 -- common/autotest_common.sh@1275 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:29:38.742 21:51:59 -- common/autotest_common.sh@1280 -- # '[' -z verify ']' 00:29:38.742 21:51:59 -- common/autotest_common.sh@1284 -- # '[' -n '' ']' 00:29:38.742 21:51:59 -- common/autotest_common.sh@1288 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:29:38.742 21:51:59 -- common/autotest_common.sh@1290 -- # cat 00:29:38.742 21:51:59 -- common/autotest_common.sh@1302 -- # '[' verify == verify ']' 00:29:38.742 21:51:59 -- common/autotest_common.sh@1303 -- # cat 00:29:38.742 21:51:59 -- common/autotest_common.sh@1312 -- # '[' AIO == AIO ']' 00:29:38.742 21:51:59 -- common/autotest_common.sh@1313 -- # /usr/src/fio/fio --version 00:29:38.742 21:51:59 -- common/autotest_common.sh@1313 -- # [[ fio-3.35 == *\f\i\o\-\3* ]] 00:29:38.742 21:51:59 -- common/autotest_common.sh@1314 -- # echo serialize_overlap=1 00:29:38.742 21:51:59 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:29:38.742 21:51:59 -- bdev/blockdev.sh@340 -- # echo '[job_raid5f]' 00:29:38.742 21:51:59 -- bdev/blockdev.sh@341 -- # echo filename=raid5f 00:29:38.742 21:51:59 -- bdev/blockdev.sh@345 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json' 00:29:38.742 21:51:59 -- bdev/blockdev.sh@347 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:29:38.742 21:51:59 -- common/autotest_common.sh@1087 -- # '[' 11 -le 1 ']' 00:29:38.742 21:51:59 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:29:38.742 21:51:59 -- common/autotest_common.sh@10 -- # set +x 00:29:38.742 ************************************ 00:29:38.742 START TEST bdev_fio_rw_verify 00:29:38.742 ************************************ 00:29:38.742 21:51:59 -- common/autotest_common.sh@1114 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:29:38.742 21:51:59 -- common/autotest_common.sh@1345 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:29:38.742 21:51:59 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:29:38.742 21:51:59 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 
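fio_config_gen and the echoed [job_raid5f] lines above assemble the bdev.fio consumed next. A rough reconstruction: only the job section and serialize_overlap are visible verbatim in the trace, so the [global] verify options written by the generator are omitted here and the placement of serialize_overlap is an assumption:

# Sketch of the generated job file (generator-written verify options not shown)
cat > bdev.fio <<'EOF'
[global]
serialize_overlap=1
[job_raid5f]
filename=raid5f
EOF

Engine selection and I/O shape arrive on the fio command line instead: --ioengine=spdk_bdev drives SPDK bdevs directly, and --spdk_json_conf points fio at the same bdev.json used everywhere else, so filename=raid5f names a bdev rather than a file.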
00:29:38.742 21:51:59 -- common/autotest_common.sh@1328 -- # local sanitizers
00:29:38.742 21:51:59 -- common/autotest_common.sh@1329 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
00:29:38.742 21:51:59 -- common/autotest_common.sh@1330 -- # shift
00:29:38.742 21:51:59 -- common/autotest_common.sh@1332 -- # local asan_lib=
00:29:38.742 21:51:59 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}"
00:29:38.742 21:51:59 -- common/autotest_common.sh@1334 -- # grep libasan
00:29:38.742 21:51:59 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
00:29:38.742 21:51:59 -- common/autotest_common.sh@1334 -- # awk '{print $3}'
00:29:38.742 21:51:59 -- common/autotest_common.sh@1334 -- # asan_lib=/lib/x86_64-linux-gnu/libasan.so.8
00:29:38.742 21:51:59 -- common/autotest_common.sh@1335 -- # [[ -n /lib/x86_64-linux-gnu/libasan.so.8 ]]
00:29:38.742 21:51:59 -- common/autotest_common.sh@1336 -- # break
00:29:38.742 21:51:59 -- common/autotest_common.sh@1341 -- # LD_PRELOAD='/lib/x86_64-linux-gnu/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev'
00:29:38.742 21:51:59 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output
00:29:39.001 job_raid5f: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8
00:29:39.001 fio-3.35
00:29:39.001 Starting 1 thread
00:29:51.214
00:29:51.214 job_raid5f: (groupid=0, jobs=1): err= 0: pid=94825: Fri Dec 6 21:52:10 2024
00:29:51.214 read: IOPS=9164, BW=35.8MiB/s (37.5MB/s)(358MiB/10001msec)
00:29:51.214 slat (usec): min=20, max=109, avg=27.06, stdev= 7.53
00:29:51.214 clat (usec): min=12, max=536, avg=173.11, stdev=70.14
00:29:51.214 lat (usec): min=37, max=596, avg=200.17, stdev=71.74
00:29:51.214 clat percentiles (usec):
00:29:51.214 | 50.000th=[ 169], 99.000th=[ 338], 99.900th=[ 412], 99.990th=[ 494],
00:29:51.214 | 99.999th=[ 537]
00:29:51.214 write: IOPS=9658, BW=37.7MiB/s (39.6MB/s)(373MiB/9877msec); 0 zone resets
00:29:51.214 slat (usec): min=10, max=240, avg=23.10, stdev= 7.75
00:29:51.214 clat (usec): min=67, max=1224, avg=388.13, stdev=68.10
00:29:51.214 lat (usec): min=87, max=1464, avg=411.23, stdev=70.57
00:29:51.214 clat percentiles (usec):
00:29:51.214 | 50.000th=[ 383], 99.000th=[ 570], 99.900th=[ 668], 99.990th=[ 1029],
00:29:51.214 | 99.999th=[ 1221]
00:29:51.214 bw ( KiB/s): min=33720, max=41392, per=98.85%, avg=38189.89, stdev=2467.14, samples=19
00:29:51.214 iops : min= 8430, max=10348, avg=9547.47, stdev=616.78, samples=19
00:29:51.214 lat (usec) : 20=0.01%, 50=0.01%, 100=8.93%, 250=33.32%, 500=54.82%
00:29:51.214 lat (usec) : 750=2.91%, 1000=0.01%
00:29:51.214 lat (msec) : 2=0.01%
00:29:51.214 cpu : usr=99.32%, sys=0.67%, ctx=25, majf=0, minf=7885
00:29:51.214 IO depths : 1=7.6%, 2=19.7%, 4=55.3%, 8=17.4%, 16=0.0%, 32=0.0%, >=64=0.0%
00:29:51.214 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:29:51.214 complete : 0=0.0%, 4=90.0%, 8=10.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:29:51.214 issued rwts: total=91655,95395,0,0 short=0,0,0,0 dropped=0,0,0,0
00:29:51.214 latency : target=0, window=0, percentile=100.00%, depth=8
00:29:51.214
00:29:51.214 Run status group 0 (all jobs):
00:29:51.214 READ: bw=35.8MiB/s (37.5MB/s),
35.8MiB/s-35.8MiB/s (37.5MB/s-37.5MB/s), io=358MiB (375MB), run=10001-10001msec 00:29:51.214 WRITE: bw=37.7MiB/s (39.6MB/s), 37.7MiB/s-37.7MiB/s (39.6MB/s-39.6MB/s), io=373MiB (391MB), run=9877-9877msec 00:29:51.214 ----------------------------------------------------- 00:29:51.214 Suppressions used: 00:29:51.214 count bytes template 00:29:51.214 1 7 /usr/src/fio/parse.c 00:29:51.214 878 84288 /usr/src/fio/iolog.c 00:29:51.214 1 904 libcrypto.so 00:29:51.214 ----------------------------------------------------- 00:29:51.214 00:29:51.214 00:29:51.214 real 0m12.276s 00:29:51.214 user 0m12.964s 00:29:51.214 sys 0m0.706s 00:29:51.214 21:52:11 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:29:51.214 ************************************ 00:29:51.214 END TEST bdev_fio_rw_verify 00:29:51.214 ************************************ 00:29:51.214 21:52:11 -- common/autotest_common.sh@10 -- # set +x 00:29:51.214 21:52:11 -- bdev/blockdev.sh@348 -- # rm -f 00:29:51.214 21:52:11 -- bdev/blockdev.sh@349 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:29:51.214 21:52:11 -- bdev/blockdev.sh@352 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio trim '' '' 00:29:51.214 21:52:11 -- common/autotest_common.sh@1269 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:29:51.214 21:52:11 -- common/autotest_common.sh@1270 -- # local workload=trim 00:29:51.214 21:52:11 -- common/autotest_common.sh@1271 -- # local bdev_type= 00:29:51.214 21:52:11 -- common/autotest_common.sh@1272 -- # local env_context= 00:29:51.214 21:52:11 -- common/autotest_common.sh@1273 -- # local fio_dir=/usr/src/fio 00:29:51.214 21:52:11 -- common/autotest_common.sh@1275 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:29:51.215 21:52:11 -- common/autotest_common.sh@1280 -- # '[' -z trim ']' 00:29:51.215 21:52:11 -- common/autotest_common.sh@1284 -- # '[' -n '' ']' 00:29:51.215 21:52:11 -- common/autotest_common.sh@1288 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:29:51.215 21:52:11 -- common/autotest_common.sh@1290 -- # cat 00:29:51.215 21:52:11 -- common/autotest_common.sh@1302 -- # '[' trim == verify ']' 00:29:51.215 21:52:11 -- common/autotest_common.sh@1317 -- # '[' trim == trim ']' 00:29:51.215 21:52:11 -- common/autotest_common.sh@1318 -- # echo rw=trimwrite 00:29:51.215 21:52:11 -- bdev/blockdev.sh@353 -- # printf '%s\n' '{' ' "name": "raid5f",' ' "aliases": [' ' "17a451a1-8e98-4f99-96df-5ba5ae88156f"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "17a451a1-8e98-4f99-96df-5ba5ae88156f",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "raid": {' ' "uuid": "17a451a1-8e98-4f99-96df-5ba5ae88156f",' ' "strip_size_kb": 2,' ' "state": "online",' ' "raid_level": "raid5f",' ' "superblock": false,' ' "num_base_bdevs": 3,' ' "num_base_bdevs_discovered": 3,' ' "num_base_bdevs_operational": 3,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc0",' ' "uuid": "02dc321a-b1c8-42f1-8371-371772087f09",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": 
"Malloc1",' ' "uuid": "b1be1725-510f-4b3d-a0f9-8724da60e772",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc2",' ' "uuid": "505b48b1-3b28-42f2-b6b7-92ed77d33af9",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' 00:29:51.215 21:52:11 -- bdev/blockdev.sh@353 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:29:51.215 21:52:11 -- bdev/blockdev.sh@353 -- # [[ -n '' ]] 00:29:51.215 21:52:11 -- bdev/blockdev.sh@359 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:29:51.215 /home/vagrant/spdk_repo/spdk 00:29:51.215 21:52:11 -- bdev/blockdev.sh@360 -- # popd 00:29:51.215 21:52:11 -- bdev/blockdev.sh@361 -- # trap - SIGINT SIGTERM EXIT 00:29:51.215 21:52:11 -- bdev/blockdev.sh@362 -- # return 0 00:29:51.215 00:29:51.215 real 0m12.400s 00:29:51.215 user 0m13.016s 00:29:51.215 sys 0m0.779s 00:29:51.215 21:52:11 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:29:51.215 21:52:11 -- common/autotest_common.sh@10 -- # set +x 00:29:51.215 ************************************ 00:29:51.215 END TEST bdev_fio 00:29:51.215 ************************************ 00:29:51.215 21:52:11 -- bdev/blockdev.sh@773 -- # trap cleanup SIGINT SIGTERM EXIT 00:29:51.215 21:52:11 -- bdev/blockdev.sh@775 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:29:51.215 21:52:11 -- common/autotest_common.sh@1087 -- # '[' 16 -le 1 ']' 00:29:51.215 21:52:11 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:29:51.215 21:52:11 -- common/autotest_common.sh@10 -- # set +x 00:29:51.215 ************************************ 00:29:51.215 START TEST bdev_verify 00:29:51.215 ************************************ 00:29:51.215 21:52:11 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:29:51.215 [2024-12-06 21:52:11.583286] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:29:51.215 [2024-12-06 21:52:11.583509] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid94978 ] 00:29:51.474 [2024-12-06 21:52:11.745679] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:29:51.474 [2024-12-06 21:52:11.941163] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:51.474 [2024-12-06 21:52:11.941180] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:29:52.042 Running I/O for 5 seconds... 
00:29:57.316
00:29:57.316 Latency(us)
00:29:57.316 [2024-12-06T21:52:17.813Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:29:57.316 [2024-12-06T21:52:17.813Z] Job: raid5f (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:29:57.316 Verification LBA range: start 0x0 length 0x2000
00:29:57.316 raid5f : 5.01 11061.04 43.21 0.00 0.00 18331.16 353.75 16205.27
00:29:57.316 [2024-12-06T21:52:17.813Z] Job: raid5f (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:29:57.316 Verification LBA range: start 0x2000 length 0x2000
00:29:57.316 raid5f : 5.01 11035.29 43.11 0.00 0.00 18375.13 176.87 16324.42
00:29:57.316 [2024-12-06T21:52:17.813Z] ===================================================================================================================
00:29:57.316 [2024-12-06T21:52:17.813Z] Total : 22096.32 86.31 0.00 0.00 18353.12 176.87 16324.42
00:29:58.694
00:29:58.694 real 0m7.266s
00:29:58.694 user 0m13.305s
00:29:58.694 sys 0m0.283s
00:29:58.694 21:52:18 -- common/autotest_common.sh@1115 -- # xtrace_disable
00:29:58.694 ************************************
00:29:58.694 21:52:18 -- common/autotest_common.sh@10 -- # set +x
00:29:58.694 END TEST bdev_verify
00:29:58.694 ************************************
00:29:58.694 21:52:18 -- bdev/blockdev.sh@776 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 ''
00:29:58.694 21:52:18 -- common/autotest_common.sh@1087 -- # '[' 16 -le 1 ']'
00:29:58.694 21:52:18 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:29:58.694 21:52:18 -- common/autotest_common.sh@10 -- # set +x
00:29:58.694 ************************************
00:29:58.694 START TEST bdev_verify_big_io
00:29:58.694 ************************************
00:29:58.694 21:52:18 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 ''
00:29:58.694 [2024-12-06 21:52:18.894173] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:29:58.694 [2024-12-06 21:52:18.894335] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid95076 ]
00:29:58.694 [2024-12-06 21:52:19.058959] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2
00:29:58.953 [2024-12-06 21:52:19.288175] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:29:58.953 [2024-12-06 21:52:19.288182] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:29:59.521 Running I/O for 5 seconds...
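The columns in the verify table above are mutually consistent: IOPS times the 4 KiB I/O size reproduces the MiB/s column. A quick check with bc:

# 11061.04 IOPS x 4096 B per I/O, expressed in MiB/s
echo '11061.04 * 4096 / 1048576' | bc -l    # ~43.21, matching the MiB/s column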
00:30:04.796
00:30:04.796 Latency(us)
00:30:04.796 [2024-12-06T21:52:25.293Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:30:04.796 [2024-12-06T21:52:25.293Z] Job: raid5f (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:30:04.796 Verification LBA range: start 0x0 length 0x200
00:30:04.796 raid5f : 5.14 683.12 42.69 0.00 0.00 4875734.19 177.80 150613.64
00:30:04.796 [2024-12-06T21:52:25.293Z] Job: raid5f (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:30:04.796 Verification LBA range: start 0x200 length 0x200
00:30:04.796 raid5f : 5.14 682.94 42.68 0.00 0.00 4878494.86 255.07 153473.40
00:30:04.796 [2024-12-06T21:52:25.293Z] ===================================================================================================================
00:30:04.796 [2024-12-06T21:52:25.293Z] Total : 1366.06 85.38 0.00 0.00 4877114.13 177.80 153473.40
00:30:06.172
00:30:06.172 real 0m7.446s
00:30:06.172 user 0m13.628s
00:30:06.172 sys 0m0.269s
00:30:06.172 21:52:26 -- common/autotest_common.sh@1115 -- # xtrace_disable
00:30:06.172 21:52:26 -- common/autotest_common.sh@10 -- # set +x
00:30:06.172 ************************************
00:30:06.172 END TEST bdev_verify_big_io
00:30:06.172 ************************************
00:30:06.172 21:52:26 -- bdev/blockdev.sh@777 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:30:06.172 21:52:26 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']'
00:30:06.172 21:52:26 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:30:06.172 21:52:26 -- common/autotest_common.sh@10 -- # set +x
00:30:06.172 ************************************
00:30:06.172 START TEST bdev_write_zeroes
00:30:06.172 ************************************
00:30:06.172 21:52:26 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:30:06.172 [2024-12-06 21:52:26.405419] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:30:06.172 [2024-12-06 21:52:26.405617] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid95165 ]
00:30:06.172 [2024-12-06 21:52:26.577517] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:30:06.431 [2024-12-06 21:52:26.771434] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:30:06.998 Running I/O for 1 seconds...
00:30:07.934
00:30:07.934 Latency(us)
00:30:07.934 [2024-12-06T21:52:28.431Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:30:07.934 [2024-12-06T21:52:28.431Z] Job: raid5f (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:30:07.934 raid5f : 1.01 19817.42 77.41 0.00 0.00 6432.95 1824.58 8221.79
00:30:07.934 [2024-12-06T21:52:28.431Z] ===================================================================================================================
00:30:07.934 [2024-12-06T21:52:28.431Z] Total : 19817.42 77.41 0.00 0.00 6432.95 1824.58 8221.79
00:30:09.312
00:30:09.312 real 0m3.340s
00:30:09.312 user 0m2.976s
00:30:09.312 sys 0m0.251s
00:30:09.312 21:52:29 -- common/autotest_common.sh@1115 -- # xtrace_disable
00:30:09.312 ************************************
00:30:09.312 END TEST bdev_write_zeroes
00:30:09.312 21:52:29 -- common/autotest_common.sh@10 -- # set +x
00:30:09.312 ************************************
00:30:09.312 21:52:29 -- bdev/blockdev.sh@780 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:30:09.312 21:52:29 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']'
00:30:09.312 21:52:29 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:30:09.312 21:52:29 -- common/autotest_common.sh@10 -- # set +x
00:30:09.312 ************************************
00:30:09.312 START TEST bdev_json_nonenclosed
00:30:09.312 ************************************
00:30:09.312 21:52:29 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:30:09.312 [2024-12-06 21:52:29.792321] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:30:09.312 [2024-12-06 21:52:29.792510] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid95218 ]
00:30:09.571 [2024-12-06 21:52:29.964860] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:30:09.831 [2024-12-06 21:52:30.158629] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:30:09.831 [2024-12-06 21:52:30.158831] json_config.c: 595:spdk_subsystem_init_from_json_config: *ERROR*: Invalid JSON configuration: not enclosed in {}.
00:30:09.831 [2024-12-06 21:52:30.158854] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:30:10.090 00:30:10.090 real 0m0.813s 00:30:10.090 user 0m0.581s 00:30:10.090 sys 0m0.131s 00:30:10.090 21:52:30 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:30:10.090 21:52:30 -- common/autotest_common.sh@10 -- # set +x 00:30:10.090 ************************************ 00:30:10.090 END TEST bdev_json_nonenclosed 00:30:10.090 ************************************ 00:30:10.349 21:52:30 -- bdev/blockdev.sh@783 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:30:10.349 21:52:30 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:30:10.349 21:52:30 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:30:10.349 21:52:30 -- common/autotest_common.sh@10 -- # set +x 00:30:10.349 ************************************ 00:30:10.349 START TEST bdev_json_nonarray 00:30:10.349 ************************************ 00:30:10.349 21:52:30 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:30:10.349 [2024-12-06 21:52:30.660724] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:30:10.349 [2024-12-06 21:52:30.660898] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid95244 ] 00:30:10.349 [2024-12-06 21:52:30.831961] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:10.608 [2024-12-06 21:52:31.016518] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:30:10.608 [2024-12-06 21:52:31.016705] json_config.c: 601:spdk_subsystem_init_from_json_config: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
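Both negative tests above hand spdk_subsystem_init_from_json_config a config that breaks the same schema rule from a different angle: nonenclosed.json omits the outer braces, nonarray.json makes "subsystems" something other than an array, and each produces the matching *ERROR* line in the log. For contrast, a minimal well-formed config has this shape (a sketch; the actual fixture contents are not reproduced in the log):

# Smallest shape the loader accepts: an object whose "subsystems" key
# is an array of per-subsystem objects
cat > valid.json <<'EOF'
{
  "subsystems": [
    { "subsystem": "bdev", "config": [] }
  ]
}
EOF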
00:30:10.608 [2024-12-06 21:52:31.016728] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:30:11.176 00:30:11.176 real 0m0.835s 00:30:11.176 user 0m0.603s 00:30:11.176 sys 0m0.132s 00:30:11.176 21:52:31 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:30:11.176 21:52:31 -- common/autotest_common.sh@10 -- # set +x 00:30:11.176 ************************************ 00:30:11.176 END TEST bdev_json_nonarray 00:30:11.176 ************************************ 00:30:11.176 21:52:31 -- bdev/blockdev.sh@785 -- # [[ raid5f == bdev ]] 00:30:11.176 21:52:31 -- bdev/blockdev.sh@792 -- # [[ raid5f == gpt ]] 00:30:11.176 21:52:31 -- bdev/blockdev.sh@796 -- # [[ raid5f == crypto_sw ]] 00:30:11.176 21:52:31 -- bdev/blockdev.sh@808 -- # trap - SIGINT SIGTERM EXIT 00:30:11.176 21:52:31 -- bdev/blockdev.sh@809 -- # cleanup 00:30:11.176 21:52:31 -- bdev/blockdev.sh@21 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:30:11.176 21:52:31 -- bdev/blockdev.sh@22 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:30:11.176 21:52:31 -- bdev/blockdev.sh@24 -- # [[ raid5f == rbd ]] 00:30:11.176 21:52:31 -- bdev/blockdev.sh@28 -- # [[ raid5f == daos ]] 00:30:11.176 21:52:31 -- bdev/blockdev.sh@32 -- # [[ raid5f = \g\p\t ]] 00:30:11.176 21:52:31 -- bdev/blockdev.sh@38 -- # [[ raid5f == xnvme ]] 00:30:11.176 00:30:11.176 real 0m46.830s 00:30:11.176 user 1m4.303s 00:30:11.176 sys 0m4.424s 00:30:11.176 21:52:31 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:30:11.176 21:52:31 -- common/autotest_common.sh@10 -- # set +x 00:30:11.176 ************************************ 00:30:11.176 END TEST blockdev_raid5f 00:30:11.176 ************************************ 00:30:11.176 21:52:31 -- spdk/autotest.sh@370 -- # trap - SIGINT SIGTERM EXIT 00:30:11.176 21:52:31 -- spdk/autotest.sh@372 -- # timing_enter post_cleanup 00:30:11.176 21:52:31 -- common/autotest_common.sh@722 -- # xtrace_disable 00:30:11.176 21:52:31 -- common/autotest_common.sh@10 -- # set +x 00:30:11.176 21:52:31 -- spdk/autotest.sh@373 -- # autotest_cleanup 00:30:11.176 21:52:31 -- common/autotest_common.sh@1381 -- # local autotest_es=0 00:30:11.176 21:52:31 -- common/autotest_common.sh@1382 -- # xtrace_disable 00:30:11.176 21:52:31 -- common/autotest_common.sh@10 -- # set +x 00:30:13.081 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15,mount@vda:vda16, so not binding PCI dev 00:30:13.081 Waiting for block devices as requested 00:30:13.081 0000:00:06.0 (1b36 0010): uio_pci_generic -> nvme 00:30:13.340 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15,mount@vda:vda16, so not binding PCI dev 00:30:13.599 Cleaning 00:30:13.599 Removing: /var/run/dpdk/spdk0/config 00:30:13.599 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:30:13.599 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:30:13.599 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:30:13.599 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:30:13.599 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:30:13.599 Removing: /var/run/dpdk/spdk0/hugepage_info 00:30:13.599 Removing: /dev/shm/spdk_tgt_trace.pid60431 00:30:13.599 Removing: /var/run/dpdk/spdk0 00:30:13.599 Removing: /var/run/dpdk/spdk_pid60219 00:30:13.599 Removing: /var/run/dpdk/spdk_pid60431 00:30:13.599 Removing: /var/run/dpdk/spdk_pid60709 00:30:13.599 Removing: /var/run/dpdk/spdk_pid60955 00:30:13.599 Removing: /var/run/dpdk/spdk_pid61137 00:30:13.599 Removing: /var/run/dpdk/spdk_pid61243 00:30:13.599 Removing: 
/var/run/dpdk/spdk_pid61351 00:30:13.599 Removing: /var/run/dpdk/spdk_pid61480 00:30:13.599 Removing: /var/run/dpdk/spdk_pid61589 00:30:13.599 Removing: /var/run/dpdk/spdk_pid61623 00:30:13.599 Removing: /var/run/dpdk/spdk_pid61665 00:30:13.599 Removing: /var/run/dpdk/spdk_pid61740 00:30:13.599 Removing: /var/run/dpdk/spdk_pid61846 00:30:13.599 Removing: /var/run/dpdk/spdk_pid62347 00:30:13.599 Removing: /var/run/dpdk/spdk_pid62419 00:30:13.599 Removing: /var/run/dpdk/spdk_pid62495 00:30:13.599 Removing: /var/run/dpdk/spdk_pid62524 00:30:13.599 Removing: /var/run/dpdk/spdk_pid62658 00:30:13.599 Removing: /var/run/dpdk/spdk_pid62687 00:30:13.599 Removing: /var/run/dpdk/spdk_pid62826 00:30:13.599 Removing: /var/run/dpdk/spdk_pid62850 00:30:13.599 Removing: /var/run/dpdk/spdk_pid62914 00:30:13.599 Removing: /var/run/dpdk/spdk_pid62945 00:30:13.600 Removing: /var/run/dpdk/spdk_pid63009 00:30:13.600 Removing: /var/run/dpdk/spdk_pid63029 00:30:13.600 Removing: /var/run/dpdk/spdk_pid63220 00:30:13.600 Removing: /var/run/dpdk/spdk_pid63262 00:30:13.600 Removing: /var/run/dpdk/spdk_pid63298 00:30:13.600 Removing: /var/run/dpdk/spdk_pid63382 00:30:13.600 Removing: /var/run/dpdk/spdk_pid63464 00:30:13.600 Removing: /var/run/dpdk/spdk_pid63496 00:30:13.600 Removing: /var/run/dpdk/spdk_pid63569 00:30:13.600 Removing: /var/run/dpdk/spdk_pid63595 00:30:13.600 Removing: /var/run/dpdk/spdk_pid63641 00:30:13.600 Removing: /var/run/dpdk/spdk_pid63673 00:30:13.600 Removing: /var/run/dpdk/spdk_pid63714 00:30:13.600 Removing: /var/run/dpdk/spdk_pid63740 00:30:13.600 Removing: /var/run/dpdk/spdk_pid63787 00:30:13.600 Removing: /var/run/dpdk/spdk_pid63818 00:30:13.600 Removing: /var/run/dpdk/spdk_pid63865 00:30:13.600 Removing: /var/run/dpdk/spdk_pid63891 00:30:13.600 Removing: /var/run/dpdk/spdk_pid63932 00:30:13.600 Removing: /var/run/dpdk/spdk_pid63969 00:30:13.600 Removing: /var/run/dpdk/spdk_pid64010 00:30:13.600 Removing: /var/run/dpdk/spdk_pid64042 00:30:13.600 Removing: /var/run/dpdk/spdk_pid64083 00:30:13.600 Removing: /var/run/dpdk/spdk_pid64113 00:30:13.600 Removing: /var/run/dpdk/spdk_pid64161 00:30:13.600 Removing: /var/run/dpdk/spdk_pid64187 00:30:13.600 Removing: /var/run/dpdk/spdk_pid64228 00:30:13.600 Removing: /var/run/dpdk/spdk_pid64260 00:30:13.600 Removing: /var/run/dpdk/spdk_pid64301 00:30:13.600 Removing: /var/run/dpdk/spdk_pid64327 00:30:13.600 Removing: /var/run/dpdk/spdk_pid64369 00:30:13.600 Removing: /var/run/dpdk/spdk_pid64405 00:30:13.859 Removing: /var/run/dpdk/spdk_pid64449 00:30:13.859 Removing: /var/run/dpdk/spdk_pid64475 00:30:13.859 Removing: /var/run/dpdk/spdk_pid64522 00:30:13.859 Removing: /var/run/dpdk/spdk_pid64548 00:30:13.859 Removing: /var/run/dpdk/spdk_pid64589 00:30:13.859 Removing: /var/run/dpdk/spdk_pid64621 00:30:13.859 Removing: /var/run/dpdk/spdk_pid64666 00:30:13.859 Removing: /var/run/dpdk/spdk_pid64693 00:30:13.859 Removing: /var/run/dpdk/spdk_pid64740 00:30:13.859 Removing: /var/run/dpdk/spdk_pid64769 00:30:13.859 Removing: /var/run/dpdk/spdk_pid64813 00:30:13.859 Removing: /var/run/dpdk/spdk_pid64848 00:30:13.859 Removing: /var/run/dpdk/spdk_pid64897 00:30:13.859 Removing: /var/run/dpdk/spdk_pid64923 00:30:13.859 Removing: /var/run/dpdk/spdk_pid64970 00:30:13.859 Removing: /var/run/dpdk/spdk_pid64996 00:30:13.859 Removing: /var/run/dpdk/spdk_pid65038 00:30:13.859 Removing: /var/run/dpdk/spdk_pid65127 00:30:13.859 Removing: /var/run/dpdk/spdk_pid65245 00:30:13.859 Removing: /var/run/dpdk/spdk_pid65432 00:30:13.859 Removing: /var/run/dpdk/spdk_pid65505 
00:30:13.859 Removing: /var/run/dpdk/spdk_pid65558 00:30:13.859 Removing: /var/run/dpdk/spdk_pid66757 00:30:13.859 Removing: /var/run/dpdk/spdk_pid66962 00:30:13.859 Removing: /var/run/dpdk/spdk_pid67149 00:30:13.859 Removing: /var/run/dpdk/spdk_pid67266 00:30:13.859 Removing: /var/run/dpdk/spdk_pid67392 00:30:13.859 Removing: /var/run/dpdk/spdk_pid67455 00:30:13.859 Removing: /var/run/dpdk/spdk_pid67482 00:30:13.859 Removing: /var/run/dpdk/spdk_pid67507 00:30:13.859 Removing: /var/run/dpdk/spdk_pid67926 00:30:13.859 Removing: /var/run/dpdk/spdk_pid68008 00:30:13.859 Removing: /var/run/dpdk/spdk_pid68115 00:30:13.859 Removing: /var/run/dpdk/spdk_pid68173 00:30:13.859 Removing: /var/run/dpdk/spdk_pid69277 00:30:13.859 Removing: /var/run/dpdk/spdk_pid70099 00:30:13.859 Removing: /var/run/dpdk/spdk_pid70898 00:30:13.859 Removing: /var/run/dpdk/spdk_pid71913 00:30:13.859 Removing: /var/run/dpdk/spdk_pid72882 00:30:13.859 Removing: /var/run/dpdk/spdk_pid73854 00:30:13.859 Removing: /var/run/dpdk/spdk_pid75193 00:30:13.859 Removing: /var/run/dpdk/spdk_pid76284 00:30:13.859 Removing: /var/run/dpdk/spdk_pid77377 00:30:13.859 Removing: /var/run/dpdk/spdk_pid77985 00:30:13.859 Removing: /var/run/dpdk/spdk_pid78486 00:30:13.859 Removing: /var/run/dpdk/spdk_pid79077 00:30:13.859 Removing: /var/run/dpdk/spdk_pid79516 00:30:13.859 Removing: /var/run/dpdk/spdk_pid80005 00:30:13.859 Removing: /var/run/dpdk/spdk_pid80513 00:30:13.859 Removing: /var/run/dpdk/spdk_pid81110 00:30:13.859 Removing: /var/run/dpdk/spdk_pid81585 00:30:13.859 Removing: /var/run/dpdk/spdk_pid82798 00:30:13.859 Removing: /var/run/dpdk/spdk_pid83335 00:30:13.859 Removing: /var/run/dpdk/spdk_pid83818 00:30:13.859 Removing: /var/run/dpdk/spdk_pid85160 00:30:13.859 Removing: /var/run/dpdk/spdk_pid85753 00:30:13.859 Removing: /var/run/dpdk/spdk_pid86317 00:30:13.859 Removing: /var/run/dpdk/spdk_pid87014 00:30:13.859 Removing: /var/run/dpdk/spdk_pid87055 00:30:13.859 Removing: /var/run/dpdk/spdk_pid87106 00:30:13.859 Removing: /var/run/dpdk/spdk_pid87161 00:30:13.859 Removing: /var/run/dpdk/spdk_pid87295 00:30:13.859 Removing: /var/run/dpdk/spdk_pid87438 00:30:13.859 Removing: /var/run/dpdk/spdk_pid87664 00:30:13.859 Removing: /var/run/dpdk/spdk_pid87941 00:30:13.859 Removing: /var/run/dpdk/spdk_pid87960 00:30:13.859 Removing: /var/run/dpdk/spdk_pid88003 00:30:13.859 Removing: /var/run/dpdk/spdk_pid88022 00:30:13.859 Removing: /var/run/dpdk/spdk_pid88048 00:30:13.859 Removing: /var/run/dpdk/spdk_pid88078 00:30:13.859 Removing: /var/run/dpdk/spdk_pid88097 00:30:13.859 Removing: /var/run/dpdk/spdk_pid88123 00:30:13.859 Removing: /var/run/dpdk/spdk_pid88147 00:30:13.859 Removing: /var/run/dpdk/spdk_pid88172 00:30:13.859 Removing: /var/run/dpdk/spdk_pid88192 00:30:13.859 Removing: /var/run/dpdk/spdk_pid88222 00:30:13.859 Removing: /var/run/dpdk/spdk_pid88247 00:30:13.859 Removing: /var/run/dpdk/spdk_pid88271 00:30:13.859 Removing: /var/run/dpdk/spdk_pid88297 00:30:13.859 Removing: /var/run/dpdk/spdk_pid88322 00:30:13.859 Removing: /var/run/dpdk/spdk_pid88343 00:30:13.859 Removing: /var/run/dpdk/spdk_pid88372 00:30:13.859 Removing: /var/run/dpdk/spdk_pid88397 00:30:14.119 Removing: /var/run/dpdk/spdk_pid88418 00:30:14.119 Removing: /var/run/dpdk/spdk_pid88464 00:30:14.119 Removing: /var/run/dpdk/spdk_pid88483 00:30:14.119 Removing: /var/run/dpdk/spdk_pid88523 00:30:14.119 Removing: /var/run/dpdk/spdk_pid88605 00:30:14.119 Removing: /var/run/dpdk/spdk_pid88638 00:30:14.119 Removing: /var/run/dpdk/spdk_pid88659 00:30:14.119 Removing: 
/var/run/dpdk/spdk_pid88699 00:30:14.119 Removing: /var/run/dpdk/spdk_pid88722 00:30:14.119 Removing: /var/run/dpdk/spdk_pid88736 00:30:14.119 Removing: /var/run/dpdk/spdk_pid88789 00:30:14.119 Removing: /var/run/dpdk/spdk_pid88808 00:30:14.119 Removing: /var/run/dpdk/spdk_pid88851 00:30:14.119 Removing: /var/run/dpdk/spdk_pid88865 00:30:14.119 Removing: /var/run/dpdk/spdk_pid88885 00:30:14.119 Removing: /var/run/dpdk/spdk_pid88899 00:30:14.119 Removing: /var/run/dpdk/spdk_pid88925 00:30:14.119 Removing: /var/run/dpdk/spdk_pid88939 00:30:14.119 Removing: /var/run/dpdk/spdk_pid88959 00:30:14.119 Removing: /var/run/dpdk/spdk_pid88973 00:30:14.119 Removing: /var/run/dpdk/spdk_pid89016 00:30:14.119 Removing: /var/run/dpdk/spdk_pid89050 00:30:14.119 Removing: /var/run/dpdk/spdk_pid89072 00:30:14.119 Removing: /var/run/dpdk/spdk_pid89107 00:30:14.119 Removing: /var/run/dpdk/spdk_pid89133 00:30:14.119 Removing: /var/run/dpdk/spdk_pid89148 00:30:14.119 Removing: /var/run/dpdk/spdk_pid89201 00:30:14.119 Removing: /var/run/dpdk/spdk_pid89219 00:30:14.119 Removing: /var/run/dpdk/spdk_pid89258 00:30:14.119 Removing: /var/run/dpdk/spdk_pid89276 00:30:14.119 Removing: /var/run/dpdk/spdk_pid89291 00:30:14.119 Removing: /var/run/dpdk/spdk_pid89311 00:30:14.119 Removing: /var/run/dpdk/spdk_pid89325 00:30:14.119 Removing: /var/run/dpdk/spdk_pid89349 00:30:14.119 Removing: /var/run/dpdk/spdk_pid89364 00:30:14.119 Removing: /var/run/dpdk/spdk_pid89384 00:30:14.119 Removing: /var/run/dpdk/spdk_pid89472 00:30:14.119 Removing: /var/run/dpdk/spdk_pid89548 00:30:14.119 Removing: /var/run/dpdk/spdk_pid89687 00:30:14.119 Removing: /var/run/dpdk/spdk_pid89713 00:30:14.119 Removing: /var/run/dpdk/spdk_pid89752 00:30:14.119 Removing: /var/run/dpdk/spdk_pid89803 00:30:14.119 Removing: /var/run/dpdk/spdk_pid89841 00:30:14.119 Removing: /var/run/dpdk/spdk_pid89866 00:30:14.119 Removing: /var/run/dpdk/spdk_pid89894 00:30:14.119 Removing: /var/run/dpdk/spdk_pid89930 00:30:14.119 Removing: /var/run/dpdk/spdk_pid89957 00:30:14.119 Removing: /var/run/dpdk/spdk_pid90044 00:30:14.119 Removing: /var/run/dpdk/spdk_pid90090 00:30:14.119 Removing: /var/run/dpdk/spdk_pid90138 00:30:14.119 Removing: /var/run/dpdk/spdk_pid90381 00:30:14.119 Removing: /var/run/dpdk/spdk_pid90487 00:30:14.119 Removing: /var/run/dpdk/spdk_pid90522 00:30:14.119 Removing: /var/run/dpdk/spdk_pid90615 00:30:14.119 Removing: /var/run/dpdk/spdk_pid90686 00:30:14.119 Removing: /var/run/dpdk/spdk_pid90723 00:30:14.119 Removing: /var/run/dpdk/spdk_pid90954 00:30:14.119 Removing: /var/run/dpdk/spdk_pid91097 00:30:14.119 Removing: /var/run/dpdk/spdk_pid91185 00:30:14.119 Removing: /var/run/dpdk/spdk_pid91234 00:30:14.119 Removing: /var/run/dpdk/spdk_pid91264 00:30:14.119 Removing: /var/run/dpdk/spdk_pid91350 00:30:14.119 Removing: /var/run/dpdk/spdk_pid91747 00:30:14.119 Removing: /var/run/dpdk/spdk_pid91783 00:30:14.119 Removing: /var/run/dpdk/spdk_pid92064 00:30:14.119 Removing: /var/run/dpdk/spdk_pid92167 00:30:14.119 Removing: /var/run/dpdk/spdk_pid92265 00:30:14.119 Removing: /var/run/dpdk/spdk_pid92307 00:30:14.119 Removing: /var/run/dpdk/spdk_pid92334 00:30:14.119 Removing: /var/run/dpdk/spdk_pid92365 00:30:14.119 Removing: /var/run/dpdk/spdk_pid93575 00:30:14.119 Removing: /var/run/dpdk/spdk_pid93710 00:30:14.119 Removing: /var/run/dpdk/spdk_pid93715 00:30:14.119 Removing: /var/run/dpdk/spdk_pid93738 00:30:14.119 Removing: /var/run/dpdk/spdk_pid94192 00:30:14.119 Removing: /var/run/dpdk/spdk_pid94301 00:30:14.119 Removing: /var/run/dpdk/spdk_pid94445 
00:30:14.119 Removing: /var/run/dpdk/spdk_pid94509 00:30:14.119 Removing: /var/run/dpdk/spdk_pid94551 00:30:14.119 Removing: /var/run/dpdk/spdk_pid94814 00:30:14.379 Removing: /var/run/dpdk/spdk_pid94978 00:30:14.379 Removing: /var/run/dpdk/spdk_pid95076 00:30:14.379 Removing: /var/run/dpdk/spdk_pid95165 00:30:14.379 Removing: /var/run/dpdk/spdk_pid95218 00:30:14.379 Removing: /var/run/dpdk/spdk_pid95244 00:30:14.379 Clean 00:30:14.379 killing process with pid 51387 00:30:14.379 killing process with pid 51394 00:30:14.379 21:52:34 -- common/autotest_common.sh@1446 -- # return 0 00:30:14.379 21:52:34 -- spdk/autotest.sh@374 -- # timing_exit post_cleanup 00:30:14.379 21:52:34 -- common/autotest_common.sh@728 -- # xtrace_disable 00:30:14.379 21:52:34 -- common/autotest_common.sh@10 -- # set +x 00:30:14.379 21:52:34 -- spdk/autotest.sh@376 -- # timing_exit autotest 00:30:14.379 21:52:34 -- common/autotest_common.sh@728 -- # xtrace_disable 00:30:14.379 21:52:34 -- common/autotest_common.sh@10 -- # set +x 00:30:14.379 21:52:34 -- spdk/autotest.sh@377 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:30:14.638 21:52:34 -- spdk/autotest.sh@379 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]] 00:30:14.638 21:52:34 -- spdk/autotest.sh@379 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log 00:30:14.638 21:52:34 -- spdk/autotest.sh@381 -- # [[ y == y ]] 00:30:14.638 21:52:34 -- spdk/autotest.sh@383 -- # hostname 00:30:14.638 21:52:34 -- spdk/autotest.sh@383 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /home/vagrant/spdk_repo/spdk -t ubuntu2404-cloud-1720510786-2314 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info 00:30:14.638 geninfo: WARNING: invalid characters removed from testname! 
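(Annotation: the lcov invocations that follow implement a standard two-capture coverage flow: a baseline capture taken before the tests, cov_base.info, is merged with the post-test capture, cov_test.info, and the union is then filtered by removing paths that should not count toward SPDK coverage — the dpdk submodule, /usr system headers, and a few example/app directories. Stripped of the long --rc option lists, the sequence below amounts to:

    # Schematic of the merge-and-filter steps logged below (options abridged):
    lcov -q -a cov_base.info -a cov_test.info -o cov_total.info      # union of baseline + test captures
    lcov -q -r cov_total.info '*/dpdk/*'           -o cov_total.info # drop the DPDK submodule
    lcov -q -r cov_total.info '/usr/*'             -o cov_total.info # drop system headers
    lcov -q -r cov_total.info '*/examples/vmd/*'   -o cov_total.info # drop example/app code
    lcov -q -r cov_total.info '*/app/spdk_lspci/*' -o cov_total.info
    lcov -q -r cov_total.info '*/app/spdk_top/*'   -o cov_total.info

Filtering in place against the same -o target is what the real commands do as well; each -r pass rewrites cov_total.info with the matching records removed.)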
00:31:10.865 21:53:27 -- spdk/autotest.sh@384 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:31:13.445 21:53:33 -- spdk/autotest.sh@385 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:31:17.624 21:53:37 -- spdk/autotest.sh@389 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:31:20.910 21:53:40 -- spdk/autotest.sh@390 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:31:24.211 21:53:44 -- spdk/autotest.sh@391 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:31:27.496 21:53:47 -- spdk/autotest.sh@392 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:31:30.788 21:53:50 -- spdk/autotest.sh@393 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:31:30.788 21:53:51 -- common/autotest_common.sh@1689 -- $ [[ y == y ]] 00:31:30.788 21:53:51 -- common/autotest_common.sh@1690 -- $ lcov --version 00:31:30.788 21:53:51 -- common/autotest_common.sh@1690 -- $ awk '{print $NF}' 00:31:30.788 21:53:51 -- common/autotest_common.sh@1690 -- $ lt 1.15 2 00:31:30.788 21:53:51 -- scripts/common.sh@372 -- $ cmp_versions 1.15 '<' 2 00:31:30.788 21:53:51 -- scripts/common.sh@332 -- $ local ver1 ver1_l 00:31:30.788 21:53:51 -- scripts/common.sh@333 -- $ local ver2 ver2_l 00:31:30.788 21:53:51 -- scripts/common.sh@335 -- $ IFS=.-: 00:31:30.788 21:53:51 -- scripts/common.sh@335 -- $ read -ra ver1 00:31:30.788 21:53:51 -- scripts/common.sh@336 -- $ IFS=.-: 00:31:30.788 21:53:51 -- scripts/common.sh@336 -- $ read -ra ver2 00:31:30.788 21:53:51 -- scripts/common.sh@337 -- $ local 'op=<' 00:31:30.788 21:53:51 -- scripts/common.sh@339 -- $ ver1_l=2 00:31:30.788 21:53:51 -- scripts/common.sh@340 -- $ ver2_l=1 00:31:30.788 21:53:51 -- scripts/common.sh@342 -- $ local lt=0 gt=0 eq=0 
v 00:31:30.788 21:53:51 -- scripts/common.sh@343 -- $ case "$op" in 00:31:30.788 21:53:51 -- scripts/common.sh@344 -- $ : 1 00:31:30.788 21:53:51 -- scripts/common.sh@363 -- $ (( v = 0 )) 00:31:30.788 21:53:51 -- scripts/common.sh@363 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:31:30.788 21:53:51 -- scripts/common.sh@364 -- $ decimal 1 00:31:30.788 21:53:51 -- scripts/common.sh@352 -- $ local d=1 00:31:30.788 21:53:51 -- scripts/common.sh@353 -- $ [[ 1 =~ ^[0-9]+$ ]] 00:31:30.788 21:53:51 -- scripts/common.sh@354 -- $ echo 1 00:31:30.788 21:53:51 -- scripts/common.sh@364 -- $ ver1[v]=1 00:31:30.788 21:53:51 -- scripts/common.sh@365 -- $ decimal 2 00:31:30.788 21:53:51 -- scripts/common.sh@352 -- $ local d=2 00:31:30.788 21:53:51 -- scripts/common.sh@353 -- $ [[ 2 =~ ^[0-9]+$ ]] 00:31:30.788 21:53:51 -- scripts/common.sh@354 -- $ echo 2 00:31:30.788 21:53:51 -- scripts/common.sh@365 -- $ ver2[v]=2 00:31:30.788 21:53:51 -- scripts/common.sh@366 -- $ (( ver1[v] > ver2[v] )) 00:31:30.788 21:53:51 -- scripts/common.sh@367 -- $ (( ver1[v] < ver2[v] )) 00:31:30.788 21:53:51 -- scripts/common.sh@367 -- $ return 0 00:31:30.788 21:53:51 -- common/autotest_common.sh@1691 -- $ lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:30.788 21:53:51 -- common/autotest_common.sh@1703 -- $ export 'LCOV_OPTS= 00:31:30.788 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:30.788 --rc genhtml_branch_coverage=1 00:31:30.788 --rc genhtml_function_coverage=1 00:31:30.788 --rc genhtml_legend=1 00:31:30.788 --rc geninfo_all_blocks=1 00:31:30.788 --rc geninfo_unexecuted_blocks=1 00:31:30.788 00:31:30.788 ' 00:31:30.788 21:53:51 -- common/autotest_common.sh@1703 -- $ LCOV_OPTS=' 00:31:30.788 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:30.788 --rc genhtml_branch_coverage=1 00:31:30.788 --rc genhtml_function_coverage=1 00:31:30.788 --rc genhtml_legend=1 00:31:30.788 --rc geninfo_all_blocks=1 00:31:30.788 --rc geninfo_unexecuted_blocks=1 00:31:30.788 00:31:30.788 ' 00:31:30.788 21:53:51 -- common/autotest_common.sh@1704 -- $ export 'LCOV=lcov 00:31:30.788 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:30.788 --rc genhtml_branch_coverage=1 00:31:30.788 --rc genhtml_function_coverage=1 00:31:30.788 --rc genhtml_legend=1 00:31:30.788 --rc geninfo_all_blocks=1 00:31:30.788 --rc geninfo_unexecuted_blocks=1 00:31:30.788 00:31:30.788 ' 00:31:30.788 21:53:51 -- common/autotest_common.sh@1704 -- $ LCOV='lcov 00:31:30.788 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:30.788 --rc genhtml_branch_coverage=1 00:31:30.788 --rc genhtml_function_coverage=1 00:31:30.788 --rc genhtml_legend=1 00:31:30.788 --rc geninfo_all_blocks=1 00:31:30.788 --rc geninfo_unexecuted_blocks=1 00:31:30.788 00:31:30.788 ' 00:31:30.788 21:53:51 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:31:30.788 21:53:51 -- scripts/common.sh@433 -- $ [[ -e /bin/wpdk_common.sh ]] 00:31:30.788 21:53:51 -- scripts/common.sh@441 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:30.788 21:53:51 -- scripts/common.sh@442 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:30.788 21:53:51 -- paths/export.sh@2 -- $ 
PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:31:30.788 21:53:51 -- paths/export.sh@3 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:31:30.789 21:53:51 -- paths/export.sh@4 -- $ PATH=/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:31:30.789 21:53:51 -- paths/export.sh@5 -- $ PATH=/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:31:30.789 21:53:51 -- paths/export.sh@6 -- $ export PATH 00:31:30.789 21:53:51 -- paths/export.sh@7 -- $ echo /opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin 00:31:30.789 21:53:51 -- common/autobuild_common.sh@439 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:31:30.789 21:53:51 -- common/autobuild_common.sh@440 -- $ date +%s 00:31:30.789 21:53:51 -- common/autobuild_common.sh@440 -- $ mktemp -dt spdk_1733522031.XXXXXX 00:31:30.789 21:53:51 -- common/autobuild_common.sh@440 -- $ SPDK_WORKSPACE=/tmp/spdk_1733522031.Qp0rAy 00:31:30.789 21:53:51 -- common/autobuild_common.sh@442 -- $ [[ -n '' ]] 00:31:30.789 21:53:51 -- common/autobuild_common.sh@446 -- $ '[' -n '' ']' 00:31:30.789 21:53:51 -- common/autobuild_common.sh@449 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/' 00:31:30.789 21:53:51 -- common/autobuild_common.sh@453 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:31:30.789 21:53:51 -- common/autobuild_common.sh@455 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:31:30.789 21:53:51 -- common/autobuild_common.sh@456 -- $ get_config_params 00:31:30.789 21:53:51 -- common/autotest_common.sh@397 -- $ xtrace_disable 00:31:30.789 21:53:51 -- common/autotest_common.sh@10 -- $ set +x 00:31:30.789 21:53:51 -- common/autobuild_common.sh@456 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --enable-ubsan --enable-asan 
--enable-coverage --with-ublk --with-raid5f' 00:31:30.789 21:53:51 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j10 00:31:30.789 21:53:51 -- spdk/autopackage.sh@11 -- $ cd /home/vagrant/spdk_repo/spdk 00:31:30.789 21:53:51 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]] 00:31:30.789 21:53:51 -- spdk/autopackage.sh@18 -- $ [[ 1 -eq 0 ]] 00:31:30.789 21:53:51 -- spdk/autopackage.sh@18 -- $ [[ 1 -eq 0 ]] 00:31:30.789 21:53:51 -- spdk/autopackage.sh@23 -- $ timing_enter build_release 00:31:30.789 21:53:51 -- common/autotest_common.sh@722 -- $ xtrace_disable 00:31:30.789 21:53:51 -- common/autotest_common.sh@10 -- $ set +x 00:31:30.789 21:53:51 -- spdk/autopackage.sh@26 -- $ [[ '' == *clang* ]] 00:31:30.789 21:53:51 -- spdk/autopackage.sh@36 -- $ [[ -n '' ]] 00:31:30.789 21:53:51 -- spdk/autopackage.sh@40 -- $ get_config_params 00:31:30.789 21:53:51 -- spdk/autopackage.sh@40 -- $ sed s/--enable-debug//g 00:31:30.789 21:53:51 -- common/autotest_common.sh@397 -- $ xtrace_disable 00:31:30.789 21:53:51 -- common/autotest_common.sh@10 -- $ set +x 00:31:30.789 21:53:51 -- spdk/autopackage.sh@40 -- $ config_params=' --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-raid5f' 00:31:30.789 21:53:51 -- spdk/autopackage.sh@41 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-raid5f --enable-lto 00:31:30.789 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:31:30.789 Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build 00:31:31.357 Using 'verbs' RDMA provider 00:31:44.131 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/isa-l/spdk-isal.log)...done. 00:31:56.338 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/isa-l-crypto/spdk-isal-crypto.log)...done. 00:31:56.338 Creating mk/config.mk...done. 00:31:56.338 Creating mk/cc.flags.mk...done. 00:31:56.338 Type 'make' to build. 00:31:56.338 21:54:15 -- spdk/autopackage.sh@43 -- $ make -j10 00:31:56.338 make[1]: Nothing to be done for 'all'. 
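(Annotation: at this point autopackage.sh switches the tree from the instrumented test build to a release build: it re-reads the original config_params, strips --enable-debug with sed, reconfigures with --enable-lto added on top, and rebuilds with the MAKEFLAGS parallelism set earlier. Condensed, and assuming the helper names visible in the trace above:

    # Condensed form of the release re-build performed above:
    config_params="$(get_config_params | sed 's/--enable-debug//g')"  # drop debug for the release pass
    ./configure $config_params --enable-lto
    make -j10                                                         # MAKEFLAGS=-j10 per autopackage.sh

Since --with-raid5f and the sanitizers stay in config_params, the Meson/DPDK configuration that follows is a static, LTO, ASan-instrumented build — which matches the "b_lto : true" and "b_sanitize : address" user-defined options reported in the DPDK build summary further down.)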
00:32:00.526 The Meson build system 00:32:00.526 Version: 1.4.1 00:32:00.526 Source dir: /home/vagrant/spdk_repo/spdk/dpdk 00:32:00.526 Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp 00:32:00.526 Build type: native build 00:32:00.526 Program cat found: YES (/usr/bin/cat) 00:32:00.526 Project name: DPDK 00:32:00.526 Project version: 23.11.0 00:32:00.526 C compiler for the host machine: cc (gcc 13.2.0 "cc (Ubuntu 13.2.0-23ubuntu4) 13.2.0") 00:32:00.526 C linker for the host machine: cc ld.bfd 2.42 00:32:00.526 Host machine cpu family: x86_64 00:32:00.526 Host machine cpu: x86_64 00:32:00.526 Message: ## Building in Developer Mode ## 00:32:00.526 Program pkg-config found: YES (/usr/bin/pkg-config) 00:32:00.526 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh) 00:32:00.526 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:32:00.526 Program python3 found: YES (/var/spdk/dependencies/pip/bin/python3) 00:32:00.526 Program cat found: YES (/usr/bin/cat) 00:32:00.526 Compiler for C supports arguments -march=native: YES 00:32:00.526 Checking for size of "void *" : 8 00:32:00.526 Checking for size of "void *" : 8 (cached) 00:32:00.526 Library m found: YES 00:32:00.526 Library numa found: YES 00:32:00.526 Has header "numaif.h" : YES 00:32:00.526 Library fdt found: NO 00:32:00.526 Library execinfo found: NO 00:32:00.526 Has header "execinfo.h" : YES 00:32:00.526 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.1 00:32:00.526 Run-time dependency libarchive found: NO (tried pkgconfig) 00:32:00.526 Run-time dependency libbsd found: NO (tried pkgconfig) 00:32:00.526 Run-time dependency jansson found: NO (tried pkgconfig) 00:32:00.526 Run-time dependency openssl found: YES 3.0.13 00:32:00.526 Run-time dependency libpcap found: NO (tried pkgconfig) 00:32:00.526 Library pcap found: NO 00:32:00.526 Compiler for C supports arguments -Wcast-qual: YES 00:32:00.526 Compiler for C supports arguments -Wdeprecated: YES 00:32:00.526 Compiler for C supports arguments -Wformat: YES 00:32:00.526 Compiler for C supports arguments -Wformat-nonliteral: YES 00:32:00.526 Compiler for C supports arguments -Wformat-security: YES 00:32:00.526 Compiler for C supports arguments -Wmissing-declarations: YES 00:32:00.526 Compiler for C supports arguments -Wmissing-prototypes: YES 00:32:00.526 Compiler for C supports arguments -Wnested-externs: YES 00:32:00.526 Compiler for C supports arguments -Wold-style-definition: YES 00:32:00.526 Compiler for C supports arguments -Wpointer-arith: YES 00:32:00.526 Compiler for C supports arguments -Wsign-compare: YES 00:32:00.526 Compiler for C supports arguments -Wstrict-prototypes: YES 00:32:00.527 Compiler for C supports arguments -Wundef: YES 00:32:00.527 Compiler for C supports arguments -Wwrite-strings: YES 00:32:00.527 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:32:00.527 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:32:00.527 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:32:00.527 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:32:00.527 Program objdump found: YES (/usr/bin/objdump) 00:32:00.527 Compiler for C supports arguments -mavx512f: YES 00:32:00.527 Checking if "AVX512 checking" compiles: YES 00:32:00.527 Fetching value of define "__SSE4_2__" : 1 00:32:00.527 Fetching value of define "__AES__" : 1 00:32:00.527 Fetching value of define "__AVX__" : 1 00:32:00.527 
Fetching value of define "__AVX2__" : 1 00:32:00.527 Fetching value of define "__AVX512BW__" : (undefined) 00:32:00.527 Fetching value of define "__AVX512CD__" : (undefined) 00:32:00.527 Fetching value of define "__AVX512DQ__" : (undefined) 00:32:00.527 Fetching value of define "__AVX512F__" : (undefined) 00:32:00.527 Fetching value of define "__AVX512VL__" : (undefined) 00:32:00.527 Fetching value of define "__PCLMUL__" : 1 00:32:00.527 Fetching value of define "__RDRND__" : 1 00:32:00.527 Fetching value of define "__RDSEED__" : 1 00:32:00.527 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:32:00.527 Fetching value of define "__znver1__" : (undefined) 00:32:00.527 Fetching value of define "__znver2__" : (undefined) 00:32:00.527 Fetching value of define "__znver3__" : (undefined) 00:32:00.527 Fetching value of define "__znver4__" : (undefined) 00:32:00.527 Compiler for C supports arguments -ffat-lto-objects: YES 00:32:00.527 Library asan found: YES 00:32:00.527 Compiler for C supports arguments -Wno-format-truncation: YES 00:32:00.527 Message: lib/log: Defining dependency "log" 00:32:00.527 Message: lib/kvargs: Defining dependency "kvargs" 00:32:00.527 Message: lib/telemetry: Defining dependency "telemetry" 00:32:00.527 Library rt found: YES 00:32:00.527 Checking for function "getentropy" : NO 00:32:00.527 Message: lib/eal: Defining dependency "eal" 00:32:00.527 Message: lib/ring: Defining dependency "ring" 00:32:00.527 Message: lib/rcu: Defining dependency "rcu" 00:32:00.527 Message: lib/mempool: Defining dependency "mempool" 00:32:00.527 Message: lib/mbuf: Defining dependency "mbuf" 00:32:00.527 Fetching value of define "__PCLMUL__" : 1 (cached) 00:32:00.527 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:32:00.527 Compiler for C supports arguments -mpclmul: YES 00:32:00.527 Compiler for C supports arguments -maes: YES 00:32:00.527 Compiler for C supports arguments -mavx512f: YES (cached) 00:32:00.527 Compiler for C supports arguments -mavx512bw: YES 00:32:00.527 Compiler for C supports arguments -mavx512dq: YES 00:32:00.527 Compiler for C supports arguments -mavx512vl: YES 00:32:00.527 Compiler for C supports arguments -mvpclmulqdq: YES 00:32:00.527 Compiler for C supports arguments -mavx2: YES 00:32:00.527 Compiler for C supports arguments -mavx: YES 00:32:00.527 Message: lib/net: Defining dependency "net" 00:32:00.527 Message: lib/meter: Defining dependency "meter" 00:32:00.527 Message: lib/ethdev: Defining dependency "ethdev" 00:32:00.527 Message: lib/pci: Defining dependency "pci" 00:32:00.527 Message: lib/cmdline: Defining dependency "cmdline" 00:32:00.527 Message: lib/hash: Defining dependency "hash" 00:32:00.527 Message: lib/timer: Defining dependency "timer" 00:32:00.527 Message: lib/compressdev: Defining dependency "compressdev" 00:32:00.527 Message: lib/cryptodev: Defining dependency "cryptodev" 00:32:00.527 Message: lib/dmadev: Defining dependency "dmadev" 00:32:00.527 Compiler for C supports arguments -Wno-cast-qual: YES 00:32:00.527 Message: lib/power: Defining dependency "power" 00:32:00.527 Message: lib/reorder: Defining dependency "reorder" 00:32:00.527 Message: lib/security: Defining dependency "security" 00:32:00.527 Has header "linux/userfaultfd.h" : YES 00:32:00.527 Has header "linux/vduse.h" : YES 00:32:00.527 Message: lib/vhost: Defining dependency "vhost" 00:32:00.527 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:32:00.527 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:32:00.527 Message: 
drivers/bus/vdev: Defining dependency "bus_vdev" 00:32:00.527 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:32:00.527 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:32:00.527 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:32:00.527 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:32:00.527 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:32:00.527 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:32:00.527 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:32:00.527 Program doxygen found: YES (/usr/bin/doxygen) 00:32:00.527 Configuring doxy-api-html.conf using configuration 00:32:00.527 Configuring doxy-api-man.conf using configuration 00:32:00.527 Program mandb found: YES (/usr/bin/mandb) 00:32:00.527 Program sphinx-build found: NO 00:32:00.527 Configuring rte_build_config.h using configuration 00:32:00.527 Message: 00:32:00.527 ================= 00:32:00.527 Applications Enabled 00:32:00.527 ================= 00:32:00.527 00:32:00.527 apps: 00:32:00.527 00:32:00.527 00:32:00.527 Message: 00:32:00.527 ================= 00:32:00.527 Libraries Enabled 00:32:00.527 ================= 00:32:00.527 00:32:00.527 libs: 00:32:00.527 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:32:00.527 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:32:00.527 cryptodev, dmadev, power, reorder, security, vhost, 00:32:00.527 00:32:00.527 Message: 00:32:00.527 =============== 00:32:00.527 Drivers Enabled 00:32:00.527 =============== 00:32:00.527 00:32:00.527 common: 00:32:00.527 00:32:00.527 bus: 00:32:00.527 pci, vdev, 00:32:00.527 mempool: 00:32:00.527 ring, 00:32:00.527 dma: 00:32:00.527 00:32:00.527 net: 00:32:00.527 00:32:00.527 crypto: 00:32:00.527 00:32:00.527 compress: 00:32:00.527 00:32:00.527 vdpa: 00:32:00.527 00:32:00.527 00:32:00.527 Message: 00:32:00.527 ================= 00:32:00.527 Content Skipped 00:32:00.527 ================= 00:32:00.527 00:32:00.527 apps: 00:32:00.527 dumpcap: explicitly disabled via build config 00:32:00.527 graph: explicitly disabled via build config 00:32:00.527 pdump: explicitly disabled via build config 00:32:00.527 proc-info: explicitly disabled via build config 00:32:00.527 test-acl: explicitly disabled via build config 00:32:00.527 test-bbdev: explicitly disabled via build config 00:32:00.527 test-cmdline: explicitly disabled via build config 00:32:00.527 test-compress-perf: explicitly disabled via build config 00:32:00.527 test-crypto-perf: explicitly disabled via build config 00:32:00.527 test-dma-perf: explicitly disabled via build config 00:32:00.527 test-eventdev: explicitly disabled via build config 00:32:00.527 test-fib: explicitly disabled via build config 00:32:00.527 test-flow-perf: explicitly disabled via build config 00:32:00.527 test-gpudev: explicitly disabled via build config 00:32:00.527 test-mldev: explicitly disabled via build config 00:32:00.527 test-pipeline: explicitly disabled via build config 00:32:00.527 test-pmd: explicitly disabled via build config 00:32:00.527 test-regex: explicitly disabled via build config 00:32:00.527 test-sad: explicitly disabled via build config 00:32:00.527 test-security-perf: explicitly disabled via build config 00:32:00.527 00:32:00.527 libs: 00:32:00.527 metrics: explicitly disabled via build config 00:32:00.527 acl: explicitly disabled via build config 00:32:00.527 bbdev: explicitly disabled via build config 
00:32:00.527 bitratestats: explicitly disabled via build config 00:32:00.527 bpf: explicitly disabled via build config 00:32:00.527 cfgfile: explicitly disabled via build config 00:32:00.527 distributor: explicitly disabled via build config 00:32:00.527 efd: explicitly disabled via build config 00:32:00.527 eventdev: explicitly disabled via build config 00:32:00.527 dispatcher: explicitly disabled via build config 00:32:00.527 gpudev: explicitly disabled via build config 00:32:00.527 gro: explicitly disabled via build config 00:32:00.527 gso: explicitly disabled via build config 00:32:00.527 ip_frag: explicitly disabled via build config 00:32:00.527 jobstats: explicitly disabled via build config 00:32:00.527 latencystats: explicitly disabled via build config 00:32:00.527 lpm: explicitly disabled via build config 00:32:00.527 member: explicitly disabled via build config 00:32:00.527 pcapng: explicitly disabled via build config 00:32:00.527 rawdev: explicitly disabled via build config 00:32:00.527 regexdev: explicitly disabled via build config 00:32:00.527 mldev: explicitly disabled via build config 00:32:00.527 rib: explicitly disabled via build config 00:32:00.527 sched: explicitly disabled via build config 00:32:00.527 stack: explicitly disabled via build config 00:32:00.527 ipsec: explicitly disabled via build config 00:32:00.527 pdcp: explicitly disabled via build config 00:32:00.527 fib: explicitly disabled via build config 00:32:00.527 port: explicitly disabled via build config 00:32:00.527 pdump: explicitly disabled via build config 00:32:00.527 table: explicitly disabled via build config 00:32:00.527 pipeline: explicitly disabled via build config 00:32:00.527 graph: explicitly disabled via build config 00:32:00.527 node: explicitly disabled via build config 00:32:00.527 00:32:00.527 drivers: 00:32:00.527 common/cpt: not in enabled drivers build config 00:32:00.527 common/dpaax: not in enabled drivers build config 00:32:00.527 common/iavf: not in enabled drivers build config 00:32:00.527 common/idpf: not in enabled drivers build config 00:32:00.527 common/mvep: not in enabled drivers build config 00:32:00.527 common/octeontx: not in enabled drivers build config 00:32:00.527 bus/auxiliary: not in enabled drivers build config 00:32:00.527 bus/cdx: not in enabled drivers build config 00:32:00.527 bus/dpaa: not in enabled drivers build config 00:32:00.527 bus/fslmc: not in enabled drivers build config 00:32:00.527 bus/ifpga: not in enabled drivers build config 00:32:00.527 bus/platform: not in enabled drivers build config 00:32:00.527 bus/vmbus: not in enabled drivers build config 00:32:00.527 common/cnxk: not in enabled drivers build config 00:32:00.527 common/mlx5: not in enabled drivers build config 00:32:00.527 common/nfp: not in enabled drivers build config 00:32:00.528 common/qat: not in enabled drivers build config 00:32:00.528 common/sfc_efx: not in enabled drivers build config 00:32:00.528 mempool/bucket: not in enabled drivers build config 00:32:00.528 mempool/cnxk: not in enabled drivers build config 00:32:00.528 mempool/dpaa: not in enabled drivers build config 00:32:00.528 mempool/dpaa2: not in enabled drivers build config 00:32:00.528 mempool/octeontx: not in enabled drivers build config 00:32:00.528 mempool/stack: not in enabled drivers build config 00:32:00.528 dma/cnxk: not in enabled drivers build config 00:32:00.528 dma/dpaa: not in enabled drivers build config 00:32:00.528 dma/dpaa2: not in enabled drivers build config 00:32:00.528 dma/hisilicon: not in enabled 
drivers build config 00:32:00.528 dma/idxd: not in enabled drivers build config 00:32:00.528 dma/ioat: not in enabled drivers build config 00:32:00.528 dma/skeleton: not in enabled drivers build config 00:32:00.528 net/af_packet: not in enabled drivers build config 00:32:00.528 net/af_xdp: not in enabled drivers build config 00:32:00.528 net/ark: not in enabled drivers build config 00:32:00.528 net/atlantic: not in enabled drivers build config 00:32:00.528 net/avp: not in enabled drivers build config 00:32:00.528 net/axgbe: not in enabled drivers build config 00:32:00.528 net/bnx2x: not in enabled drivers build config 00:32:00.528 net/bnxt: not in enabled drivers build config 00:32:00.528 net/bonding: not in enabled drivers build config 00:32:00.528 net/cnxk: not in enabled drivers build config 00:32:00.528 net/cpfl: not in enabled drivers build config 00:32:00.528 net/cxgbe: not in enabled drivers build config 00:32:00.528 net/dpaa: not in enabled drivers build config 00:32:00.528 net/dpaa2: not in enabled drivers build config 00:32:00.528 net/e1000: not in enabled drivers build config 00:32:00.528 net/ena: not in enabled drivers build config 00:32:00.528 net/enetc: not in enabled drivers build config 00:32:00.528 net/enetfec: not in enabled drivers build config 00:32:00.528 net/enic: not in enabled drivers build config 00:32:00.528 net/failsafe: not in enabled drivers build config 00:32:00.528 net/fm10k: not in enabled drivers build config 00:32:00.528 net/gve: not in enabled drivers build config 00:32:00.528 net/hinic: not in enabled drivers build config 00:32:00.528 net/hns3: not in enabled drivers build config 00:32:00.528 net/i40e: not in enabled drivers build config 00:32:00.528 net/iavf: not in enabled drivers build config 00:32:00.528 net/ice: not in enabled drivers build config 00:32:00.528 net/idpf: not in enabled drivers build config 00:32:00.528 net/igc: not in enabled drivers build config 00:32:00.528 net/ionic: not in enabled drivers build config 00:32:00.528 net/ipn3ke: not in enabled drivers build config 00:32:00.528 net/ixgbe: not in enabled drivers build config 00:32:00.528 net/mana: not in enabled drivers build config 00:32:00.528 net/memif: not in enabled drivers build config 00:32:00.528 net/mlx4: not in enabled drivers build config 00:32:00.528 net/mlx5: not in enabled drivers build config 00:32:00.528 net/mvneta: not in enabled drivers build config 00:32:00.528 net/mvpp2: not in enabled drivers build config 00:32:00.528 net/netvsc: not in enabled drivers build config 00:32:00.528 net/nfb: not in enabled drivers build config 00:32:00.528 net/nfp: not in enabled drivers build config 00:32:00.528 net/ngbe: not in enabled drivers build config 00:32:00.528 net/null: not in enabled drivers build config 00:32:00.528 net/octeontx: not in enabled drivers build config 00:32:00.528 net/octeon_ep: not in enabled drivers build config 00:32:00.528 net/pcap: not in enabled drivers build config 00:32:00.528 net/pfe: not in enabled drivers build config 00:32:00.528 net/qede: not in enabled drivers build config 00:32:00.528 net/ring: not in enabled drivers build config 00:32:00.528 net/sfc: not in enabled drivers build config 00:32:00.528 net/softnic: not in enabled drivers build config 00:32:00.528 net/tap: not in enabled drivers build config 00:32:00.528 net/thunderx: not in enabled drivers build config 00:32:00.528 net/txgbe: not in enabled drivers build config 00:32:00.528 net/vdev_netvsc: not in enabled drivers build config 00:32:00.528 net/vhost: not in enabled drivers build 
config 00:32:00.528 net/virtio: not in enabled drivers build config 00:32:00.528 net/vmxnet3: not in enabled drivers build config 00:32:00.528 raw/*: missing internal dependency, "rawdev" 00:32:00.528 crypto/armv8: not in enabled drivers build config 00:32:00.528 crypto/bcmfs: not in enabled drivers build config 00:32:00.528 crypto/caam_jr: not in enabled drivers build config 00:32:00.528 crypto/ccp: not in enabled drivers build config 00:32:00.528 crypto/cnxk: not in enabled drivers build config 00:32:00.528 crypto/dpaa_sec: not in enabled drivers build config 00:32:00.528 crypto/dpaa2_sec: not in enabled drivers build config 00:32:00.528 crypto/ipsec_mb: not in enabled drivers build config 00:32:00.528 crypto/mlx5: not in enabled drivers build config 00:32:00.528 crypto/mvsam: not in enabled drivers build config 00:32:00.528 crypto/nitrox: not in enabled drivers build config 00:32:00.528 crypto/null: not in enabled drivers build config 00:32:00.528 crypto/octeontx: not in enabled drivers build config 00:32:00.528 crypto/openssl: not in enabled drivers build config 00:32:00.528 crypto/scheduler: not in enabled drivers build config 00:32:00.528 crypto/uadk: not in enabled drivers build config 00:32:00.528 crypto/virtio: not in enabled drivers build config 00:32:00.528 compress/isal: not in enabled drivers build config 00:32:00.528 compress/mlx5: not in enabled drivers build config 00:32:00.528 compress/octeontx: not in enabled drivers build config 00:32:00.528 compress/zlib: not in enabled drivers build config 00:32:00.528 regex/*: missing internal dependency, "regexdev" 00:32:00.528 ml/*: missing internal dependency, "mldev" 00:32:00.528 vdpa/ifc: not in enabled drivers build config 00:32:00.528 vdpa/mlx5: not in enabled drivers build config 00:32:00.528 vdpa/nfp: not in enabled drivers build config 00:32:00.528 vdpa/sfc: not in enabled drivers build config 00:32:00.528 event/*: missing internal dependency, "eventdev" 00:32:00.528 baseband/*: missing internal dependency, "bbdev" 00:32:00.528 gpu/*: missing internal dependency, "gpudev" 00:32:00.528 00:32:00.528 00:32:00.528 Build targets in project: 85 00:32:00.528 00:32:00.528 DPDK 23.11.0 00:32:00.528 00:32:00.528 User defined options 00:32:00.528 default_library : static 00:32:00.528 libdir : lib 00:32:00.528 prefix : /home/vagrant/spdk_repo/spdk/dpdk/build 00:32:00.528 b_lto : true 00:32:00.528 b_sanitize : address 00:32:00.528 c_args : -fPIC -Werror -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds 00:32:00.528 c_link_args : 00:32:00.528 cpu_instruction_set: native 00:32:00.528 disable_apps : test-pipeline,test-pmd,test-eventdev,test,test-cmdline,test-bbdev,test-sad,proc-info,graph,test-gpudev,test-crypto-perf,test-dma-perf,test-regex,test-mldev,test-acl,test-flow-perf,dumpcap,test-compress-perf,test-security-perf,test-fib,pdump 00:32:00.528 disable_libs : mldev,jobstats,bpf,rawdev,rib,stack,bbdev,lpm,pipeline,member,port,regexdev,latencystats,table,bitratestats,acl,sched,node,graph,gso,dispatcher,efd,eventdev,pdcp,fib,pcapng,cfgfile,metrics,ip_frag,gro,pdump,gpudev,distributor,ipsec 00:32:00.528 enable_docs : false 00:32:00.528 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:32:00.528 enable_kmods : false 00:32:00.528 tests : false 00:32:00.528 00:32:00.528 Found ninja-1.11.1.git.kitware.jobserver-1 at /var/spdk/dependencies/pip/bin/ninja 00:32:01.096 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/dpdk/build-tmp' 00:32:01.354 [1/265] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:32:01.354 [2/265] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:32:01.354 [3/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:32:01.354 [4/265] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:32:01.354 [5/265] Linking static target lib/librte_kvargs.a 00:32:01.354 [6/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:32:01.354 [7/265] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:32:01.612 [8/265] Compiling C object lib/librte_log.a.p/log_log.c.o 00:32:01.613 [9/265] Linking static target lib/librte_log.a 00:32:01.613 [10/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:32:01.613 [11/265] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:32:01.871 [12/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:32:02.130 [13/265] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:32:02.130 [14/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:32:02.388 [15/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:32:02.388 [16/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:32:02.388 [17/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:32:02.646 [18/265] Linking target lib/librte_log.so.24.0 00:32:02.646 [19/265] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:32:02.646 [20/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:32:02.646 [21/265] Generating symbol file lib/librte_log.so.24.0.p/librte_log.so.24.0.symbols 00:32:02.646 [22/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:32:02.904 [23/265] Linking target lib/librte_kvargs.so.24.0 00:32:02.904 [24/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:32:02.904 [25/265] Generating symbol file lib/librte_kvargs.so.24.0.p/librte_kvargs.so.24.0.symbols 00:32:03.163 [26/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:32:03.163 [27/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:32:03.163 [28/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:32:03.163 [29/265] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:32:03.163 [30/265] Linking static target lib/librte_telemetry.a 00:32:03.421 [31/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:32:03.421 [32/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:32:03.421 [33/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:32:03.679 [34/265] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:32:03.679 [35/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:32:03.679 [36/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:32:03.679 [37/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:32:03.937 [38/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:32:03.937 [39/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:32:03.937 [40/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 
00:32:03.937 [41/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:32:03.937 [42/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:32:04.195 [43/265] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:32:04.195 [44/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:32:04.763 [45/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:32:04.763 [46/265] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:32:04.763 [47/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:32:05.020 [48/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:32:05.020 [49/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:32:05.020 [50/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:32:05.020 [51/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:32:05.277 [52/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:32:05.277 [53/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:32:05.277 [54/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:32:05.277 [55/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:32:05.277 [56/265] Linking target lib/librte_telemetry.so.24.0 00:32:05.535 [57/265] Generating symbol file lib/librte_telemetry.so.24.0.p/librte_telemetry.so.24.0.symbols 00:32:05.535 [58/265] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:32:05.535 [59/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:32:05.535 [60/265] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:32:05.535 [61/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:32:05.535 [62/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:32:05.793 [63/265] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:32:05.793 [64/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:32:06.051 [65/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:32:06.051 [66/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:32:06.051 [67/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:32:06.051 [68/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:32:06.310 [69/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:32:06.310 [70/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:32:06.567 [71/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:32:06.567 [72/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:32:06.567 [73/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:32:06.567 [74/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:32:06.567 [75/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:32:06.568 [76/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:32:06.568 [77/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:32:06.824 [78/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:32:07.080 [79/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:32:07.338 [80/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:32:07.338 [81/265] Compiling C object 
lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:32:07.338 [82/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:32:07.338 [83/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:32:07.338 [84/265] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:32:07.339 [85/265] Linking static target lib/librte_ring.a 00:32:07.597 [86/265] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:32:07.867 [87/265] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:32:07.867 [88/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:32:08.189 [89/265] Linking static target lib/librte_eal.a 00:32:08.189 [90/265] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:32:08.189 [91/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:32:08.189 [92/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:32:08.754 [93/265] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:32:08.754 [94/265] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:32:08.754 [95/265] Linking static target lib/librte_mempool.a 00:32:09.011 [96/265] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:32:09.011 [97/265] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:32:09.011 [98/265] Linking static target lib/net/libnet_crc_avx512_lib.a 00:32:09.011 [99/265] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:32:09.270 [100/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:32:09.270 [101/265] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:32:09.270 [102/265] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:32:09.528 [103/265] Linking static target lib/librte_rcu.a 00:32:09.528 [104/265] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:32:09.786 [105/265] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:32:09.786 [106/265] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:32:09.786 [107/265] Linking static target lib/librte_meter.a 00:32:10.044 [108/265] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:32:10.302 [109/265] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:32:10.302 [110/265] Linking static target lib/librte_net.a 00:32:10.302 [111/265] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:32:10.302 [112/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:32:10.560 [113/265] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:32:10.560 [114/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:32:10.560 [115/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:32:10.818 [116/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:32:11.077 [117/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:32:11.077 [118/265] Linking static target lib/librte_mbuf.a 00:32:11.334 [119/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:32:11.593 [120/265] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:32:11.593 [121/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:32:11.851 [122/265] Compiling C object 
lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:32:12.109 [123/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:32:12.109 [124/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:32:12.109 [125/265] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:32:12.109 [126/265] Linking static target lib/librte_pci.a 00:32:12.366 [127/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:32:12.366 [128/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:32:12.366 [129/265] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:32:12.623 [130/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:32:12.623 [131/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:32:12.623 [132/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:32:12.879 [133/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:32:12.879 [134/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:32:12.879 [135/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:32:12.879 [136/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:32:12.879 [137/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:32:12.879 [138/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:32:12.879 [139/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:32:12.879 [140/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:32:13.136 [141/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:32:13.136 [142/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:32:13.393 [143/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:32:13.393 [144/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:32:13.393 [145/265] Linking static target lib/librte_cmdline.a 00:32:13.649 [146/265] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:32:14.214 [147/265] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:32:14.214 [148/265] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:32:14.214 [149/265] Linking static target lib/librte_timer.a 00:32:14.214 [150/265] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:32:14.471 [151/265] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:32:14.471 [152/265] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:32:14.728 [153/265] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:32:14.728 [154/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:32:14.984 [155/265] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:32:14.984 [156/265] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:32:14.984 [157/265] Linking static target lib/librte_compressdev.a 00:32:14.984 [158/265] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:32:15.241 [159/265] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:32:15.498 [160/265] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:32:15.498 [161/265] Generating lib/eal.sym_chk with a custom command (wrapped by meson 
to capture output) 00:32:15.498 [162/265] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:32:15.498 [163/265] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:32:15.498 [164/265] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:32:16.064 [165/265] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:32:16.064 [166/265] Linking static target lib/librte_dmadev.a 00:32:16.064 [167/265] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:32:16.631 [168/265] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:32:17.198 [169/265] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:32:17.198 [170/265] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:32:17.198 [171/265] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:32:17.456 [172/265] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:32:17.714 [173/265] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:32:18.337 [174/265] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:32:18.595 [175/265] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:32:18.595 [176/265] Linking static target lib/librte_reorder.a 00:32:19.162 [177/265] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:32:19.162 [178/265] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:32:19.420 [179/265] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:32:19.420 [180/265] Linking static target lib/librte_security.a 00:32:19.679 [181/265] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:32:19.679 [182/265] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:32:19.937 [183/265] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:32:20.196 [184/265] Linking static target lib/librte_power.a 00:32:20.454 [185/265] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:32:21.019 [186/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:32:21.019 [187/265] Linking static target lib/librte_ethdev.a 00:32:21.019 [188/265] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:32:21.278 [189/265] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:32:21.278 [190/265] Linking target lib/librte_eal.so.24.0 00:32:21.535 [191/265] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:32:21.535 [192/265] Linking static target lib/librte_cryptodev.a 00:32:21.535 [193/265] Generating symbol file lib/librte_eal.so.24.0.p/librte_eal.so.24.0.symbols 00:32:21.535 [194/265] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:32:21.794 [195/265] Linking target lib/librte_ring.so.24.0 00:32:22.052 [196/265] Linking target lib/librte_meter.so.24.0 00:32:22.052 [197/265] Generating symbol file lib/librte_ring.so.24.0.p/librte_ring.so.24.0.symbols 00:32:22.310 [198/265] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:32:22.310 [199/265] Generating symbol file lib/librte_meter.so.24.0.p/librte_meter.so.24.0.symbols 00:32:22.310 [200/265] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:32:22.310 [201/265] Linking static target lib/librte_hash.a 00:32:22.569 [202/265] Linking target lib/librte_pci.so.24.0 00:32:22.569 [203/265] Generating 
symbol file lib/librte_pci.so.24.0.p/librte_pci.so.24.0.symbols 00:32:23.134 [204/265] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:32:23.393 [205/265] Linking target lib/librte_timer.so.24.0 00:32:23.393 [206/265] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:32:23.393 [207/265] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:32:23.393 [208/265] Generating symbol file lib/librte_timer.so.24.0.p/librte_timer.so.24.0.symbols 00:32:23.652 [209/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:32:23.652 [210/265] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:32:23.652 [211/265] Linking target lib/librte_mempool.so.24.0 00:32:23.910 [212/265] Generating symbol file lib/librte_mempool.so.24.0.p/librte_mempool.so.24.0.symbols 00:32:23.910 [213/265] Linking target lib/librte_dmadev.so.24.0 00:32:24.168 [214/265] Generating symbol file lib/librte_dmadev.so.24.0.p/librte_dmadev.so.24.0.symbols 00:32:24.168 [215/265] Linking target lib/librte_rcu.so.24.0 00:32:24.426 [216/265] Generating symbol file lib/librte_rcu.so.24.0.p/librte_rcu.so.24.0.symbols 00:32:24.426 [217/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:32:24.426 [218/265] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:32:24.426 [219/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:32:24.685 [220/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:32:25.251 [221/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:32:25.507 [222/265] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:32:25.508 [223/265] Linking static target drivers/libtmp_rte_bus_vdev.a 00:32:25.508 [224/265] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:32:25.765 [225/265] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:32:25.765 [226/265] Linking static target drivers/librte_bus_vdev.a 00:32:25.765 [227/265] Compiling C object drivers/librte_bus_vdev.so.24.0.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:32:25.765 [228/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:32:25.765 [229/265] Linking static target drivers/libtmp_rte_bus_pci.a 00:32:26.024 [230/265] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:32:26.024 [231/265] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:32:26.024 [232/265] Compiling C object drivers/librte_bus_pci.so.24.0.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:32:26.024 [233/265] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:32:26.282 [234/265] Linking static target drivers/librte_bus_pci.a 00:32:26.282 [235/265] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:32:26.282 [236/265] Linking static target drivers/libtmp_rte_mempool_ring.a 00:32:26.540 [237/265] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:32:26.540 [238/265] Compiling C object drivers/librte_mempool_ring.so.24.0.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:32:26.540 [239/265] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:32:26.540 [240/265] Linking target drivers/librte_bus_vdev.so.24.0 00:32:26.540 [241/265] Linking static target 
drivers/librte_mempool_ring.a 00:32:26.540 [242/265] Linking target lib/librte_mbuf.so.24.0 00:32:26.798 [243/265] Generating symbol file lib/librte_mbuf.so.24.0.p/librte_mbuf.so.24.0.symbols 00:32:26.798 [244/265] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:32:27.365 [245/265] Linking target drivers/librte_mempool_ring.so.24.0 00:32:27.365 [246/265] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:32:27.365 [247/265] Linking target lib/librte_reorder.so.24.0 00:32:27.365 [248/265] Linking target lib/librte_compressdev.so.24.0 00:32:28.039 [249/265] Linking target lib/librte_net.so.24.0 00:32:28.039 [250/265] Generating symbol file lib/librte_net.so.24.0.p/librte_net.so.24.0.symbols 00:32:28.973 [251/265] Linking target drivers/librte_bus_pci.so.24.0 00:32:29.539 [252/265] Linking target lib/librte_cmdline.so.24.0 00:32:30.471 [253/265] Linking target lib/librte_cryptodev.so.24.0 00:32:30.471 [254/265] Generating symbol file lib/librte_cryptodev.so.24.0.p/librte_cryptodev.so.24.0.symbols 00:32:31.038 [255/265] Linking target lib/librte_security.so.24.0 00:32:33.564 [256/265] Linking target lib/librte_ethdev.so.24.0 00:32:33.564 [257/265] Generating symbol file lib/librte_ethdev.so.24.0.p/librte_ethdev.so.24.0.symbols 00:32:34.497 [258/265] Linking target lib/librte_hash.so.24.0 00:32:34.755 [259/265] Generating symbol file lib/librte_hash.so.24.0.p/librte_hash.so.24.0.symbols 00:32:36.669 [260/265] Linking target lib/librte_power.so.24.0 00:32:36.928 [261/265] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:33:15.639 [262/265] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:33:15.639 [263/265] Linking static target lib/librte_vhost.a 00:33:15.639 [264/265] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:33:27.899 [265/265] Linking target lib/librte_vhost.so.24.0 00:33:27.899 INFO: autodetecting backend as ninja 00:33:27.899 INFO: calculating backend command to run: /var/spdk/dependencies/pip/bin/ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10 00:33:29.274 CC lib/ut/ut.o 00:33:29.274 CC lib/log/log_flags.o 00:33:29.274 CC lib/log/log.o 00:33:29.274 CC lib/log/log_deprecated.o 00:33:29.274 CC lib/ut_mock/mock.o 00:33:29.274 LIB libspdk_ut_mock.a 00:33:29.274 LIB libspdk_log.a 00:33:29.274 LIB libspdk_ut.a 00:33:29.274 CC lib/util/base64.o 00:33:29.274 CC lib/util/bit_array.o 00:33:29.274 CC lib/ioat/ioat.o 00:33:29.274 CC lib/util/cpuset.o 00:33:29.274 CC lib/util/crc16.o 00:33:29.274 CC lib/util/crc32.o 00:33:29.274 CC lib/util/crc32c.o 00:33:29.274 CXX lib/trace_parser/trace.o 00:33:29.274 CC lib/dma/dma.o 00:33:29.274 CC lib/vfio_user/host/vfio_user_pci.o 00:33:29.533 CC lib/util/crc32_ieee.o 00:33:29.533 CC lib/util/crc64.o 00:33:29.533 CC lib/util/dif.o 00:33:29.533 CC lib/util/fd.o 00:33:29.533 CC lib/util/file.o 00:33:29.533 LIB libspdk_dma.a 00:33:29.533 CC lib/vfio_user/host/vfio_user.o 00:33:29.533 CC lib/util/hexlify.o 00:33:29.533 LIB libspdk_ioat.a 00:33:29.533 CC lib/util/iov.o 00:33:29.533 CC lib/util/math.o 00:33:29.533 CC lib/util/pipe.o 00:33:29.533 CC lib/util/strerror_tls.o 00:33:29.533 CC lib/util/string.o 00:33:29.533 CC lib/util/uuid.o 00:33:29.792 CC lib/util/fd_group.o 00:33:29.792 LIB libspdk_vfio_user.a 00:33:29.792 CC lib/util/xor.o 00:33:29.792 CC lib/util/zipf.o 00:33:30.051 LIB libspdk_util.a 00:33:30.051 CC lib/vmd/led.o 00:33:30.051 CC lib/vmd/vmd.o 00:33:30.051 CC 
lib/env_dpdk/env.o 00:33:30.051 CC lib/env_dpdk/memory.o 00:33:30.051 CC lib/env_dpdk/pci.o 00:33:30.051 CC lib/conf/conf.o 00:33:30.051 CC lib/json/json_parse.o 00:33:30.051 CC lib/idxd/idxd.o 00:33:30.051 CC lib/rdma/common.o 00:33:30.051 LIB libspdk_trace_parser.a 00:33:30.051 CC lib/idxd/idxd_user.o 00:33:30.309 CC lib/idxd/idxd_kernel.o 00:33:30.309 CC lib/json/json_util.o 00:33:30.309 CC lib/json/json_write.o 00:33:30.309 CC lib/rdma/rdma_verbs.o 00:33:30.309 LIB libspdk_conf.a 00:33:30.309 CC lib/env_dpdk/init.o 00:33:30.309 CC lib/env_dpdk/threads.o 00:33:30.309 CC lib/env_dpdk/pci_ioat.o 00:33:30.568 CC lib/env_dpdk/pci_virtio.o 00:33:30.568 LIB libspdk_vmd.a 00:33:30.568 CC lib/env_dpdk/pci_vmd.o 00:33:30.568 CC lib/env_dpdk/pci_idxd.o 00:33:30.568 LIB libspdk_idxd.a 00:33:30.568 CC lib/env_dpdk/pci_event.o 00:33:30.568 LIB libspdk_json.a 00:33:30.568 CC lib/env_dpdk/sigbus_handler.o 00:33:30.568 CC lib/env_dpdk/pci_dpdk.o 00:33:30.568 LIB libspdk_rdma.a 00:33:30.568 CC lib/env_dpdk/pci_dpdk_2207.o 00:33:30.568 CC lib/env_dpdk/pci_dpdk_2211.o 00:33:30.827 CC lib/jsonrpc/jsonrpc_server.o 00:33:30.827 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:33:30.827 CC lib/jsonrpc/jsonrpc_client.o 00:33:30.827 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:33:30.827 LIB libspdk_jsonrpc.a 00:33:31.094 CC lib/rpc/rpc.o 00:33:31.352 LIB libspdk_rpc.a 00:33:31.352 CC lib/sock/sock.o 00:33:31.352 CC lib/notify/notify.o 00:33:31.352 CC lib/sock/sock_rpc.o 00:33:31.352 CC lib/notify/notify_rpc.o 00:33:31.352 CC lib/trace/trace.o 00:33:31.352 CC lib/trace/trace_rpc.o 00:33:31.352 CC lib/trace/trace_flags.o 00:33:31.610 LIB libspdk_notify.a 00:33:31.610 LIB libspdk_trace.a 00:33:31.610 LIB libspdk_env_dpdk.a 00:33:31.868 CC lib/thread/thread.o 00:33:31.868 CC lib/thread/iobuf.o 00:33:31.868 LIB libspdk_sock.a 00:33:31.868 CC lib/nvme/nvme_ctrlr_cmd.o 00:33:31.868 CC lib/nvme/nvme_ctrlr.o 00:33:31.868 CC lib/nvme/nvme_ns.o 00:33:31.868 CC lib/nvme/nvme_qpair.o 00:33:31.868 CC lib/nvme/nvme_fabric.o 00:33:31.868 CC lib/nvme/nvme_ns_cmd.o 00:33:31.868 CC lib/nvme/nvme_pcie_common.o 00:33:31.868 CC lib/nvme/nvme_pcie.o 00:33:32.125 CC lib/nvme/nvme.o 00:33:32.687 LIB libspdk_thread.a 00:33:32.687 CC lib/nvme/nvme_quirks.o 00:33:32.944 CC lib/nvme/nvme_transport.o 00:33:32.944 CC lib/accel/accel.o 00:33:32.944 CC lib/blob/blobstore.o 00:33:32.944 CC lib/blob/request.o 00:33:33.200 CC lib/blob/zeroes.o 00:33:33.200 CC lib/nvme/nvme_discovery.o 00:33:33.200 CC lib/blob/blob_bs_dev.o 00:33:33.200 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:33:33.200 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:33:33.200 CC lib/init/json_config.o 00:33:33.456 CC lib/init/subsystem.o 00:33:33.456 CC lib/accel/accel_rpc.o 00:33:33.456 CC lib/accel/accel_sw.o 00:33:33.713 CC lib/nvme/nvme_tcp.o 00:33:33.713 CC lib/init/subsystem_rpc.o 00:33:33.713 CC lib/init/rpc.o 00:33:33.713 CC lib/nvme/nvme_opal.o 00:33:33.713 LIB libspdk_accel.a 00:33:33.713 LIB libspdk_init.a 00:33:33.713 CC lib/nvme/nvme_io_msg.o 00:33:33.713 CC lib/virtio/virtio.o 00:33:33.970 CC lib/event/app.o 00:33:33.970 CC lib/virtio/virtio_vhost_user.o 00:33:33.970 CC lib/bdev/bdev.o 00:33:33.970 CC lib/bdev/bdev_rpc.o 00:33:33.970 CC lib/virtio/virtio_vfio_user.o 00:33:34.227 CC lib/nvme/nvme_poll_group.o 00:33:34.227 CC lib/event/reactor.o 00:33:34.227 CC lib/virtio/virtio_pci.o 00:33:34.227 CC lib/bdev/bdev_zone.o 00:33:34.227 CC lib/bdev/part.o 00:33:34.484 CC lib/bdev/scsi_nvme.o 00:33:34.484 LIB libspdk_virtio.a 00:33:34.484 CC lib/event/log_rpc.o 00:33:34.484 CC lib/event/app_rpc.o 
00:33:34.484 CC lib/nvme/nvme_zns.o 00:33:34.484 CC lib/event/scheduler_static.o 00:33:34.749 CC lib/nvme/nvme_cuse.o 00:33:34.749 CC lib/nvme/nvme_vfio_user.o 00:33:34.749 CC lib/nvme/nvme_rdma.o 00:33:34.749 LIB libspdk_event.a 00:33:34.749 LIB libspdk_blob.a 00:33:35.007 CC lib/blobfs/tree.o 00:33:35.007 CC lib/blobfs/blobfs.o 00:33:35.007 CC lib/lvol/lvol.o 00:33:35.571 LIB libspdk_blobfs.a 00:33:35.571 LIB libspdk_lvol.a 00:33:35.829 LIB libspdk_bdev.a 00:33:35.829 CC lib/scsi/dev.o 00:33:35.829 CC lib/scsi/scsi.o 00:33:35.829 CC lib/scsi/port.o 00:33:35.829 CC lib/scsi/lun.o 00:33:35.829 CC lib/scsi/scsi_pr.o 00:33:35.829 CC lib/nbd/nbd.o 00:33:35.829 CC lib/scsi/scsi_bdev.o 00:33:35.829 CC lib/ftl/ftl_core.o 00:33:35.829 CC lib/ublk/ublk.o 00:33:36.087 LIB libspdk_nvme.a 00:33:36.087 CC lib/scsi/scsi_rpc.o 00:33:36.087 CC lib/scsi/task.o 00:33:36.087 CC lib/nbd/nbd_rpc.o 00:33:36.087 CC lib/ublk/ublk_rpc.o 00:33:36.087 CC lib/ftl/ftl_init.o 00:33:36.087 CC lib/ftl/ftl_layout.o 00:33:36.087 CC lib/ftl/ftl_debug.o 00:33:36.345 CC lib/ftl/ftl_io.o 00:33:36.345 CC lib/nvmf/ctrlr.o 00:33:36.345 CC lib/ftl/ftl_sb.o 00:33:36.345 LIB libspdk_nbd.a 00:33:36.345 CC lib/nvmf/ctrlr_discovery.o 00:33:36.345 LIB libspdk_ublk.a 00:33:36.345 CC lib/ftl/ftl_l2p.o 00:33:36.345 CC lib/ftl/ftl_l2p_flat.o 00:33:36.345 LIB libspdk_scsi.a 00:33:36.345 CC lib/ftl/ftl_nv_cache.o 00:33:36.345 CC lib/ftl/ftl_band.o 00:33:36.345 CC lib/ftl/ftl_band_ops.o 00:33:36.602 CC lib/nvmf/ctrlr_bdev.o 00:33:36.602 CC lib/ftl/ftl_writer.o 00:33:36.602 CC lib/ftl/ftl_rq.o 00:33:36.602 CC lib/ftl/ftl_reloc.o 00:33:36.602 CC lib/ftl/ftl_l2p_cache.o 00:33:36.602 CC lib/iscsi/conn.o 00:33:36.602 CC lib/iscsi/init_grp.o 00:33:36.602 CC lib/iscsi/iscsi.o 00:33:36.859 CC lib/ftl/ftl_p2l.o 00:33:36.859 CC lib/ftl/mngt/ftl_mngt.o 00:33:36.859 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:33:36.859 CC lib/vhost/vhost.o 00:33:36.859 CC lib/vhost/vhost_rpc.o 00:33:36.859 CC lib/vhost/vhost_scsi.o 00:33:37.117 CC lib/iscsi/md5.o 00:33:37.117 CC lib/iscsi/param.o 00:33:37.117 CC lib/iscsi/portal_grp.o 00:33:37.117 CC lib/iscsi/tgt_node.o 00:33:37.117 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:33:37.117 CC lib/iscsi/iscsi_subsystem.o 00:33:37.376 CC lib/ftl/mngt/ftl_mngt_startup.o 00:33:37.376 CC lib/vhost/vhost_blk.o 00:33:37.376 CC lib/nvmf/subsystem.o 00:33:37.376 CC lib/iscsi/iscsi_rpc.o 00:33:37.376 CC lib/ftl/mngt/ftl_mngt_md.o 00:33:37.633 CC lib/iscsi/task.o 00:33:37.634 CC lib/ftl/mngt/ftl_mngt_misc.o 00:33:37.634 CC lib/vhost/rte_vhost_user.o 00:33:37.892 CC lib/nvmf/nvmf.o 00:33:37.892 CC lib/nvmf/nvmf_rpc.o 00:33:37.892 CC lib/nvmf/transport.o 00:33:37.892 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:33:37.892 LIB libspdk_iscsi.a 00:33:37.892 CC lib/nvmf/tcp.o 00:33:37.892 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:33:37.892 CC lib/ftl/mngt/ftl_mngt_band.o 00:33:37.892 CC lib/nvmf/rdma.o 00:33:38.151 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:33:38.151 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:33:38.151 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:33:38.151 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:33:38.151 CC lib/ftl/utils/ftl_conf.o 00:33:38.151 CC lib/ftl/utils/ftl_md.o 00:33:38.409 CC lib/ftl/utils/ftl_mempool.o 00:33:38.409 CC lib/ftl/utils/ftl_bitmap.o 00:33:38.409 CC lib/ftl/utils/ftl_property.o 00:33:38.409 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:33:38.409 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:33:38.409 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:33:38.409 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:33:38.409 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:33:38.667 CC 
lib/ftl/upgrade/ftl_chunk_upgrade.o 00:33:38.667 CC lib/ftl/upgrade/ftl_sb_v3.o 00:33:38.667 CC lib/ftl/upgrade/ftl_sb_v5.o 00:33:38.667 CC lib/ftl/nvc/ftl_nvc_dev.o 00:33:38.667 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:33:38.667 CC lib/ftl/base/ftl_base_dev.o 00:33:38.667 CC lib/ftl/base/ftl_base_bdev.o 00:33:38.925 LIB libspdk_ftl.a 00:33:38.925 LIB libspdk_vhost.a 00:33:39.183 LIB libspdk_nvmf.a 00:33:39.451 CC module/env_dpdk/env_dpdk_rpc.o 00:33:39.451 CC module/scheduler/dynamic/scheduler_dynamic.o 00:33:39.451 CC module/accel/ioat/accel_ioat.o 00:33:39.451 CC module/sock/posix/posix.o 00:33:39.451 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:33:39.451 CC module/accel/dsa/accel_dsa.o 00:33:39.451 CC module/accel/error/accel_error.o 00:33:39.451 CC module/scheduler/gscheduler/gscheduler.o 00:33:39.451 CC module/blob/bdev/blob_bdev.o 00:33:39.451 CC module/accel/iaa/accel_iaa.o 00:33:39.451 LIB libspdk_env_dpdk_rpc.a 00:33:39.451 CC module/accel/error/accel_error_rpc.o 00:33:39.451 LIB libspdk_scheduler_dpdk_governor.a 00:33:39.451 LIB libspdk_scheduler_gscheduler.a 00:33:39.451 CC module/accel/iaa/accel_iaa_rpc.o 00:33:39.451 CC module/accel/dsa/accel_dsa_rpc.o 00:33:39.451 CC module/accel/ioat/accel_ioat_rpc.o 00:33:39.451 LIB libspdk_scheduler_dynamic.a 00:33:39.722 LIB libspdk_blob_bdev.a 00:33:39.722 LIB libspdk_accel_error.a 00:33:39.722 LIB libspdk_accel_iaa.a 00:33:39.722 LIB libspdk_accel_ioat.a 00:33:39.722 LIB libspdk_accel_dsa.a 00:33:39.722 CC module/bdev/lvol/vbdev_lvol.o 00:33:39.722 CC module/blobfs/bdev/blobfs_bdev.o 00:33:39.722 CC module/bdev/delay/vbdev_delay.o 00:33:39.722 CC module/bdev/error/vbdev_error.o 00:33:39.722 CC module/bdev/malloc/bdev_malloc.o 00:33:39.722 CC module/bdev/gpt/gpt.o 00:33:39.723 CC module/bdev/null/bdev_null.o 00:33:39.723 CC module/bdev/passthru/vbdev_passthru.o 00:33:39.723 CC module/bdev/nvme/bdev_nvme.o 00:33:39.982 LIB libspdk_sock_posix.a 00:33:39.982 CC module/bdev/error/vbdev_error_rpc.o 00:33:39.982 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:33:39.982 CC module/bdev/gpt/vbdev_gpt.o 00:33:39.982 CC module/bdev/delay/vbdev_delay_rpc.o 00:33:39.982 CC module/bdev/null/bdev_null_rpc.o 00:33:39.982 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:33:39.982 CC module/bdev/malloc/bdev_malloc_rpc.o 00:33:39.982 CC module/bdev/nvme/bdev_nvme_rpc.o 00:33:39.982 LIB libspdk_bdev_error.a 00:33:40.241 LIB libspdk_blobfs_bdev.a 00:33:40.241 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:33:40.241 LIB libspdk_bdev_gpt.a 00:33:40.241 LIB libspdk_bdev_delay.a 00:33:40.241 CC module/bdev/raid/bdev_raid.o 00:33:40.241 LIB libspdk_bdev_null.a 00:33:40.241 LIB libspdk_bdev_passthru.a 00:33:40.241 CC module/bdev/nvme/nvme_rpc.o 00:33:40.241 CC module/bdev/nvme/bdev_mdns_client.o 00:33:40.241 CC module/bdev/split/vbdev_split.o 00:33:40.241 LIB libspdk_bdev_malloc.a 00:33:40.241 CC module/bdev/split/vbdev_split_rpc.o 00:33:40.241 CC module/bdev/raid/bdev_raid_rpc.o 00:33:40.241 CC module/bdev/zone_block/vbdev_zone_block.o 00:33:40.500 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:33:40.500 CC module/bdev/raid/bdev_raid_sb.o 00:33:40.500 LIB libspdk_bdev_split.a 00:33:40.500 LIB libspdk_bdev_lvol.a 00:33:40.500 CC module/bdev/nvme/vbdev_opal.o 00:33:40.500 CC module/bdev/nvme/vbdev_opal_rpc.o 00:33:40.500 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:33:40.500 CC module/bdev/raid/raid0.o 00:33:40.500 CC module/bdev/raid/raid1.o 00:33:40.500 LIB libspdk_bdev_zone_block.a 00:33:40.758 CC module/bdev/raid/concat.o 00:33:40.758 CC module/bdev/aio/bdev_aio.o 
00:33:40.758 CC module/bdev/aio/bdev_aio_rpc.o 00:33:40.758 CC module/bdev/ftl/bdev_ftl.o 00:33:40.758 CC module/bdev/raid/raid5f.o 00:33:40.758 CC module/bdev/iscsi/bdev_iscsi.o 00:33:40.758 CC module/bdev/virtio/bdev_virtio_scsi.o 00:33:40.758 CC module/bdev/virtio/bdev_virtio_blk.o 00:33:40.758 CC module/bdev/virtio/bdev_virtio_rpc.o 00:33:40.758 CC module/bdev/ftl/bdev_ftl_rpc.o 00:33:40.758 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:33:41.018 LIB libspdk_bdev_aio.a 00:33:41.018 LIB libspdk_bdev_iscsi.a 00:33:41.018 LIB libspdk_bdev_ftl.a 00:33:41.018 LIB libspdk_bdev_raid.a 00:33:41.275 LIB libspdk_bdev_virtio.a 00:33:41.275 LIB libspdk_bdev_nvme.a 00:33:41.533 CC module/event/subsystems/sock/sock.o 00:33:41.533 CC module/event/subsystems/iobuf/iobuf.o 00:33:41.533 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:33:41.533 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:33:41.533 CC module/event/subsystems/scheduler/scheduler.o 00:33:41.533 CC module/event/subsystems/vmd/vmd.o 00:33:41.533 CC module/event/subsystems/vmd/vmd_rpc.o 00:33:41.791 LIB libspdk_event_sock.a 00:33:41.791 LIB libspdk_event_vhost_blk.a 00:33:41.791 LIB libspdk_event_scheduler.a 00:33:41.791 LIB libspdk_event_vmd.a 00:33:41.791 LIB libspdk_event_iobuf.a 00:33:41.791 CC module/event/subsystems/accel/accel.o 00:33:42.050 LIB libspdk_event_accel.a 00:33:42.050 CC module/event/subsystems/bdev/bdev.o 00:33:42.309 LIB libspdk_event_bdev.a 00:33:42.567 CC module/event/subsystems/nbd/nbd.o 00:33:42.567 CC module/event/subsystems/scsi/scsi.o 00:33:42.567 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:33:42.567 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:33:42.567 CC module/event/subsystems/ublk/ublk.o 00:33:42.567 LIB libspdk_event_nbd.a 00:33:42.567 LIB libspdk_event_ublk.a 00:33:42.567 LIB libspdk_event_scsi.a 00:33:42.826 LIB libspdk_event_nvmf.a 00:33:42.826 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:33:42.826 CC module/event/subsystems/iscsi/iscsi.o 00:33:42.826 LIB libspdk_event_vhost_scsi.a 00:33:43.085 LIB libspdk_event_iscsi.a 00:33:43.085 CXX app/trace/trace.o 00:33:43.085 CC app/trace_record/trace_record.o 00:33:43.085 CC app/nvmf_tgt/nvmf_main.o 00:33:43.085 CC app/spdk_tgt/spdk_tgt.o 00:33:43.085 CC examples/accel/perf/accel_perf.o 00:33:43.085 CC test/bdev/bdevio/bdevio.o 00:33:43.085 CC test/accel/dif/dif.o 00:33:43.085 CC test/blobfs/mkfs/mkfs.o 00:33:43.085 CC test/app/bdev_svc/bdev_svc.o 00:33:43.344 CC app/iscsi_tgt/iscsi_tgt.o 00:33:43.344 LINK spdk_trace_record 00:33:43.344 LINK nvmf_tgt 00:33:43.344 LINK bdev_svc 00:33:43.344 LINK mkfs 00:33:43.344 LINK spdk_tgt 00:33:43.344 LINK iscsi_tgt 00:33:43.603 LINK dif 00:33:43.603 LINK bdevio 00:33:43.603 LINK accel_perf 00:33:43.603 LINK spdk_trace 00:33:50.162 CC app/spdk_lspci/spdk_lspci.o 00:33:50.420 LINK spdk_lspci 00:34:00.386 CC app/spdk_nvme_perf/perf.o 00:34:06.944 LINK spdk_nvme_perf 00:34:45.721 CC examples/bdev/hello_world/hello_bdev.o 00:34:45.979 LINK hello_bdev 00:35:12.554 CC app/spdk_nvme_identify/identify.o 00:35:15.855 LINK spdk_nvme_identify 00:35:37.777 CC app/spdk_nvme_discover/discovery_aer.o 00:35:38.714 LINK spdk_nvme_discover 00:36:46.415 CC app/spdk_top/spdk_top.o 00:36:49.693 LINK spdk_top 00:36:54.964 TEST_HEADER include/spdk/config.h 00:36:54.964 CXX test/cpp_headers/accel.o 00:36:54.964 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:36:54.964 CC test/app/histogram_perf/histogram_perf.o 00:36:55.222 CXX test/cpp_headers/accel_module.o 00:36:55.222 LINK histogram_perf 00:36:56.596 LINK nvme_fuzz 00:36:56.596 CXX 
test/cpp_headers/assert.o 00:36:56.854 CC test/app/jsoncat/jsoncat.o 00:36:56.854 CC test/app/stub/stub.o 00:36:57.790 CXX test/cpp_headers/barrier.o 00:36:57.790 LINK jsoncat 00:36:58.048 LINK stub 00:36:58.979 CXX test/cpp_headers/base64.o 00:37:00.357 CXX test/cpp_headers/bdev.o 00:37:02.310 CXX test/cpp_headers/bdev_module.o 00:37:03.683 CXX test/cpp_headers/bdev_zone.o 00:37:05.055 CXX test/cpp_headers/bit_array.o 00:37:06.427 CXX test/cpp_headers/bit_pool.o 00:37:07.797 CXX test/cpp_headers/blob.o 00:37:09.694 CXX test/cpp_headers/blob_bdev.o 00:37:11.070 CXX test/cpp_headers/blobfs.o 00:37:11.329 CXX test/cpp_headers/blobfs_bdev.o 00:37:13.236 CC test/dma/test_dma/test_dma.o 00:37:13.236 CXX test/cpp_headers/conf.o 00:37:14.612 CXX test/cpp_headers/config.o 00:37:14.612 CXX test/cpp_headers/cpuset.o 00:37:15.176 LINK test_dma 00:37:15.176 CXX test/cpp_headers/crc16.o 00:37:15.741 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:37:16.308 CXX test/cpp_headers/crc32.o 00:37:16.566 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:37:17.501 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:37:17.760 CXX test/cpp_headers/crc64.o 00:37:19.133 CXX test/cpp_headers/dif.o 00:37:19.701 LINK vhost_fuzz 00:37:19.959 CXX test/cpp_headers/dma.o 00:37:21.334 CXX test/cpp_headers/endian.o 00:37:21.900 LINK iscsi_fuzz 00:37:22.159 CXX test/cpp_headers/env.o 00:37:23.553 CXX test/cpp_headers/env_dpdk.o 00:37:24.490 CXX test/cpp_headers/event.o 00:37:25.867 CXX test/cpp_headers/fd.o 00:37:26.800 CXX test/cpp_headers/fd_group.o 00:37:27.058 CC examples/bdev/bdevperf/bdevperf.o 00:37:27.989 CXX test/cpp_headers/file.o 00:37:29.407 CXX test/cpp_headers/ftl.o 00:37:30.778 LINK bdevperf 00:37:30.778 CXX test/cpp_headers/gpt_spec.o 00:37:32.682 CXX test/cpp_headers/hexlify.o 00:37:33.250 CXX test/cpp_headers/histogram_data.o 00:37:34.627 CXX test/cpp_headers/idxd.o 00:37:34.886 CXX test/cpp_headers/idxd_spec.o 00:37:36.263 CXX test/cpp_headers/init.o 00:37:37.200 CC examples/blob/hello_world/hello_blob.o 00:37:37.768 CXX test/cpp_headers/ioat.o 00:37:38.703 LINK hello_blob 00:37:39.270 CXX test/cpp_headers/ioat_spec.o 00:37:40.646 CXX test/cpp_headers/iscsi_spec.o 00:37:42.019 CXX test/cpp_headers/json.o 00:37:43.391 CXX test/cpp_headers/jsonrpc.o 00:37:44.785 CXX test/cpp_headers/likely.o 00:37:44.785 CC app/vhost/vhost.o 00:37:45.717 LINK vhost 00:37:45.717 CXX test/cpp_headers/log.o 00:37:47.091 CXX test/cpp_headers/lvol.o 00:37:48.466 CXX test/cpp_headers/memory.o 00:37:49.032 CXX test/cpp_headers/mmio.o 00:37:49.601 CC examples/ioat/perf/perf.o 00:37:50.169 CXX test/cpp_headers/nbd.o 00:37:50.169 LINK ioat_perf 00:37:50.170 CXX test/cpp_headers/notify.o 00:37:51.546 CXX test/cpp_headers/nvme.o 00:37:52.482 CXX test/cpp_headers/nvme_intel.o 00:37:53.419 CXX test/cpp_headers/nvme_ocssd.o 00:37:54.846 CXX test/cpp_headers/nvme_ocssd_spec.o 00:37:56.221 CXX test/cpp_headers/nvme_spec.o 00:37:57.158 CXX test/cpp_headers/nvme_zns.o 00:37:57.726 CXX test/cpp_headers/nvmf.o 00:37:57.726 CXX test/cpp_headers/nvmf_cmd.o 00:37:58.294 CC examples/ioat/verify/verify.o 00:37:59.230 CXX test/cpp_headers/nvmf_fc_spec.o 00:37:59.231 CC examples/nvme/hello_world/hello_world.o 00:37:59.490 LINK verify 00:38:00.424 CXX test/cpp_headers/nvmf_spec.o 00:38:00.682 LINK hello_world 00:38:00.941 CXX test/cpp_headers/nvmf_transport.o 00:38:02.318 CC examples/blob/cli/blobcli.o 00:38:02.576 CXX test/cpp_headers/opal.o 00:38:03.953 CXX test/cpp_headers/opal_spec.o 00:38:04.887 LINK blobcli 00:38:05.456 CXX test/cpp_headers/pci_ids.o 
00:38:06.827 CXX test/cpp_headers/pipe.o 00:38:08.725 CXX test/cpp_headers/queue.o 00:38:08.725 CXX test/cpp_headers/reduce.o 00:38:10.103 CXX test/cpp_headers/rpc.o 00:38:11.490 CXX test/cpp_headers/scheduler.o 00:38:13.391 CXX test/cpp_headers/scsi.o 00:38:15.315 CXX test/cpp_headers/scsi_spec.o 00:38:16.690 CXX test/cpp_headers/sock.o 00:38:18.066 CXX test/cpp_headers/stdinc.o 00:38:19.442 CXX test/cpp_headers/string.o 00:38:20.009 CC test/env/mem_callbacks/mem_callbacks.o 00:38:20.576 CXX test/cpp_headers/thread.o 00:38:21.144 CC examples/sock/hello_world/hello_sock.o 00:38:22.081 CXX test/cpp_headers/trace.o 00:38:22.647 LINK hello_sock 00:38:23.214 CXX test/cpp_headers/trace_parser.o 00:38:24.589 CXX test/cpp_headers/tree.o 00:38:24.589 LINK mem_callbacks 00:38:24.589 CXX test/cpp_headers/ublk.o 00:38:25.959 CXX test/cpp_headers/util.o 00:38:27.857 CXX test/cpp_headers/uuid.o 00:38:29.228 CXX test/cpp_headers/version.o 00:38:29.485 CXX test/cpp_headers/vfio_user_pci.o 00:38:30.858 CXX test/cpp_headers/vfio_user_spec.o 00:38:32.759 CXX test/cpp_headers/vhost.o 00:38:34.666 CXX test/cpp_headers/vmd.o 00:38:36.041 CC examples/vmd/lsvmd/lsvmd.o 00:38:36.041 CXX test/cpp_headers/xor.o 00:38:37.418 LINK lsvmd 00:38:37.985 CXX test/cpp_headers/zipf.o 00:38:40.562 CC examples/nvmf/nvmf/nvmf.o 00:38:43.171 LINK nvmf 00:39:15.241 CC test/env/vtophys/vtophys.o 00:39:15.241 CC examples/nvme/reconnect/reconnect.o 00:39:15.241 LINK vtophys 00:39:15.241 LINK reconnect 00:39:21.802 CC examples/vmd/led/led.o 00:39:23.720 LINK led 00:40:02.412 CC examples/util/zipf/zipf.o 00:40:02.412 LINK zipf 00:40:08.971 CC test/event/event_perf/event_perf.o 00:40:09.230 LINK event_perf 00:40:09.843 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:40:11.215 LINK env_dpdk_post_init 00:40:37.763 CC examples/thread/thread/thread_ex.o 00:40:37.763 CC test/event/reactor/reactor.o 00:40:37.763 CC examples/nvme/nvme_manage/nvme_manage.o 00:40:37.763 LINK reactor 00:40:37.763 LINK thread 00:40:38.331 CC test/event/reactor_perf/reactor_perf.o 00:40:38.898 LINK nvme_manage 00:40:39.157 LINK reactor_perf 00:40:44.462 CC test/event/app_repeat/app_repeat.o 00:40:44.718 LINK app_repeat 00:41:02.791 CC test/event/scheduler/scheduler.o 00:41:02.791 CC app/spdk_dd/spdk_dd.o 00:41:03.359 LINK scheduler 00:41:05.887 LINK spdk_dd 00:41:12.497 CC test/env/memory/memory_ut.o 00:41:17.764 CC test/env/pci/pci_ut.o 00:41:17.764 LINK memory_ut 00:41:18.332 LINK pci_ut 00:41:22.519 CC app/fio/nvme/fio_plugin.o 00:41:23.895 CC examples/nvme/arbitration/arbitration.o 00:41:24.461 CC app/fio/bdev/fio_plugin.o 00:41:24.719 LINK spdk_nvme 00:41:26.618 LINK arbitration 00:41:27.988 LINK spdk_bdev 00:41:46.083 CC examples/nvme/hotplug/hotplug.o 00:41:46.083 LINK hotplug 00:41:50.306 CC test/lvol/esnap/esnap.o 00:41:54.493 CC examples/idxd/perf/perf.o 00:41:56.533 LINK idxd_perf 00:42:11.414 CC examples/interrupt_tgt/interrupt_tgt.o 00:42:11.672 LINK interrupt_tgt 00:42:15.879 LINK esnap 00:42:42.452 CC examples/nvme/cmb_copy/cmb_copy.o 00:42:42.452 LINK cmb_copy 00:42:47.721 CC examples/nvme/abort/abort.o 00:42:51.004 LINK abort 00:43:03.208 CC test/nvme/aer/aer.o 00:43:04.588 CC test/nvme/reset/reset.o 00:43:04.846 LINK aer 00:43:06.739 LINK reset 00:43:38.798 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:43:38.798 LINK pmr_persistence 00:44:00.726 CC test/nvme/sgl/sgl.o 00:44:00.726 LINK sgl 00:44:07.314 CC test/nvme/e2edp/nvme_dp.o 00:44:07.314 CC test/rpc_client/rpc_client_test.o 00:44:07.571 CC 
test/thread/poller_perf/poller_perf.o 00:44:07.829 LINK nvme_dp 00:44:08.393 LINK rpc_client_test 00:44:08.958 LINK poller_perf 00:44:17.093 CC test/nvme/overhead/overhead.o 00:44:18.469 LINK overhead 00:44:18.469 CC test/nvme/err_injection/err_injection.o 00:44:19.846 LINK err_injection 00:44:20.104 CC test/nvme/startup/startup.o 00:44:21.481 LINK startup 00:44:48.046 CC test/nvme/reserve/reserve.o 00:44:48.046 LINK reserve 00:44:49.424 CC test/nvme/simple_copy/simple_copy.o 00:44:51.326 LINK simple_copy 00:44:57.913 CC test/thread/lock/spdk_lock.o 00:44:58.552 CC test/nvme/connect_stress/connect_stress.o 00:44:59.927 LINK connect_stress 00:45:03.217 LINK spdk_lock 00:45:11.321 CC test/nvme/boot_partition/boot_partition.o 00:45:12.257 LINK boot_partition 00:45:18.819 CC test/nvme/compliance/nvme_compliance.o 00:45:20.721 LINK nvme_compliance 00:45:24.010 CC test/nvme/fused_ordering/fused_ordering.o 00:45:24.300 CC test/nvme/doorbell_aers/doorbell_aers.o 00:45:25.674 LINK fused_ordering 00:45:25.674 LINK doorbell_aers 00:45:43.774 CC test/unit/include/spdk/histogram_data.h/histogram_ut.o 00:45:43.774 LINK histogram_ut 00:45:46.314 CC test/nvme/fdp/fdp.o 00:45:48.211 CC test/unit/lib/accel/accel.c/accel_ut.o 00:45:48.211 LINK fdp 00:45:52.397 CC test/unit/lib/bdev/bdev.c/bdev_ut.o 00:45:52.964 CC test/nvme/cuse/cuse.o 00:45:53.224 CC test/unit/lib/bdev/part.c/part_ut.o 00:45:55.160 CC test/unit/lib/bdev/scsi_nvme.c/scsi_nvme_ut.o 00:45:55.420 LINK accel_ut 00:45:56.354 LINK scsi_nvme_ut 00:45:58.888 LINK cuse 00:46:03.082 CC test/unit/lib/bdev/gpt/gpt.c/gpt_ut.o 00:46:04.983 LINK part_ut 00:46:04.983 LINK gpt_ut 00:46:10.273 CC test/unit/lib/bdev/vbdev_lvol.c/vbdev_lvol_ut.o 00:46:10.273 LINK bdev_ut 00:46:10.273 CC test/unit/lib/blob/blob_bdev.c/blob_bdev_ut.o 00:46:11.652 CC test/unit/lib/blob/blob.c/blob_ut.o 00:46:11.911 LINK blob_bdev_ut 00:46:12.170 CC test/unit/lib/blobfs/tree.c/tree_ut.o 00:46:12.429 LINK tree_ut 00:46:12.996 LINK vbdev_lvol_ut 00:46:13.255 CC test/unit/lib/dma/dma.c/dma_ut.o 00:46:14.633 LINK dma_ut 00:46:14.892 CC test/unit/lib/blobfs/blobfs_async_ut/blobfs_async_ut.o 00:46:15.150 CC test/unit/lib/blobfs/blobfs_sync_ut/blobfs_sync_ut.o 00:46:15.410 CC test/unit/lib/blobfs/blobfs_bdev.c/blobfs_bdev_ut.o 00:46:15.977 LINK blobfs_bdev_ut 00:46:17.354 CC test/unit/lib/event/app.c/app_ut.o 00:46:17.921 LINK blobfs_sync_ut 00:46:17.921 LINK blobfs_async_ut 00:46:17.921 CC test/unit/lib/event/reactor.c/reactor_ut.o 00:46:18.180 CC test/unit/lib/ioat/ioat.c/ioat_ut.o 00:46:18.180 CC test/unit/lib/bdev/mt/bdev.c/bdev_ut.o 00:46:18.437 CC test/unit/lib/bdev/raid/bdev_raid.c/bdev_raid_ut.o 00:46:18.437 LINK app_ut 00:46:19.370 LINK ioat_ut 00:46:20.308 LINK reactor_ut 00:46:22.840 CC test/unit/lib/iscsi/conn.c/conn_ut.o 00:46:23.777 LINK bdev_raid_ut 00:46:25.678 LINK conn_ut 00:46:26.242 CC test/unit/lib/json/json_parse.c/json_parse_ut.o 00:46:27.176 LINK blob_ut 00:46:28.111 LINK bdev_ut 00:46:28.685 CC test/unit/lib/bdev/bdev_zone.c/bdev_zone_ut.o 00:46:29.250 CC test/unit/lib/jsonrpc/jsonrpc_server.c/jsonrpc_server_ut.o 00:46:29.509 LINK bdev_zone_ut 00:46:30.083 LINK jsonrpc_server_ut 00:46:30.341 LINK json_parse_ut 00:46:30.341 CC test/unit/lib/log/log.c/log_ut.o 00:46:30.598 CC test/unit/lib/lvol/lvol.c/lvol_ut.o 00:46:30.857 CC test/unit/lib/bdev/raid/bdev_raid_sb.c/bdev_raid_sb_ut.o 00:46:31.422 LINK log_ut 00:46:31.989 LINK bdev_raid_sb_ut 00:46:32.556 CC test/unit/lib/iscsi/init_grp.c/init_grp_ut.o 00:46:33.123 CC test/unit/lib/iscsi/iscsi.c/iscsi_ut.o 
00:46:33.382 LINK init_grp_ut 00:46:34.317 CC test/unit/lib/iscsi/param.c/param_ut.o 00:46:34.884 LINK lvol_ut 00:46:34.884 CC test/unit/lib/iscsi/portal_grp.c/portal_grp_ut.o 00:46:35.141 LINK param_ut 00:46:36.074 LINK portal_grp_ut 00:46:36.640 CC test/unit/lib/iscsi/tgt_node.c/tgt_node_ut.o 00:46:36.640 CC test/unit/lib/bdev/raid/concat.c/concat_ut.o 00:46:37.575 CC test/unit/lib/notify/notify.c/notify_ut.o 00:46:37.575 LINK iscsi_ut 00:46:37.834 CC test/unit/lib/bdev/raid/raid1.c/raid1_ut.o 00:46:37.834 LINK concat_ut 00:46:38.094 LINK notify_ut 00:46:38.094 LINK tgt_node_ut 00:46:39.031 LINK raid1_ut 00:46:39.598 CC test/unit/lib/nvme/nvme.c/nvme_ut.o 00:46:39.857 CC test/unit/lib/bdev/raid/raid5f.c/raid5f_ut.o 00:46:40.425 CC test/unit/lib/nvme/nvme_ctrlr.c/nvme_ctrlr_ut.o 00:46:41.009 CC test/unit/lib/nvmf/tcp.c/tcp_ut.o 00:46:41.598 LINK raid5f_ut 00:46:42.975 CC test/unit/lib/scsi/dev.c/dev_ut.o 00:46:42.975 CC test/unit/lib/sock/sock.c/sock_ut.o 00:46:43.234 CC test/unit/lib/thread/thread.c/thread_ut.o 00:46:43.234 LINK dev_ut 00:46:43.493 CC test/unit/lib/nvme/nvme_ctrlr_cmd.c/nvme_ctrlr_cmd_ut.o 00:46:43.751 LINK nvme_ut 00:46:44.316 LINK sock_ut 00:46:44.575 CC test/unit/lib/scsi/lun.c/lun_ut.o 00:46:44.575 LINK tcp_ut 00:46:44.833 LINK nvme_ctrlr_ut 00:46:45.091 LINK nvme_ctrlr_cmd_ut 00:46:45.091 LINK thread_ut 00:46:45.091 LINK lun_ut 00:46:46.468 CC test/unit/lib/bdev/vbdev_zone_block.c/vbdev_zone_block_ut.o 00:46:46.727 CC test/unit/lib/bdev/nvme/bdev_nvme.c/bdev_nvme_ut.o 00:46:47.296 CC test/unit/lib/sock/posix.c/posix_ut.o 00:46:48.671 LINK vbdev_zone_block_ut 00:46:49.606 LINK posix_ut 00:46:50.170 CC test/unit/lib/scsi/scsi.c/scsi_ut.o 00:46:50.478 CC test/unit/lib/thread/iobuf.c/iobuf_ut.o 00:46:50.478 LINK scsi_ut 00:46:50.737 CC test/unit/lib/nvme/nvme_ctrlr_ocssd_cmd.c/nvme_ctrlr_ocssd_cmd_ut.o 00:46:51.302 LINK iobuf_ut 00:46:51.559 CC test/unit/lib/json/json_util.c/json_util_ut.o 00:46:51.816 CC test/unit/lib/json/json_write.c/json_write_ut.o 00:46:51.816 CC test/unit/lib/scsi/scsi_bdev.c/scsi_bdev_ut.o 00:46:52.383 LINK json_util_ut 00:46:52.641 CC test/unit/lib/scsi/scsi_pr.c/scsi_pr_ut.o 00:46:52.641 LINK scsi_bdev_ut 00:46:52.641 LINK nvme_ctrlr_ocssd_cmd_ut 00:46:52.900 CC test/unit/lib/util/base64.c/base64_ut.o 00:46:52.900 LINK json_write_ut 00:46:52.900 CC test/unit/lib/nvmf/ctrlr.c/ctrlr_ut.o 00:46:53.158 LINK base64_ut 00:46:53.158 LINK scsi_pr_ut 00:46:53.417 LINK bdev_nvme_ut 00:46:54.353 CC test/unit/lib/nvme/nvme_ns.c/nvme_ns_ut.o 00:46:54.353 CC test/unit/lib/env_dpdk/pci_event.c/pci_event_ut.o 00:46:54.919 CC test/unit/lib/util/bit_array.c/bit_array_ut.o 00:46:54.919 LINK pci_event_ut 00:46:54.919 CC test/unit/lib/init/subsystem.c/subsystem_ut.o 00:46:55.482 LINK bit_array_ut 00:46:55.482 LINK ctrlr_ut 00:46:55.749 LINK subsystem_ut 00:46:56.329 CC test/unit/lib/rpc/rpc.c/rpc_ut.o 00:46:56.587 LINK nvme_ns_ut 00:46:56.846 CC test/unit/lib/idxd/idxd_user.c/idxd_user_ut.o 00:46:57.104 CC test/unit/lib/idxd/idxd.c/idxd_ut.o 00:46:57.363 CC test/unit/lib/vhost/vhost.c/vhost_ut.o 00:46:57.363 LINK rpc_ut 00:46:57.621 LINK idxd_user_ut 00:46:58.557 CC test/unit/lib/util/cpuset.c/cpuset_ut.o 00:46:58.557 LINK idxd_ut 00:46:59.125 LINK cpuset_ut 00:47:00.063 CC test/unit/lib/rdma/common.c/common_ut.o 00:47:00.063 CC test/unit/lib/util/crc16.c/crc16_ut.o 00:47:00.063 CC test/unit/lib/nvme/nvme_ns_cmd.c/nvme_ns_cmd_ut.o 00:47:00.321 CC test/unit/lib/util/crc32_ieee.c/crc32_ieee_ut.o 00:47:00.321 LINK common_ut 00:47:00.321 LINK crc16_ut 00:47:00.321 LINK 
crc32_ieee_ut 00:47:00.321 CC test/unit/lib/ftl/ftl_l2p/ftl_l2p_ut.o 00:47:00.321 CC test/unit/lib/ftl/ftl_band.c/ftl_band_ut.o 00:47:00.580 LINK vhost_ut 00:47:00.580 CC test/unit/lib/ftl/ftl_io.c/ftl_io_ut.o 00:47:00.839 LINK ftl_l2p_ut 00:47:00.839 CC test/unit/lib/ftl/ftl_bitmap.c/ftl_bitmap_ut.o 00:47:00.839 CC test/unit/lib/util/crc32c.c/crc32c_ut.o 00:47:01.098 LINK ftl_io_ut 00:47:01.098 LINK crc32c_ut 00:47:01.098 CC test/unit/lib/nvmf/subsystem.c/subsystem_ut.o 00:47:01.098 LINK ftl_bitmap_ut 00:47:01.666 LINK ftl_band_ut 00:47:01.666 CC test/unit/lib/nvmf/ctrlr_discovery.c/ctrlr_discovery_ut.o 00:47:01.666 CC test/unit/lib/ftl/ftl_mempool.c/ftl_mempool_ut.o 00:47:02.235 LINK ftl_mempool_ut 00:47:02.494 CC test/unit/lib/util/crc64.c/crc64_ut.o 00:47:02.494 LINK nvme_ns_cmd_ut 00:47:02.752 LINK crc64_ut 00:47:03.011 CC test/unit/lib/ftl/ftl_mngt/ftl_mngt_ut.o 00:47:03.268 CC test/unit/lib/util/dif.c/dif_ut.o 00:47:03.527 CC test/unit/lib/util/iov.c/iov_ut.o 00:47:03.527 LINK subsystem_ut 00:47:03.527 LINK ctrlr_discovery_ut 00:47:03.786 LINK ftl_mngt_ut 00:47:03.786 LINK iov_ut 00:47:03.786 CC test/unit/lib/util/math.c/math_ut.o 00:47:03.786 LINK math_ut 00:47:04.353 CC test/unit/lib/nvme/nvme_ns_ocssd_cmd.c/nvme_ns_ocssd_cmd_ut.o 00:47:04.612 LINK dif_ut 00:47:05.176 CC test/unit/lib/ftl/ftl_sb/ftl_sb_ut.o 00:47:05.434 CC test/unit/lib/util/pipe.c/pipe_ut.o 00:47:06.001 CC test/unit/lib/util/string.c/string_ut.o 00:47:06.260 LINK pipe_ut 00:47:06.519 LINK string_ut 00:47:06.519 CC test/unit/lib/util/xor.c/xor_ut.o 00:47:06.777 LINK ftl_sb_ut 00:47:06.777 LINK xor_ut 00:47:07.344 CC test/unit/lib/ftl/ftl_layout_upgrade/ftl_layout_upgrade_ut.o 00:47:07.603 LINK nvme_ns_ocssd_cmd_ut 00:47:07.603 CC test/unit/lib/nvmf/ctrlr_bdev.c/ctrlr_bdev_ut.o 00:47:07.862 CC test/unit/lib/nvme/nvme_pcie.c/nvme_pcie_ut.o 00:47:07.862 CC test/unit/lib/nvme/nvme_poll_group.c/nvme_poll_group_ut.o 00:47:07.862 CC test/unit/lib/nvme/nvme_qpair.c/nvme_qpair_ut.o 00:47:08.121 LINK ctrlr_bdev_ut 00:47:08.121 CC test/unit/lib/nvmf/nvmf.c/nvmf_ut.o 00:47:08.121 LINK ftl_layout_upgrade_ut 00:47:08.380 CC test/unit/lib/nvmf/rdma.c/rdma_ut.o 00:47:09.034 LINK nvmf_ut 00:47:09.034 LINK nvme_poll_group_ut 00:47:09.034 CC test/unit/lib/nvme/nvme_quirks.c/nvme_quirks_ut.o 00:47:09.294 CC test/unit/lib/nvme/nvme_tcp.c/nvme_tcp_ut.o 00:47:09.294 LINK nvme_qpair_ut 00:47:09.294 LINK nvme_pcie_ut 00:47:10.228 LINK nvme_quirks_ut 00:47:10.228 LINK rdma_ut 00:47:11.162 CC test/unit/lib/nvme/nvme_transport.c/nvme_transport_ut.o 00:47:11.420 CC test/unit/lib/nvmf/transport.c/transport_ut.o 00:47:11.678 CC test/unit/lib/nvme/nvme_io_msg.c/nvme_io_msg_ut.o 00:47:11.937 CC test/unit/lib/nvme/nvme_pcie_common.c/nvme_pcie_common_ut.o 00:47:12.871 CC test/unit/lib/nvme/nvme_fabric.c/nvme_fabric_ut.o 00:47:12.871 LINK nvme_transport_ut 00:47:12.871 CC test/unit/lib/nvme/nvme_opal.c/nvme_opal_ut.o 00:47:12.871 LINK nvme_tcp_ut 00:47:12.871 CC test/unit/lib/nvme/nvme_rdma.c/nvme_rdma_ut.o 00:47:13.438 LINK nvme_io_msg_ut 00:47:13.698 LINK nvme_opal_ut 00:47:14.266 LINK nvme_fabric_ut 00:47:14.266 LINK nvme_pcie_common_ut 00:47:14.833 LINK transport_ut 00:47:15.090 CC test/unit/lib/nvme/nvme_cuse.c/nvme_cuse_ut.o 00:47:15.657 LINK nvme_rdma_ut 00:47:17.558 LINK nvme_cuse_ut 00:48:39.051 22:10:56 -- spdk/autopackage.sh@44 -- $ make -j10 clean 00:48:39.051 make[1]: Nothing to be done for 'clean'. 
00:48:40.422 22:11:00 -- spdk/autopackage.sh@46 -- $ timing_exit build_release 00:48:40.422 22:11:00 -- common/autotest_common.sh@728 -- $ xtrace_disable 00:48:40.422 22:11:00 -- common/autotest_common.sh@10 -- $ set +x 00:48:40.422 22:11:00 -- spdk/autopackage.sh@48 -- $ timing_finish 00:48:40.422 22:11:00 -- common/autotest_common.sh@734 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:48:40.422 22:11:00 -- common/autotest_common.sh@735 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']' 00:48:40.422 22:11:00 -- common/autotest_common.sh@737 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:48:40.422 + [[ -n 2352 ]] 00:48:40.422 + sudo kill 2352 00:48:40.430 [Pipeline] } 00:48:40.443 [Pipeline] // timeout 00:48:40.447 [Pipeline] } 00:48:40.458 [Pipeline] // stage 00:48:40.462 [Pipeline] } 00:48:40.473 [Pipeline] // catchError 00:48:40.479 [Pipeline] stage 00:48:40.481 [Pipeline] { (Stop VM) 00:48:40.491 [Pipeline] sh 00:48:40.766 + vagrant halt 00:48:44.049 ==> default: Halting domain... 00:48:49.340 [Pipeline] sh 00:48:49.617 + vagrant destroy -f 00:48:52.900 ==> default: Removing domain... 00:48:53.170 [Pipeline] sh 00:48:53.542 + mv output /var/jenkins/workspace/ubuntu24-vg-autotest/output 00:48:53.551 [Pipeline] } 00:48:53.567 [Pipeline] // stage 00:48:53.572 [Pipeline] } 00:48:53.586 [Pipeline] // dir 00:48:53.592 [Pipeline] } 00:48:53.607 [Pipeline] // wrap 00:48:53.614 [Pipeline] } 00:48:53.626 [Pipeline] // catchError 00:48:53.635 [Pipeline] stage 00:48:53.638 [Pipeline] { (Epilogue) 00:48:53.651 [Pipeline] sh 00:48:53.931 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:49:12.025 [Pipeline] catchError 00:49:12.028 [Pipeline] { 00:49:12.043 [Pipeline] sh 00:49:12.323 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:49:12.581 Artifacts sizes are good 00:49:12.590 [Pipeline] } 00:49:12.605 [Pipeline] // catchError 00:49:12.617 [Pipeline] archiveArtifacts 00:49:12.624 Archiving artifacts 00:49:12.970 [Pipeline] cleanWs 00:49:12.983 [WS-CLEANUP] Deleting project workspace... 00:49:12.983 [WS-CLEANUP] Deferred wipeout is used... 00:49:12.989 [WS-CLEANUP] done 00:49:12.991 [Pipeline] } 00:49:13.007 [Pipeline] // stage 00:49:13.013 [Pipeline] } 00:49:13.027 [Pipeline] // node 00:49:13.032 [Pipeline] End of Pipeline 00:49:13.069 Finished: SUCCESS